Commit bcb6b243e9: Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes

119 changed files with 2286 additions and 425 deletions
CHANGES.md (15 changes)

@@ -1,3 +1,18 @@
+Synapse 1.22.0 (2020-10-27)
+===========================
+
+No significant changes.
+
+
+Synapse 1.22.0rc2 (2020-10-26)
+==============================
+
+Bugfixes
+--------
+
+- Fix bugs where ephemeral events were not sent to appservices. Broke in v1.22.0rc1. ([\#8648](https://github.com/matrix-org/synapse/issues/8648), [\#8656](https://github.com/matrix-org/synapse/issues/8656))
+- Fix `user_daily_visits` table to not have duplicate rows per user/device due to multiple user agents. Broke in v1.22.0rc1. ([\#8654](https://github.com/matrix-org/synapse/issues/8654))
+
 Synapse 1.22.0rc1 (2020-10-22)
 ==============================
 
changelog.d/8455.bugfix (new file)
@@ -0,0 +1 @@
+Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already.

changelog.d/8519.feature (new file)
@@ -0,0 +1 @@
+Add an admin API to delete a single file, or files that were not used for a defined time, from the server. Contributed by @dklimpel.

changelog.d/8539.feature (new file)
@@ -0,0 +1 @@
+Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel.

changelog.d/8580.bugfix (new file)
@@ -0,0 +1 @@
+Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information.

changelog.d/8582.doc (new file)
@@ -0,0 +1 @@
+Add instructions for Azure AD to the OpenID Connect documentation. Contributed by peterk.

changelog.d/8614.misc (new file)
@@ -0,0 +1 @@
+Don't instantiate `Requester` directly.

changelog.d/8615.misc (new file)
@@ -0,0 +1 @@
+Type hints for `RegistrationStore`.

changelog.d/8620.bugfix (new file)
@@ -0,0 +1 @@
+Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error.

changelog.d/8621.misc (new file)
@@ -0,0 +1 @@
+Remove unused OPTIONS handlers.

changelog.d/8627.bugfix (new file)
@@ -0,0 +1 @@
+Fix email notifications for invites without local state.

changelog.d/8628.bugfix (new file)
@@ -0,0 +1 @@
+Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500.

changelog.d/8632.bugfix (new file)
@@ -0,0 +1 @@
+Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded.

changelog.d/8634.misc (new file)
@@ -0,0 +1 @@
+Correct Synapse's PyPI package name in the OpenID Connect installation instructions.

changelog.d/8639.misc (new file)
@@ -0,0 +1 @@
+Fix typos and spelling errors in the code.

changelog.d/8640.misc (new file)
@@ -0,0 +1 @@
+Reduce number of OpenTracing spans started.

changelog.d/8643.bugfix (new file)
@@ -0,0 +1 @@
+Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0.

changelog.d/8644.misc (new file)
@@ -0,0 +1 @@
+Add field `total` to device list in admin API.

changelog.d/8647.feature (new file)
@@ -0,0 +1 @@
+Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel.

(deleted changelog file)
@@ -1 +0,0 @@
-Fix a bug introduced in v1.22.0rc1 which would cause ephemeral events to not be sent to appservices.

(deleted changelog file)
@@ -1 +0,0 @@
-Fix `user_daily_visits` to not have duplicate rows for UA. Broke in v1.22.0rc1.

(deleted changelog file)
@@ -1 +0,0 @@
-Fix a bug introduced in v1.22.0rc1 where presence events were not properly passed to application services.

changelog.d/8657.doc (new file)
@@ -0,0 +1 @@
+Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs.
debian/changelog (6 changes, vendored)

@@ -1,3 +1,9 @@
+matrix-synapse-py3 (1.22.0) stable; urgency=medium
+
+  * New synapse release 1.22.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 27 Oct 2020 12:07:12 +0000
+
 matrix-synapse-py3 (1.21.2) stable; urgency=medium
 
   [ Synapse Packaging team ]
@@ -17,67 +17,26 @@ It returns a JSON body like the following:
     {
         "event_reports": [
             {
-                "content": {
-                    "reason": "foo",
-                    "score": -100
-                },
                 "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
-                "event_json": {
-                    "auth_events": [
-                        "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
-                        "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
-                    ],
-                    "content": {
-                        "body": "matrix.org: This Week in Matrix",
-                        "format": "org.matrix.custom.html",
-                        "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
-                        "msgtype": "m.notice"
-                    },
-                    "depth": 546,
-                    "hashes": {
-                        "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
-                    },
-                    "origin": "matrix.org",
-                    "origin_server_ts": 1592291711430,
-                    "prev_events": [
-                        "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
-                    ],
-                    "prev_state": [],
-                    "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
-                    "sender": "@foobar:matrix.org",
-                    "signatures": {
-                        "matrix.org": {
-                            "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
-                        }
-                    },
-                    "type": "m.room.message",
-                    "unsigned": {
-                        "age_ts": 1592291711430,
-                    }
-                },
                 "id": 2,
                 "reason": "foo",
+                "score": -100,
                 "received_ts": 1570897107409,
-                "room_alias": "#alias1:matrix.org",
+                "canonical_alias": "#alias1:matrix.org",
                 "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
+                "name": "Matrix HQ",
                 "sender": "@foobar:matrix.org",
                 "user_id": "@foo:matrix.org"
             },
             {
-                "content": {
-                    "reason": "bar",
-                    "score": -100
-                },
                 "event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
-                "event_json": {
-                    // hidden items
-                    // see above
-                },
                 "id": 3,
                 "reason": "bar",
+                "score": -100,
                 "received_ts": 1598889612059,
-                "room_alias": "#alias2:matrix.org",
+                "canonical_alias": "#alias2:matrix.org",
                 "room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
+                "name": "Your room name here",
                 "sender": "@foobar:matrix.org",
                 "user_id": "@bar:matrix.org"
             }
@@ -113,17 +72,94 @@ The following fields are returned in the JSON response body:
 - ``id``: integer - ID of event report.
 - ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
 - ``room_id``: string - The ID of the room in which the event being reported is located.
+- ``name``: string - The name of the room.
 - ``event_id``: string - The ID of the reported event.
 - ``user_id``: string - This is the user who reported the event and wrote the reason.
 - ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
-- ``content``: object - Content of reported event.
-
-  - ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
-  - ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
-
+- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
 - ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
-- ``room_alias``: string - The alias of the room. ``null`` if the room does not have a canonical alias set.
-- ``event_json``: object - Details of the original event that was reported.
+- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
 - ``next_token``: integer - Indication for pagination. See above.
 - ``total``: integer - Total number of event reports related to the query (``user_id`` and ``room_id``).
+
+Show details of a specific event report
+=======================================
+
+This API returns information about a specific event report.
+
+The API is::
+
+    GET /_synapse/admin/v1/event_reports/<report_id>
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+It returns a JSON body like the following:
+
+.. code:: jsonc
+
+    {
+        "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
+        "event_json": {
+            "auth_events": [
+                "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
+                "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
+            ],
+            "content": {
+                "body": "matrix.org: This Week in Matrix",
+                "format": "org.matrix.custom.html",
+                "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
+                "msgtype": "m.notice"
+            },
+            "depth": 546,
+            "hashes": {
+                "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
+            },
+            "origin": "matrix.org",
+            "origin_server_ts": 1592291711430,
+            "prev_events": [
+                "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
+            ],
+            "prev_state": [],
+            "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
+            "sender": "@foobar:matrix.org",
+            "signatures": {
+                "matrix.org": {
+                    "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
+                }
+            },
+            "type": "m.room.message",
+            "unsigned": {
+                "age_ts": 1592291711430,
+            }
+        },
+        "id": <report_id>,
+        "reason": "foo",
+        "score": -100,
+        "received_ts": 1570897107409,
+        "canonical_alias": "#alias1:matrix.org",
+        "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
+        "name": "Matrix HQ",
+        "sender": "@foobar:matrix.org",
+        "user_id": "@foo:matrix.org"
+    }
+
+**URL parameters:**
+
+- ``report_id``: string - The ID of the event report.
+
+**Response**
+
+The following fields are returned in the JSON response body:
+
+- ``id``: integer - ID of event report.
+- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
+- ``room_id``: string - The ID of the room in which the event being reported is located.
+- ``name``: string - The name of the room.
+- ``event_id``: string - The ID of the reported event.
+- ``user_id``: string - This is the user who reported the event and wrote the reason.
+- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
+- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
+- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
+- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
+- ``event_json``: object - Details of the original event that was reported.
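Since the list endpoint above no longer returns `event_json`, a client that needs event details must call the new detail endpoint. A minimal Python sketch of that flow (not part of this diff; the homeserver URL and access token are placeholders):

```python
# Fetch one page of event reports, then look up the details of the first one.
import requests

BASE = "https://homeserver.example"  # placeholder homeserver
HEADERS = {"Authorization": "Bearer syt_..."}  # placeholder admin access token

# List endpoint: returns summaries only (no event_json after this change).
resp = requests.get(f"{BASE}/_synapse/admin/v1/event_reports", headers=HEADERS)
resp.raise_for_status()
reports = resp.json()["event_reports"]

if reports:
    report_id = reports[0]["id"]
    # Detail endpoint: includes the full event_json for a single report.
    detail = requests.get(
        f"{BASE}/_synapse/admin/v1/event_reports/{report_id}", headers=HEADERS
    )
    detail.raise_for_status()
    print(detail.json()["event_json"]["type"])
```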
@@ -100,3 +100,82 @@ Response:
     "num_quarantined": 10  # The number of media items successfully quarantined
 }
 ```
+
+# Delete local media
+This API deletes the *local* media from the disk of your own server.
+This includes any local thumbnails and copies of media downloaded from
+remote homeservers.
+This API will not affect media that has been uploaded to external
+media repositories (e.g. https://github.com/turt2live/matrix-media-repo/).
+See also [purge_remote_media.rst](purge_remote_media.rst).
+
+## Delete a specific local media
+Delete a specific `media_id`.
+
+Request:
+
+```
+DELETE /_synapse/admin/v1/media/<server_name>/<media_id>
+
+{}
+```
+
+URL Parameters
+
+* `server_name`: string - The name of your local server (e.g. `matrix.org`)
+* `media_id`: string - The ID of the media (e.g. `abcdefghijklmnopqrstuvwx`)
+
+Response:
+
+```json
+{
+    "deleted_media": [
+        "abcdefghijklmnopqrstuvwx"
+    ],
+    "total": 1
+}
+```
+
+The following fields are returned in the JSON response body:
+
+* `deleted_media`: an array of strings - List of deleted `media_id`
+* `total`: integer - Total number of deleted `media_id`
+
+## Delete local media by date or size
+
+Request:
+
+```
+POST /_synapse/admin/v1/media/<server_name>/delete?before_ts=<before_ts>
+
+{}
+```
+
+URL Parameters
+
+* `server_name`: string - The name of your local server (e.g. `matrix.org`).
+* `before_ts`: string representing a positive integer - Unix timestamp in milliseconds.
+  Files that were last used before this timestamp will be deleted. It is the timestamp
+  of last access, not the timestamp of creation.
+* `size_gt`: Optional - string representing a positive integer - Size of the media in bytes.
+  Files that are larger will be deleted. Defaults to `0`.
+* `keep_profiles`: Optional - string representing a boolean - Switch to also delete files
+  that are still used in image data (e.g. user profiles, room avatars).
+  If `false` these files will be deleted too. Defaults to `true`.
+
+Response:
+
+```json
+{
+    "deleted_media": [
+        "abcdefghijklmnopqrstuvwx",
+        "abcdefghijklmnopqrstuvwz"
+    ],
+    "total": 2
+}
+```
+
+The following fields are returned in the JSON response body:
+
+* `deleted_media`: an array of strings - List of deleted `media_id`
+* `total`: integer - Total number of deleted `media_id`
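A minimal Python sketch of driving the new date/size deletion endpoint (not part of this diff; the URL, server name, and token are placeholders):

```python
# Delete local media last accessed more than 30 days ago, keeping files that
# are still used as profile or room avatars.
import time
import requests

BASE = "https://homeserver.example"  # placeholder homeserver
HEADERS = {"Authorization": "Bearer syt_..."}  # placeholder admin access token

before_ts = int((time.time() - 30 * 24 * 3600) * 1000)  # cutoff in ms since epoch
resp = requests.post(
    f"{BASE}/_synapse/admin/v1/media/homeserver.example/delete",
    params={"before_ts": before_ts, "keep_profiles": "true"},
    headers=HEADERS,
    json={},
)
resp.raise_for_status()
print("deleted", resp.json()["total"], "media items")
```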
@@ -341,6 +341,89 @@ The following fields are returned in the JSON response body:
 - ``total`` - Number of rooms.
 
 
+List media of a user
+====================
+Gets a list of all local media that a specific ``user_id`` has created.
+The response is ordered by creation date descending and media ID descending.
+The newest media is on top.
+
+The API is::
+
+    GET /_synapse/admin/v1/users/<user_id>/media
+
+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see `README.rst <README.rst>`_.
+
+A response body like the following is returned:
+
+.. code:: json
+
+    {
+        "media": [
+            {
+                "created_ts": 100400,
+                "last_access_ts": null,
+                "media_id": "qXhyRzulkwLsNHTbpHreuEgo",
+                "media_length": 67,
+                "media_type": "image/png",
+                "quarantined_by": null,
+                "safe_from_quarantine": false,
+                "upload_name": "test1.png"
+            },
+            {
+                "created_ts": 200400,
+                "last_access_ts": null,
+                "media_id": "FHfiSnzoINDatrXHQIXBtahw",
+                "media_length": 67,
+                "media_type": "image/png",
+                "quarantined_by": null,
+                "safe_from_quarantine": false,
+                "upload_name": "test2.png"
+            }
+        ],
+        "next_token": 3,
+        "total": 2
+    }
+
+To paginate, check for ``next_token`` and if present, call the endpoint again
+with ``from`` set to the value of ``next_token``. This will return a new page.
+
+If the endpoint does not return a ``next_token`` then there are no more
+media to paginate through.
+
+**Parameters**
+
+The following parameters should be set in the URL:
+
+- ``user_id`` - string - fully qualified: for example, ``@user:server.com``.
+- ``limit``: string representing a positive integer - Optional, used for pagination,
+  denoting the maximum number of items to return in this call. Defaults to ``100``.
+- ``from``: string representing a positive integer - Optional, used for pagination,
+  denoting the offset in the returned results. This should be treated as an opaque value and
+  not explicitly set to anything other than the return value of ``next_token`` from a previous call.
+  Defaults to ``0``.
+
+**Response**
+
+The following fields are returned in the JSON response body:
+
+- ``media`` - An array of objects, each containing information about a media item.
+  Media objects contain the following fields:
+
+  - ``created_ts`` - integer - Timestamp (in ms) when the content was uploaded.
+  - ``last_access_ts`` - integer - Timestamp (in ms) when the content was last accessed.
+  - ``media_id`` - string - The ID used to refer to the media.
+  - ``media_length`` - integer - Length of the media in bytes.
+  - ``media_type`` - string - The MIME type of the media.
+  - ``quarantined_by`` - string - The user ID that initiated the quarantine request
+    for this media.
+  - ``safe_from_quarantine`` - bool - Whether this media is safe from quarantining.
+  - ``upload_name`` - string - The name the media was uploaded with.
+
+- ``next_token``: integer - Indication for pagination. See above.
+- ``total`` - integer - Total number of media.
+
 User devices
 ============
 
@@ -375,7 +458,8 @@ A response body like the following is returned:
       "last_seen_ts": 1474491775025,
       "user_id": "<user_id>"
     }
-  ]
+  ],
+  "total": 2
 }
 
 **Parameters**
@@ -400,6 +484,8 @@ The following fields are returned in the JSON response body:
   devices was last seen. (May be a few minutes out of date, for efficiency reasons).
 - ``user_id`` - Owner of device.
 
+- ``total`` - Total number of user's devices.
+
 Delete multiple devices
 ------------------
 Deletes the given devices for a specific ``user_id``, and invalidates
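A minimal Python sketch of the pagination contract described above (not part of this diff; `list_user_media` is a hypothetical helper, and the URL and token are placeholders):

```python
# Page through all media uploaded by a user by following next_token.
import requests

BASE = "https://homeserver.example"  # placeholder homeserver
HEADERS = {"Authorization": "Bearer syt_..."}  # placeholder admin access token

def list_user_media(user_id: str) -> list:
    media, from_token = [], 0
    while True:
        resp = requests.get(
            f"{BASE}/_synapse/admin/v1/users/{user_id}/media",
            params={"from": from_token, "limit": 100},
            headers=HEADERS,
        )
        resp.raise_for_status()
        body = resp.json()
        media.extend(body["media"])
        if "next_token" not in body:  # no more pages
            return media
        from_token = body["next_token"]

print(len(list_user_media("@user:homeserver.example")))
```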
@@ -37,7 +37,7 @@ as follows:
   provided by `matrix.org` so no further action is needed.
 
 * If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
-  install synapse[oidc]` to install the necessary dependencies.
+  install matrix-synapse[oidc]` to install the necessary dependencies.
 
 * For other installation mechanisms, see the documentation provided by the
   maintainer.
@@ -52,14 +52,39 @@ specific providers.
 
 Here are a few configs for providers that should work with Synapse.
 
+### Microsoft Azure Active Directory
+Azure AD can act as an OpenID Connect Provider. Register a new application under
+*App registrations* in the Azure AD management console. The RedirectURI for your
+application should point to your matrix server: `[synapse public baseurl]/_synapse/oidc/callback`
+
+Go to *Certificates & secrets* and register a new client secret. Make note of your
+Directory (tenant) ID as it will be used in the Azure links.
+Edit your Synapse config file and change the `oidc_config` section:
+
+```yaml
+oidc_config:
+   enabled: true
+   issuer: "https://login.microsoftonline.com/<tenant id>/v2.0"
+   client_id: "<client id>"
+   client_secret: "<client secret>"
+   scopes: ["openid", "profile"]
+   authorization_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize"
+   token_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token"
+   userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo"
+
+   user_mapping_provider:
+     config:
+       localpart_template: "{{ user.preferred_username.split('@')[0] }}"
+       display_name_template: "{{ user.name }}"
+```
+
 ### [Dex][dex-idp]
 
 [Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
 Although it is designed to help building a full-blown provider with an
 external database, it can be configured with static passwords in a config file.
 
-Follow the [Getting Started
-guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md)
+Follow the [Getting Started guide](https://dexidp.io/docs/getting-started/)
 to install Dex.
 
 Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
@@ -73,7 +98,7 @@ staticClients:
   name: 'Synapse'
 ```
 
-Run with `dex serve examples/config-dex.yaml`.
+Run with `dex serve examples/config-dev.yaml`.
 
 Synapse config:
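What the `localpart_template` above produces can be checked standalone; a sketch assuming `preferred_username` is an email-style UPN (the sample user is invented):

```python
# Render the Jinja2 localpart template the Azure AD section configures,
# mapping an Azure AD account to a Matrix localpart.
from jinja2 import Template

user = {"preferred_username": "alice@contoso.com", "name": "Alice Example"}
localpart = Template("{{ user.preferred_username.split('@')[0] }}").render(user=user)
print(localpart)  # -> "alice", i.e. @alice:<your server>
```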
@@ -1886,7 +1886,7 @@ sso:
 # and issued at ("iat") claims are validated if present.
 #
 # Note that this is a non-standard login type and client support is
-# expected to be non-existant.
+# expected to be non-existent.
 #
 # See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
 #
@@ -2402,7 +2402,7 @@ spam_checker:
 #
 # Options for the rules include:
 #
-#     user_id: Matches agaisnt the creator of the alias
+#     user_id: Matches against the creator of the alias
 #     room_id: Matches against the room ID being published
 #     alias: Matches against any current local or canonical aliases
 #            associated with the room
@@ -2448,7 +2448,7 @@ opentracing:
 # This is a list of regexes which are matched against the server_name of the
 # homeserver.
 #
-# By defult, it is empty, so no servers are matched.
+# By default, it is empty, so no servers are matched.
 #
 #homeserver_whitelist:
 #  - ".*"

@@ -59,7 +59,7 @@ root:
     # then write them to a file.
     #
     # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
-    # also need to update the configuation for the `twisted` logger above, in
+    # also need to update the configuration for the `twisted` logger above, in
     # this case.)
     #
     handlers: [buffer]
mypy.ini (2 changes)

@@ -17,6 +17,7 @@ files =
   synapse/federation,
   synapse/handlers/_base.py,
   synapse/handlers/account_data.py,
+  synapse/handlers/account_validity.py,
   synapse/handlers/appservice.py,
   synapse/handlers/auth.py,
   synapse/handlers/cas_handler.py,
@@ -57,6 +58,7 @@ files =
   synapse/spam_checker_api,
   synapse/state,
   synapse/storage/databases/main/events.py,
+  synapse/storage/databases/main/registration.py,
   synapse/storage/databases/main/stream.py,
   synapse/storage/databases/main/ui_auth.py,
   synapse/storage/database.py,
@@ -48,7 +48,7 @@ try:
 except ImportError:
     pass
 
-__version__ = "1.22.0rc1"
+__version__ = "1.22.0"
 
 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -184,9 +184,7 @@ class Auth:
         """
         try:
             ip_addr = self.hs.get_ip_from_request(request)
-            user_agent = request.requestHeaders.getRawHeaders(
-                b"User-Agent", default=[b""]
-            )[0].decode("ascii", "surrogateescape")
+            user_agent = request.get_user_agent("")
 
             access_token = self.get_access_token_from_request(request)
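A plausible shape of the `get_user_agent` helper this refactor switches to (illustrative only, not the code from this commit; see the #8632 changelog entry on invalid UTF-8 user agents):

```python
# Read the raw User-Agent header and decode it leniently, so undecodable
# bytes are preserved via surrogateescape instead of raising mid-request.
def get_user_agent(request, default: str) -> str:
    headers = request.requestHeaders.getRawHeaders(b"User-Agent")
    if not headers:
        return default
    return headers[0].decode("ascii", "surrogateescape")
```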
@@ -63,7 +63,7 @@ class JWTConfig(Config):
         # and issued at ("iat") claims are validated if present.
         #
         # Note that this is a non-standard login type and client support is
-        # expected to be non-existant.
+        # expected to be non-existent.
         #
         # See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
         #
@@ -105,7 +105,7 @@ root:
     # then write them to a file.
     #
     # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
-    # also need to update the configuation for the `twisted` logger above, in
+    # also need to update the configuration for the `twisted` logger above, in
     # this case.)
    #
     handlers: [buffer]
@@ -143,7 +143,7 @@ class RegistrationConfig(Config):
             RoomCreationPreset.TRUSTED_PRIVATE_CHAT,
         }
 
-        # Pull the creater/inviter from the configuration, this gets used to
+        # Pull the creator/inviter from the configuration, this gets used to
         # send invites for invite-only rooms.
         mxid_localpart = config.get("auto_join_mxid_localpart")
         self.auto_join_user_id = None
@@ -99,7 +99,7 @@ class RoomDirectoryConfig(Config):
         #
         # Options for the rules include:
         #
-        #     user_id: Matches agaisnt the creator of the alias
+        #     user_id: Matches against the creator of the alias
         #     room_id: Matches against the room ID being published
         #     alias: Matches against any current local or canonical aliases
         #            associated with the room
@@ -67,7 +67,7 @@ class TracerConfig(Config):
         # This is a list of regexes which are matched against the server_name of the
         # homeserver.
         #
-        # By defult, it is empty, so no servers are matched.
+        # By default, it is empty, so no servers are matched.
         #
         #homeserver_whitelist:
         #  - ".*"
@@ -149,7 +149,7 @@ class FederationPolicyForHTTPS:
         return SSLClientConnectionCreator(host, ssl_context, should_verify)
 
     def creatorForNetloc(self, hostname, port):
-        """Implements the IPolicyForHTTPS interace so that this can be passed
+        """Implements the IPolicyForHTTPS interface so that this can be passed
         directly to agents.
         """
         return self.get_options(hostname)
@@ -59,7 +59,7 @@ class DictProperty:
             #
             # To exclude the KeyError from the traceback, we explicitly
             # 'raise from e1.__context__' (which is better than 'raise from None',
-            # becuase that would omit any *earlier* exceptions).
+            # because that would omit any *earlier* exceptions).
             #
             raise AttributeError(
                 "'%s' has no '%s' property" % (type(instance), self.key)
@@ -180,7 +180,7 @@ def only_fields(dictionary, fields):
     in 'fields'.
 
     If there are no event fields specified then all fields are included.
-    The entries may include '.' charaters to indicate sub-fields.
+    The entries may include '.' characters to indicate sub-fields.
     So ['content.body'] will include the 'body' field of the 'content' object.
     A literal '.' character in a field name may be escaped using a '\'.
@@ -22,7 +22,7 @@ attestations have a validity period so need to be periodically renewed.
 If a user leaves (or gets kicked out of) a group, either side can still use
 their attestation to "prove" their membership, until the attestation expires.
 Therefore attestations shouldn't be relied on to prove membership in important
-cases, but can for less important situtations, e.g. showing a users membership
+cases, but can for less important situations, e.g. showing a users membership
 of groups on their profile, showing flairs, etc.
 
 An attestation is a signed blob of json that looks like:
@@ -113,7 +113,7 @@ class GroupsServerWorkerHandler:
                 entry = await self.room_list_handler.generate_room_entry(
                     room_id, len(joined_users), with_alias=False, allow_private=True
                 )
-                entry = dict(entry)  # so we don't change whats cached
+                entry = dict(entry)  # so we don't change what's cached
                 entry.pop("room_id", None)
 
                 room_entry["profile"] = entry
@@ -550,7 +550,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
                 group_id, room_id, is_public=is_public
             )
         else:
-            raise SynapseError(400, "Uknown config option")
+            raise SynapseError(400, "Unknown config option")
 
         return {}
@@ -18,19 +18,22 @@ import email.utils
 import logging
 from email.mime.multipart import MIMEMultipart
 from email.mime.text import MIMEText
-from typing import List
+from typing import TYPE_CHECKING, List
 
-from synapse.api.errors import StoreError
+from synapse.api.errors import StoreError, SynapseError
 from synapse.logging.context import make_deferred_yieldable
 from synapse.metrics.background_process_metrics import wrap_as_background_process
 from synapse.types import UserID
 from synapse.util import stringutils
 
+if TYPE_CHECKING:
+    from synapse.app.homeserver import HomeServer
+
 logger = logging.getLogger(__name__)
 
 
 class AccountValidityHandler:
-    def __init__(self, hs):
+    def __init__(self, hs: "HomeServer"):
         self.hs = hs
         self.config = hs.config
         self.store = self.hs.get_datastore()
@@ -67,7 +70,7 @@ class AccountValidityHandler:
             self.clock.looping_call(self._send_renewal_emails, 30 * 60 * 1000)
 
     @wrap_as_background_process("send_renewals")
-    async def _send_renewal_emails(self):
+    async def _send_renewal_emails(self) -> None:
         """Gets the list of users whose account is expiring in the amount of time
         configured in the ``renew_at`` parameter from the ``account_validity``
         configuration, and sends renewal emails to all of these users as long as they
@@ -81,11 +84,25 @@ class AccountValidityHandler:
                 user_id=user["user_id"], expiration_ts=user["expiration_ts_ms"]
             )
 
-    async def send_renewal_email_to_user(self, user_id: str):
+    async def send_renewal_email_to_user(self, user_id: str) -> None:
+        """
+        Send a renewal email for a specific user.
+
+        Args:
+            user_id: The user ID to send a renewal email for.
+
+        Raises:
+            SynapseError if the user is not set to renew.
+        """
         expiration_ts = await self.store.get_expiration_ts_for_user(user_id)
 
+        # If this user isn't set to be expired, raise an error.
+        if expiration_ts is None:
+            raise SynapseError(400, "User has no expiration time: %s" % (user_id,))
+
         await self._send_renewal_email(user_id, expiration_ts)
 
-    async def _send_renewal_email(self, user_id: str, expiration_ts: int):
+    async def _send_renewal_email(self, user_id: str, expiration_ts: int) -> None:
         """Sends out a renewal email to every email address attached to the given user
         with a unique link allowing them to renew their account.
@@ -88,7 +88,7 @@ class AdminHandler(BaseHandler):
 
         # We only try and fetch events for rooms the user has been in. If
         # they've been e.g. invited to a room without joining then we handle
-        # those seperately.
+        # those separately.
         rooms_user_has_been_in = await self.store.get_rooms_user_has_been_in(user_id)
 
         for index, room in enumerate(rooms):
@@ -226,7 +226,7 @@ class ExfiltrationWriter:
     """
 
     def finished(self):
-        """Called when all data has succesfully been exported and written.
+        """Called when all data has successfully been exported and written.
 
         This functions return value is passed to the caller of
         `export_user_data`.
@@ -14,7 +14,7 @@
 # limitations under the License.
 
 import logging
-from typing import Dict, List, Optional
+from typing import Dict, List, Optional, Union
 
 from prometheus_client import Counter
 
@@ -30,7 +30,10 @@ from synapse.metrics import (
     event_processing_loop_counter,
     event_processing_loop_room_count,
 )
-from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.metrics.background_process_metrics import (
+    run_as_background_process,
+    wrap_as_background_process,
+)
 from synapse.types import Collection, JsonDict, RoomStreamToken, UserID
 from synapse.util.metrics import Measure
 
@@ -53,7 +56,7 @@ class ApplicationServicesHandler:
         self.current_max = 0
         self.is_processing = False
 
-    async def notify_interested_services(self, max_token: RoomStreamToken):
+    def notify_interested_services(self, max_token: RoomStreamToken):
         """Notifies (pushes) all application services interested in this event.
 
         Pushing is done asynchronously, so this method won't block for any
@@ -72,6 +75,12 @@ class ApplicationServicesHandler:
         if self.is_processing:
             return
 
+        # We only start a new background process if necessary rather than
+        # optimistically (to cut down on overhead).
+        self._notify_interested_services(max_token)
+
+    @wrap_as_background_process("notify_interested_services")
+    async def _notify_interested_services(self, max_token: RoomStreamToken):
         with Measure(self.clock, "notify_interested_services"):
             self.is_processing = True
             try:
@@ -166,8 +175,11 @@ class ApplicationServicesHandler:
             finally:
                 self.is_processing = False
 
-    async def notify_interested_services_ephemeral(
-        self, stream_key: str, new_token: Optional[int], users: Collection[UserID] = [],
+    def notify_interested_services_ephemeral(
+        self,
+        stream_key: str,
+        new_token: Optional[int],
+        users: Collection[Union[str, UserID]] = [],
     ):
         """This is called by the notifier in the background
         when a ephemeral event handled by the homeserver.
@@ -183,13 +195,34 @@ class ApplicationServicesHandler:
             new_token: The latest stream token
             users: The user(s) involved with the event.
         """
+        if not self.notify_appservices:
+            return
+
+        if stream_key not in ("typing_key", "receipt_key", "presence_key"):
+            return
+
         services = [
             service
             for service in self.store.get_app_services()
             if service.supports_ephemeral
         ]
-        if not services or not self.notify_appservices:
+        if not services:
             return
 
+        # We only start a new background process if necessary rather than
+        # optimistically (to cut down on overhead).
+        self._notify_interested_services_ephemeral(
+            services, stream_key, new_token, users
+        )
+
+    @wrap_as_background_process("notify_interested_services_ephemeral")
+    async def _notify_interested_services_ephemeral(
+        self,
+        services: List[ApplicationService],
+        stream_key: str,
+        new_token: Optional[int],
+        users: Collection[Union[str, UserID]],
+    ):
         logger.info("Checking interested services for %s" % (stream_key))
         with Measure(self.clock, "notify_interested_services_ephemeral"):
             for service in services:
@@ -237,14 +270,17 @@ class ApplicationServicesHandler:
         return receipts
 
     async def _handle_presence(
-        self, service: ApplicationService, users: Collection[UserID]
-    ) -> List[JsonDict]:
+        self, service: ApplicationService, users: Collection[Union[str, UserID]]
+    ):
         events = []  # type: List[JsonDict]
         presence_source = self.event_sources.sources["presence"]
         from_key = await self.store.get_type_stream_id_for_appservice(
             service, "presence"
         )
         for user in users:
+            if isinstance(user, str):
+                user = UserID.from_string(user)
+
             interested = await service.is_interested_in_presence(user, self.store)
             if not interested:
                 continue
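The pattern introduced here, a synchronous entry point that lazily kicks work into a background task and drops calls while a run is already in flight, in a standalone sketch (plain asyncio, not Synapse's `wrap_as_background_process`):

```python
import asyncio

class Notifier:
    def __init__(self) -> None:
        self.is_processing = False

    def notify(self, token: int) -> None:
        if self.is_processing:
            return  # the in-flight run will pick up newer tokens
        # Only start a new background task when necessary, rather than
        # optimistically, to cut down on overhead.
        asyncio.create_task(self._process(token))

    async def _process(self, token: int) -> None:
        self.is_processing = True
        try:
            await asyncio.sleep(0)  # stand-in for pushing events to services
        finally:
            self.is_processing = False
```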
@@ -470,9 +470,7 @@ class AuthHandler(BaseHandler):
         # authentication flow.
         await self.store.set_ui_auth_clientdict(sid, clientdict)
 
-        user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
-            0
-        ].decode("ascii", "surrogateescape")
+        user_agent = request.get_user_agent("")
 
         await self.store.add_user_agent_ip_to_ui_auth_session(
             session.session_id, user_agent, clientip
@@ -692,7 +690,7 @@ class AuthHandler(BaseHandler):
         Creates a new access token for the user with the given user ID.
 
         The user is assumed to have been authenticated by some other
-        machanism (e.g. CAS), and the user_id converted to the canonical case.
+        mechanism (e.g. CAS), and the user_id converted to the canonical case.
 
         The device will be recorded in the table if it is not there already.
@@ -212,9 +212,7 @@ class CasHandler:
         else:
             if not registered_user_id:
                 # Pull out the user-agent and IP from the request.
-                user_agent = request.requestHeaders.getRawHeaders(
-                    b"User-Agent", default=[b""]
-                )[0].decode("ascii", "surrogateescape")
+                user_agent = request.get_user_agent("")
                 ip_address = self.hs.get_ip_from_request(request)
 
                 registered_user_id = await self._registration_handler.register_user(
@@ -129,6 +129,11 @@ class E2eKeysHandler:
                 if user_id in local_query:
                     results[user_id] = keys
 
+        # Get cached cross-signing keys
+        cross_signing_keys = await self.get_cross_signing_keys_from_cache(
+            device_keys_query, from_user_id
+        )
+
         # Now attempt to get any remote devices from our local cache.
         remote_queries_not_in_cache = {}
         if remote_queries:
@@ -155,16 +160,28 @@ class E2eKeysHandler:
                         unsigned["device_display_name"] = device_display_name
                     user_devices[device_id] = result
 
+                # check for missing cross-signing keys.
+                for user_id in remote_queries.keys():
+                    cached_cross_master = user_id in cross_signing_keys["master_keys"]
+                    cached_cross_selfsigning = (
+                        user_id in cross_signing_keys["self_signing_keys"]
+                    )
+
+                    # check if we are missing only one of cross-signing master or
+                    # self-signing key, but the other one is cached.
+                    # as we need both, this will issue a federation request.
+                    # if we don't have any of the keys, either the user doesn't have
+                    # cross-signing set up, or the cached device list
+                    # is not (yet) updated.
+                    if cached_cross_master ^ cached_cross_selfsigning:
+                        user_ids_not_in_cache.add(user_id)
+
+        # add those users to the list to fetch over federation.
         for user_id in user_ids_not_in_cache:
             domain = get_domain_from_id(user_id)
             r = remote_queries_not_in_cache.setdefault(domain, {})
             r[user_id] = remote_queries[user_id]
 
-        # Get cached cross-signing keys
-        cross_signing_keys = await self.get_cross_signing_keys_from_cache(
-            device_keys_query, from_user_id
-        )
-
         # Now fetch any devices that we don't have in our cache
         @trace
         async def do_remote_query(destination):
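The XOR check above in isolation: a federation fetch is warranted only when exactly one of the two cross-signing keys is cached, since both are needed together. A standalone sketch:

```python
# Decide whether a federation request is needed for a user's cross-signing keys.
def needs_federation_fetch(has_master: bool, has_self_signing: bool) -> bool:
    return has_master ^ has_self_signing  # XOR: exactly one key cached

assert needs_federation_fetch(True, False)
assert needs_federation_fetch(False, True)
assert not needs_federation_fetch(True, True)    # both cached: nothing to do
assert not needs_federation_fetch(False, False)  # neither: no cross-signing, or stale device list
```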
@@ -112,7 +112,7 @@ class FederationHandler(BaseHandler):
     """Handles events that originated from federation.
         Responsible for:
         a) handling received Pdus before handing them on as Events to the rest
-        of the homeserver (including auth and state conflict resoultion)
+        of the homeserver (including auth and state conflict resolutions)
         b) converting events that were produced by local clients that may need
         to be sent to remote homeservers.
         c) doing the necessary dances to invite remote users and join remote
@@ -477,7 +477,7 @@ class FederationHandler(BaseHandler):
         # ----
         #
         # Update richvdh 2018/09/18: There are a number of problems with timing this
-        # request out agressively on the client side:
+        # request out aggressively on the client side:
         #
         # - it plays badly with the server-side rate-limiter, which starts tarpitting you
         #   if you send too many requests at once, so you end up with the server carefully
@@ -495,13 +495,13 @@ class FederationHandler(BaseHandler):
         #   we'll end up back here for the *next* PDU in the list, which exacerbates the
         #   problem.
         #
-        # - the agressive 10s timeout was introduced to deal with incoming federation
+        # - the aggressive 10s timeout was introduced to deal with incoming federation
         #   requests taking 8 hours to process. It's not entirely clear why that was going
         #   on; certainly there were other issues causing traffic storms which are now
         #   resolved, and I think in any case we may be more sensible about our locking
         #   now. We're *certainly* more sensible about our logging.
         #
-        # All that said: Let's try increasing the timout to 60s and see what happens.
+        # All that said: Let's try increasing the timeout to 60s and see what happens.
 
         try:
             missing_events = await self.federation_client.get_missing_events(
@@ -1120,7 +1120,7 @@ class FederationHandler(BaseHandler):
                 logger.info(str(e))
                 continue
             except RequestSendFailed as e:
-                logger.info("Falied to get backfill from %s because %s", dom, e)
+                logger.info("Failed to get backfill from %s because %s", dom, e)
                 continue
             except FederationDeniedError as e:
                 logger.info(e)
@@ -1545,7 +1545,7 @@ class FederationHandler(BaseHandler):
         #
         # The reasons we have the destination server rather than the origin
         # server send it are slightly mysterious: the origin server should have
-        # all the neccessary state once it gets the response to the send_join,
+        # all the necessary state once it gets the response to the send_join,
         # so it could send the event itself if it wanted to. It may be that
         # doing it this way reduces failure modes, or avoids certain attacks
         # where a new server selectively tells a subset of the federation that
@@ -1649,7 +1649,7 @@ class FederationHandler(BaseHandler):
         event.internal_metadata.outlier = True
         event.internal_metadata.out_of_band_membership = True
 
-        # Try the host that we succesfully called /make_leave/ on first for
+        # Try the host that we successfully called /make_leave/ on first for
         # the /send_leave/ request.
         host_list = list(target_hosts)
         try:
@ -17,7 +17,7 @@
|
||||||
import logging
|
import logging
|
||||||
|
|
||||||
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
|
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
|
||||||
from synapse.types import get_domain_from_id
|
from synapse.types import GroupID, get_domain_from_id
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
@@ -28,6 +28,9 @@ def _create_rerouter(func_name):
     """

     async def f(self, group_id, *args, **kwargs):
+        if not GroupID.is_valid(group_id):
+            raise SynapseError(400, "%s was not legal group ID" % (group_id,))
+
         if self.is_mine_id(group_id):
            return await getattr(self.groups_server_handler, func_name)(
                group_id, *args, **kwargs
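A minimal sketch of the guard this hunk adds (illustrative only; the group IDs below are made up). GroupID.is_valid() simply tries to parse the sigil/localpart/domain structure, so a malformed ID is now rejected with a 400 before any handler or federation lookup runs:

from synapse.types import GroupID

for candidate in ("+friends:example.org", "not-a-group-id"):
    # Prints True for the well-formed ID, False for the malformed one.
    print(candidate, GroupID.is_valid(candidate))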
@@ -346,7 +349,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
             server_name=get_domain_from_id(group_id),
         )

-        # TODO: Check that the group is public and we're being added publically
+        # TODO: Check that the group is public and we're being added publicly
         is_publicised = content.get("publicise", False)

         token = await self.store.register_user_group_membership(

@@ -391,7 +394,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
             server_name=get_domain_from_id(group_id),
         )

-        # TODO: Check that the group is public and we're being added publically
+        # TODO: Check that the group is public and we're being added publicly
         is_publicised = content.get("publicise", False)

         token = await self.store.register_user_group_membership(

@@ -657,7 +657,7 @@ class EventCreationHandler:
             context: The event context.

         Returns:
-            The previous verion of the event is returned, if it is found in the
+            The previous version of the event is returned, if it is found in the
             event context. Otherwise, None is returned.
         """
         prev_state_ids = await context.get_prev_state_ids()

@@ -217,7 +217,7 @@ class OidcHandler:

         This is based on the requested scopes: if the scopes include
         ``openid``, the provider should give use an ID token containing the
-        user informations. If not, we should fetch them using the
+        user information. If not, we should fetch them using the
         ``access_token`` with the ``userinfo_endpoint``.
         """

@@ -426,7 +426,7 @@ class OidcHandler:
         return resp

     async def _fetch_userinfo(self, token: Token) -> UserInfo:
-        """Fetch user informations from the ``userinfo_endpoint``.
+        """Fetch user information from the ``userinfo_endpoint``.

         Args:
             token: the token given by the ``token_endpoint``.

@@ -695,9 +695,7 @@ class OidcHandler:
             return

         # Pull out the user-agent and IP from the request.
-        user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
-            0
-        ].decode("ascii", "surrogateescape")
+        user_agent = request.get_user_agent("")
         ip_address = self.hs.get_ip_from_request(request)

         # Call the mapper to register/login the user

@@ -756,7 +754,7 @@ class OidcHandler:
             Defaults to an hour.

         Returns:
-            A signed macaroon token with the session informations.
+            A signed macaroon token with the session information.
         """
         macaroon = pymacaroons.Macaroon(
             location=self._server_name, identifier="key", key=self._macaroon_secret_key,

@@ -802,7 +802,7 @@ class PresenceHandler(BasePresenceHandler):
             between the requested tokens due to the limit.

             The token returned can be used in a subsequent call to this
-            function to get further updatees.
+            function to get further updates.

             The updates are a list of 2-tuples of stream ID and the row data
         """

@@ -977,7 +977,7 @@ def should_notify(old_state, new_state):
         new_state.last_active_ts - old_state.last_active_ts
         > LAST_ACTIVE_GRANULARITY
     ):
-        # Only notify about last active bumps if we're not currently acive
+        # Only notify about last active bumps if we're not currently active
         if not new_state.currently_active:
             notify_reason_counter.labels("last_active_change_online").inc()
             return True
@@ -98,11 +98,18 @@ class ProfileHandler(BaseHandler):
             except RequestSendFailed as e:
                 raise SynapseError(502, "Failed to fetch profile") from e
             except HttpResponseException as e:
+                if e.code < 500 and e.code != 404:
+                    # Other codes are not allowed in c2s API
+                    logger.info(
+                        "Server replied with wrong response: %s %s", e.code, e.msg
+                    )
+
+                    raise SynapseError(502, "Failed to fetch profile")
                 raise e.to_synapse_error()

     async def get_profile_from_cache(self, user_id: str) -> JsonDict:
         """Get the profile information from our local cache. If the user is
-        ours then the profile information will always be corect. Otherwise,
+        ours then the profile information will always be correct. Otherwise,
         it may be out of date/missing.
         """
         target_user = UserID.from_string(user_id)
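A rough sketch (not Synapse's code) of the status mapping the hunk above implements: 404 ("no such profile") and 5xx federation responses are passed through via to_synapse_error(), while any other code is not legal over the client-server API and is masked as a 502:

def map_federation_profile_error(code: int) -> int:
    # Codes other than 404 and 5xx are not allowed in the c2s API, so hide them.
    if code < 500 and code != 404:
        return 502
    return code

assert map_federation_profile_error(403) == 502
assert map_federation_profile_error(404) == 404
assert map_federation_profile_error(503) == 503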
@@ -124,7 +131,7 @@ class ProfileHandler(BaseHandler):
         profile = await self.store.get_from_remote_profile_cache(user_id)
         return profile or {}

-    async def get_displayname(self, target_user: UserID) -> str:
+    async def get_displayname(self, target_user: UserID) -> Optional[str]:
         if self.hs.is_mine(target_user):
             try:
                 displayname = await self.store.get_profile_displayname(

@@ -211,7 +218,7 @@ class ProfileHandler(BaseHandler):

         await self._update_join_states(requester, target_user)

-    async def get_avatar_url(self, target_user: UserID) -> str:
+    async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
         if self.hs.is_mine(target_user):
             try:
                 avatar_url = await self.store.get_profile_avatar_url(

@@ -1268,7 +1268,7 @@ class RoomShutdownHandler:
         )

         # We now wait for the create room to come back in via replication so
-        # that we can assume that all the joins/invites have propogated before
+        # that we can assume that all the joins/invites have propagated before
         # we try and auto join below.
         await self._replication.wait_for_stream_position(
             self.hs.config.worker.events_shard_config.get_instance(new_room_id),

@@ -216,9 +216,7 @@ class SamlHandler:
             return

         # Pull out the user-agent and IP from the request.
-        user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
-            0
-        ].decode("ascii", "surrogateescape")
+        user_agent = request.get_user_agent("")
         ip_address = self.hs.get_ip_from_request(request)

         # Call the mapper to register/login the user

@@ -139,7 +139,7 @@ class SearchHandler(BaseHandler):
         # Filter to apply to results
         filter_dict = room_cat.get("filter", {})

-        # What to order results by (impacts whether pagination can be doen)
+        # What to order results by (impacts whether pagination can be done)
         order_by = room_cat.get("order_by", "rank")

         # Return the current state of the rooms?

@@ -32,7 +32,7 @@ class StateDeltasHandler:
         Returns:
             None if the field in the events either both match `public_value`
             or if neither do, i.e. there has been no change.
-            True if it didnt match `public_value` but now does
+            True if it didn't match `public_value` but now does
             False if it did match `public_value` but now doesn't
         """
         prev_event = None

@@ -755,7 +755,7 @@ class SyncHandler:
         """
         # TODO(mjark) Check if the state events were received by the server
         # after the previous sync, since we need to include those state
-        # updates even if they occured logically before the previous event.
+        # updates even if they occurred logically before the previous event.
         # TODO(mjark) Check for new redactions in the state events.

         with Measure(self.clock, "compute_state_delta"):

@@ -1883,7 +1883,7 @@ class SyncHandler:
         # members (as the client otherwise doesn't have enough info to form
         # the name itself).
         if sync_config.filter_collection.lazy_load_members() and (
-            # we recalulate the summary:
+            # we recalculate the summary:
            #   if there are membership changes in the timeline, or
            #   if membership has changed during a gappy sync, or
            #   if this is an initial sync.

@@ -371,7 +371,7 @@ class TypingWriterHandler(FollowerTypingHandler):
             between the requested tokens due to the limit.

             The token returned can be used in a subsequent call to this
-            function to get further updatees.
+            function to get further updates.

             The updates are a list of 2-tuples of stream ID and the row data
         """

@@ -31,7 +31,7 @@ class UserDirectoryHandler(StateDeltasHandler):
     N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY

     The user directory is filled with users who this server can see are joined to a
-    world_readable or publically joinable room. We keep a database table up to date
+    world_readable or publicly joinable room. We keep a database table up to date
     by streaming changes of the current state and recalculating whether users should
     be in the directory or not when necessary.
     """

@@ -172,7 +172,7 @@ class WellKnownResolver:
         had_valid_well_known = self._had_valid_well_known_cache.get(server_name, False)

         # We do this in two steps to differentiate between possibly transient
-        # errors (e.g. can't connect to host, 503 response) and more permenant
+        # errors (e.g. can't connect to host, 503 response) and more permanent
         # errors (such as getting a 404 response).
         response, body = await self._make_well_known_request(
             server_name, retry=had_valid_well_known

@@ -587,7 +587,7 @@ class MatrixFederationHttpClient:
         """
         Builds the Authorization headers for a federation request
         Args:
-            destination (bytes|None): The desination homeserver of the request.
+            destination (bytes|None): The destination homeserver of the request.
                 May be None if the destination is an identity server, in which case
                 destination_is must be non-None.
             method (bytes): The HTTP method of the request

@@ -640,7 +640,7 @@ class MatrixFederationHttpClient:
         backoff_on_404=False,
         try_trailing_slash_on_400=False,
     ):
-        """ Sends the specifed json data using PUT
+        """ Sends the specified json data using PUT

         Args:
             destination (str): The remote server to send the HTTP request

@@ -729,7 +729,7 @@ class MatrixFederationHttpClient:
         ignore_backoff=False,
         args={},
     ):
-        """ Sends the specifed json data using POST
+        """ Sends the specified json data using POST

         Args:
             destination (str): The remote server to send the HTTP request

@@ -109,7 +109,7 @@ in_flight_requests_db_sched_duration = Counter(
 # The set of all in flight requests, set[RequestMetrics]
 _in_flight_requests = set()

-# Protects the _in_flight_requests set from concurrent accesss
+# Protects the _in_flight_requests set from concurrent access
 _in_flight_requests_lock = threading.Lock()


@@ -182,7 +182,7 @@ class HttpServer:
         """ Register a callback that gets fired if we receive a http request
         with the given method for a path that matches the given regex.

-        If the regex contains groups these gets passed to the calback via
+        If the regex contains groups these gets passed to the callback via
         an unpacked tuple.

         Args:

@@ -241,7 +241,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):

     async def _async_render(self, request: Request):
         """Delegates to `_async_render_<METHOD>` methods, or returns a 400 if
-        no appropriate method exists. Can be overriden in sub classes for
+        no appropriate method exists. Can be overridden in sub classes for
         different routing.
         """
         # Treat HEAD requests as GET requests.

@@ -386,7 +386,7 @@ class JsonResource(DirectServeJsonResource):
     async def _async_render(self, request):
         callback, servlet_classname, group_dict = self._get_handler_for_request(request)

-        # Make sure we have an appopriate name for this handler in prometheus
+        # Make sure we have an appropriate name for this handler in prometheus
         # (rather than the default of JsonResource).
         request.request_metrics.name = servlet_classname

@@ -272,7 +272,6 @@ class RestServlet:
         on_PUT
         on_POST
         on_DELETE
-        on_OPTIONS

     Automatically handles turning CodeMessageExceptions thrown by these methods
     into the appropriate HTTP response.

@@ -283,7 +282,7 @@ class RestServlet:
         if hasattr(self, "PATTERNS"):
             patterns = self.PATTERNS

-            for method in ("GET", "PUT", "POST", "OPTIONS", "DELETE"):
+            for method in ("GET", "PUT", "POST", "DELETE"):
                 if hasattr(self, "on_%s" % (method,)):
                     servlet_classname = self.__class__.__name__
                     method_handler = getattr(self, "on_%s" % (method,))
@@ -109,8 +109,14 @@ class SynapseRequest(Request):
             method = self.method.decode("ascii")
         return method

-    def get_user_agent(self):
-        return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]
+    def get_user_agent(self, default: str) -> str:
+        """Return the last User-Agent header, or the given default.
+        """
+        user_agent = self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]
+        if user_agent is None:
+            return default
+
+        return user_agent.decode("ascii", "replace")

     def render(self, resrc):
         # this is called once a Resource has been found to serve the request; in our
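Illustrative call sites for the new signature (variable names here are made up): callers now pass the fallback in, instead of None-checking and decoding the raw header bytes themselves.

# Inside a request handler, given a SynapseRequest named `request`:
user_agent = request.get_user_agent("")       # "" when no User-Agent was sent
log_user_agent = request.get_user_agent("-")  # "-" suits access-log output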
@@ -161,7 +167,9 @@ class SynapseRequest(Request):
            yield
        except Exception:
            # this should already have been caught, and sent back to the client as a 500.
-           logger.exception("Asynchronous messge handler raised an uncaught exception")
+           logger.exception(
+               "Asynchronous message handler raised an uncaught exception"
+           )
        finally:
            # the request handler has finished its work and either sent the whole response
            # back, or handed over responsibility to a Producer.

@@ -274,11 +282,7 @@ class SynapseRequest(Request):
         # with maximum recursion trying to log errors about
         # the charset problem.
         # c.f. https://github.com/matrix-org/synapse/issues/3471
-        user_agent = self.get_user_agent()
-        if user_agent is not None:
-            user_agent = user_agent.decode("utf-8", "replace")
-        else:
-            user_agent = "-"
+        user_agent = self.get_user_agent("-")

         code = str(self.code)
         if not self.finished:
@@ -317,7 +317,7 @@ def ensure_active_span(message, ret=None):


 @contextlib.contextmanager
-def _noop_context_manager(*args, **kwargs):
+def noop_context_manager(*args, **kwargs):
     """Does exactly what it says on the tin"""
     yield

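The rename drops the leading underscore because other modules (background_process_metrics, below) now import the helper. A sketch of the pattern this enables, under the assumption that a caller wants a scope object whether or not tracing is active:

from synapse.logging.opentracing import noop_context_manager, start_active_span

def maybe_trace(name: str, enabled: bool):
    # Hypothetical helper: fall back to the shared no-op scope when disabled.
    return start_active_span(name) if enabled else noop_context_manager()

with maybe_trace("some-operation", enabled=False):
    pass  # runs without creating a span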
@@ -413,7 +413,7 @@ def start_active_span(
     """

     if opentracing is None:
-        return _noop_context_manager()
+        return noop_context_manager()

     return opentracing.tracer.start_active_span(
         operation_name,

@@ -428,7 +428,7 @@ def start_active_span(

 def start_active_span_follows_from(operation_name, contexts):
     if opentracing is None:
-        return _noop_context_manager()
+        return noop_context_manager()

     references = [opentracing.follows_from(context) for context in contexts]
     scope = start_active_span(operation_name, references=references)

@@ -459,7 +459,7 @@ def start_active_span_from_request(
     # Also, twisted uses byte arrays while opentracing expects strings.

     if opentracing is None:
-        return _noop_context_manager()
+        return noop_context_manager()

     header_dict = {
         k.decode(): v[0].decode() for k, v in request.requestHeaders.getAllRawHeaders()

@@ -497,7 +497,7 @@ def start_active_span_from_edu(
     """

     if opentracing is None:
-        return _noop_context_manager()
+        return noop_context_manager()

     carrier = json_decoder.decode(edu_content.get("context", "{}")).get(
         "opentracing", {}

@@ -24,7 +24,7 @@ from prometheus_client.core import REGISTRY, Counter, Gauge
 from twisted.internet import defer

 from synapse.logging.context import LoggingContext, PreserveLoggingContext
-from synapse.logging.opentracing import start_active_span
+from synapse.logging.opentracing import noop_context_manager, start_active_span

 if TYPE_CHECKING:
     import resource

@@ -167,7 +167,7 @@ class _BackgroundProcess:
         )


-def run_as_background_process(desc: str, func, *args, **kwargs):
+def run_as_background_process(desc: str, func, *args, bg_start_span=True, **kwargs):
     """Run the given function in its own logcontext, with resource metrics

     This should be used to wrap processes which are fired off to run in the

@@ -181,6 +181,9 @@ def run_as_background_process(desc: str, func, *args, **kwargs):
     Args:
         desc: a description for this background process type
         func: a function, which may return a Deferred or a coroutine
+        bg_start_span: Whether to start an opentracing span. Defaults to True.
+            Should only be disabled for processes that will not log to or tag
+            a span.
         args: positional args for func
         kwargs: keyword args for func

@@ -199,7 +202,10 @@ def run_as_background_process(desc: str, func, *args, **kwargs):
         with BackgroundProcessLoggingContext(desc) as context:
             context.request = "%s-%i" % (desc, count)
             try:
-                with start_active_span(desc, tags={"request_id": context.request}):
+                ctx = noop_context_manager()
+                if bg_start_span:
+                    ctx = start_active_span(desc, tags={"request_id": context.request})
+                with ctx:
                     result = func(*args, **kwargs)

                     if inspect.isawaitable(result):
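A minimal usage sketch of the new keyword (the task below is invented): a frequent, chatty background task can now opt out of span creation while keeping its logcontext and resource metrics.

from synapse.metrics.background_process_metrics import run_as_background_process

async def _flush_buffers() -> None:
    ...  # hypothetical periodic task, too noisy to be worth a span each run

run_as_background_process("flush-buffers", _flush_buffers, bg_start_span=False)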
@@ -266,7 +272,7 @@ class BackgroundProcessLoggingContext(LoggingContext):

         super().__exit__(type, value, traceback)

-        # The background process has finished. We explictly remove and manually
+        # The background process has finished. We explicitly remove and manually
         # update the metrics here so that if nothing is scraping metrics the set
         # doesn't infinitely grow.
         with _bg_metrics_lock:

@@ -40,7 +40,6 @@ from synapse.handlers.presence import format_user_presence_state
 from synapse.logging.context import PreserveLoggingContext
 from synapse.logging.utils import log_function
 from synapse.metrics import LaterGauge
-from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.streams.config import PaginationConfig
 from synapse.types import (
     Collection,

@@ -310,44 +309,37 @@ class Notifier:
         """

         # poke any interested application service.
-        run_as_background_process(
-            "_notify_app_services", self._notify_app_services, max_room_stream_token
-        )
-
-        run_as_background_process(
-            "_notify_pusher_pool", self._notify_pusher_pool, max_room_stream_token
-        )
+        self._notify_app_services(max_room_stream_token)
+        self._notify_pusher_pool(max_room_stream_token)

         if self.federation_sender:
             self.federation_sender.notify_new_events(max_room_stream_token)

-    async def _notify_app_services(self, max_room_stream_token: RoomStreamToken):
+    def _notify_app_services(self, max_room_stream_token: RoomStreamToken):
         try:
-            await self.appservice_handler.notify_interested_services(
-                max_room_stream_token
-            )
+            self.appservice_handler.notify_interested_services(max_room_stream_token)
         except Exception:
             logger.exception("Error notifying application services of event")

-    async def _notify_app_services_ephemeral(
+    def _notify_app_services_ephemeral(
         self,
         stream_key: str,
         new_token: Union[int, RoomStreamToken],
-        users: Collection[UserID] = [],
+        users: Collection[Union[str, UserID]] = [],
     ):
         try:
             stream_token = None
             if isinstance(new_token, int):
                 stream_token = new_token
-            await self.appservice_handler.notify_interested_services_ephemeral(
+            self.appservice_handler.notify_interested_services_ephemeral(
                 stream_key, stream_token, users
             )
         except Exception:
             logger.exception("Error notifying application services of event")

-    async def _notify_pusher_pool(self, max_room_stream_token: RoomStreamToken):
+    def _notify_pusher_pool(self, max_room_stream_token: RoomStreamToken):
         try:
-            await self._pusher_pool.on_new_notifications(max_room_stream_token)
+            self._pusher_pool.on_new_notifications(max_room_stream_token)
         except Exception:
             logger.exception("Error pusher pool of event")

@@ -384,16 +376,12 @@ class Notifier:
             self.notify_replication()

             # Notify appservices
-            run_as_background_process(
-                "_notify_app_services_ephemeral",
-                self._notify_app_services_ephemeral,
-                stream_key,
-                new_token,
-                users,
+            self._notify_app_services_ephemeral(
+                stream_key, new_token, users,
             )

     def on_new_replication_data(self) -> None:
-        """Used to inform replication listeners that something has happend
+        """Used to inform replication listeners that something has happened
         without waking up any of the normal user event streams"""
         self.notify_replication()

@@ -37,7 +37,7 @@ def list_with_base_rules(rawrules, use_new_defaults=False):
     modified_base_rules = {r["rule_id"]: r for r in rawrules if r["priority_class"] < 0}

     # Remove the modified base rules from the list, They'll be added back
-    # in the default postions in the list.
+    # in the default positions in the list.
     rawrules = [r for r in rawrules if r["priority_class"] >= 0]

     # shove the server default rules for each kind onto the end of each

@@ -390,12 +390,12 @@ class RulesForRoom:
                 continue

             # If a user has left a room we remove their push rule. If they
-            # joined then we readd it later in _update_rules_with_member_event_ids
+            # joined then we re-add it later in _update_rules_with_member_event_ids
             ret_rules_by_user.pop(user_id, None)
             missing_member_event_ids[user_id] = event_id

         if missing_member_event_ids:
-            # If we have some memebr events we haven't seen, look them up
+            # If we have some member events we haven't seen, look them up
             # and fetch push rules for them if appropriate.
             logger.debug("Found new member events %r", missing_member_event_ids)
             await self._update_rules_with_member_event_ids(

@@ -24,7 +24,7 @@ from typing import Iterable, List, TypeVar
 import bleach
 import jinja2

-from synapse.api.constants import EventTypes
+from synapse.api.constants import EventTypes, Membership
 from synapse.api.errors import StoreError
 from synapse.config.emailconfig import EmailSubjectConfig
 from synapse.logging.context import make_deferred_yieldable

@@ -317,9 +317,14 @@ class Mailer:
     async def get_room_vars(
         self, room_id, user_id, notifs, notif_events, room_state_ids
     ):
-        my_member_event_id = room_state_ids[("m.room.member", user_id)]
-        my_member_event = await self.store.get_event(my_member_event_id)
-        is_invite = my_member_event.content["membership"] == "invite"
+        # Check if one of the notifs is an invite event for the user.
+        is_invite = False
+        for n in notifs:
+            ev = notif_events[n["event_id"]]
+            if ev.type == EventTypes.Member and ev.state_key == user_id:
+                if ev.content.get("membership") == Membership.INVITE:
+                    is_invite = True
+                    break

         room_name = await calculate_room_name(self.store, room_state_ids, user_id)

@@ -461,16 +466,26 @@ class Mailer:
                 self.store, room_state_ids[room_id], user_id, fallback_to_members=False
             )

-            my_member_event_id = room_state_ids[room_id][("m.room.member", user_id)]
-            my_member_event = await self.store.get_event(my_member_event_id)
-            if my_member_event.content["membership"] == "invite":
-                inviter_member_event_id = room_state_ids[room_id][
-                    ("m.room.member", my_member_event.sender)
-                ]
-                inviter_member_event = await self.store.get_event(
-                    inviter_member_event_id
+            # See if one of the notifs is an invite event for the user
+            invite_event = None
+            for n in notifs_by_room[room_id]:
+                ev = notif_events[n["event_id"]]
+                if ev.type == EventTypes.Member and ev.state_key == user_id:
+                    if ev.content.get("membership") == Membership.INVITE:
+                        invite_event = ev
+                        break
+
+            if invite_event:
+                inviter_member_event_id = room_state_ids[room_id].get(
+                    ("m.room.member", invite_event.sender)
                 )
-                inviter_name = name_from_member_event(inviter_member_event)
+                inviter_name = invite_event.sender
+                if inviter_member_event_id:
+                    inviter_member_event = await self.store.get_event(
+                        inviter_member_event_id, allow_none=True
+                    )
+                    if inviter_member_event:
+                        inviter_name = name_from_member_event(inviter_member_event)

                 if room_name is None:
                     return self.email_subjects.invite_from_person % {

@@ -19,7 +19,10 @@ from typing import TYPE_CHECKING, Dict, Union

 from prometheus_client import Gauge

-from synapse.metrics.background_process_metrics import run_as_background_process
+from synapse.metrics.background_process_metrics import (
+    run_as_background_process,
+    wrap_as_background_process,
+)
 from synapse.push import PusherConfigException
 from synapse.push.emailpusher import EmailPusher
 from synapse.push.httppusher import HttpPusher

@@ -187,7 +190,7 @@ class PusherPool:
             )
             await self.remove_pusher(p["app_id"], p["pushkey"], p["user_name"])

-    async def on_new_notifications(self, max_token: RoomStreamToken):
+    def on_new_notifications(self, max_token: RoomStreamToken):
         if not self.pushers:
             # nothing to do here.
             return
@@ -201,6 +204,17 @@ class PusherPool:
             # Nothing to do
             return

+        # We only start a new background process if necessary rather than
+        # optimistically (to cut down on overhead).
+        self._on_new_notifications(max_token)
+
+    @wrap_as_background_process("on_new_notifications")
+    async def _on_new_notifications(self, max_token: RoomStreamToken):
+        # We just use the minimum stream ordering and ignore the vector clock
+        # component. This is safe to do as long as we *always* ignore the vector
+        # clock components.
+        max_stream_id = max_token.stream
+
         prev_stream_id = self._last_room_stream_id_seen
         self._last_room_stream_id_seen = max_stream_id

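The same split-method pattern, sketched on an invented class: a cheap synchronous entry point does the early-out checks, and only then hands off to a wrap_as_background_process-decorated coroutine, which runs in its own logcontext like any run_as_background_process task.

from synapse.metrics.background_process_metrics import wrap_as_background_process

class Widget:
    def on_event(self, token: int) -> None:
        if token % 100 != 0:
            return  # common case: bail out without starting a background process
        self._handle_event(token)

    @wrap_as_background_process("handle_event")
    async def _handle_event(self, token: int) -> None:
        ...  # the expensive work happens here, fire-and-forget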
@@ -166,7 +166,9 @@ class RedisSubscriber(txredisapi.SubscriberProtocol, AbstractConnection):
         Args:
             cmd (Command)
         """
-        run_as_background_process("send-cmd", self._async_send_command, cmd)
+        run_as_background_process(
+            "send-cmd", self._async_send_command, cmd, bg_start_span=False
+        )

     async def _async_send_command(self, cmd: Command):
         """Encode a replication command and send it over our outbound connection"""

@@ -31,7 +31,10 @@ from synapse.rest.admin.devices import (
     DeviceRestServlet,
     DevicesRestServlet,
 )
-from synapse.rest.admin.event_reports import EventReportsRestServlet
+from synapse.rest.admin.event_reports import (
+    EventReportDetailRestServlet,
+    EventReportsRestServlet,
+)
 from synapse.rest.admin.groups import DeleteGroupAdminRestServlet
 from synapse.rest.admin.media import ListMediaInRoom, register_servlets_for_media_repo
 from synapse.rest.admin.purge_room_servlet import PurgeRoomServlet

@@ -50,6 +53,7 @@ from synapse.rest.admin.users import (
     ResetPasswordRestServlet,
     SearchUsersRestServlet,
     UserAdminServlet,
+    UserMediaRestServlet,
     UserMembershipRestServlet,
     UserRegisterServlet,
     UserRestServletV2,

@@ -215,6 +219,7 @@ def register_servlets(hs, http_server):
     SendServerNoticeServlet(hs).register(http_server)
     VersionServlet(hs).register(http_server)
     UserAdminServlet(hs).register(http_server)
+    UserMediaRestServlet(hs).register(http_server)
     UserMembershipRestServlet(hs).register(http_server)
     UserRestServletV2(hs).register(http_server)
     UsersRestServletV2(hs).register(http_server)

@@ -222,6 +227,7 @@ def register_servlets(hs, http_server):
     DevicesRestServlet(hs).register(http_server)
     DeleteDevicesRestServlet(hs).register(http_server)
     EventReportsRestServlet(hs).register(http_server)
+    EventReportDetailRestServlet(hs).register(http_server)


 def register_servlets_for_client_rest_resource(hs, http_server):

@@ -119,7 +119,7 @@ class DevicesRestServlet(RestServlet):
             raise NotFoundError("Unknown user")

         devices = await self.device_handler.get_devices_by_user(target_user.to_string())
-        return 200, {"devices": devices}
+        return 200, {"devices": devices, "total": len(devices)}


 class DeleteDevicesRestServlet(RestServlet):

@@ -15,7 +15,7 @@

 import logging

-from synapse.api.errors import Codes, SynapseError
+from synapse.api.errors import Codes, NotFoundError, SynapseError
 from synapse.http.servlet import RestServlet, parse_integer, parse_string
 from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin

@@ -86,3 +86,47 @@ class EventReportsRestServlet(RestServlet):
             ret["next_token"] = start + len(event_reports)

         return 200, ret
+
+
+class EventReportDetailRestServlet(RestServlet):
+    """
+    Get a specific reported event that is known to the homeserver. Results are returned
+    in a dictionary containing report information.
+    The requester must have administrator access in Synapse.
+
+    GET /_synapse/admin/v1/event_reports/<report_id>
+    returns:
+        200 OK with details report if success otherwise an error.
+
+    Args:
+        The parameter `report_id` is the ID of the event report in the database.
+    Returns:
+        JSON blob of information about the event report
+    """
+
+    PATTERNS = admin_patterns("/event_reports/(?P<report_id>[^/]*)$")
+
+    def __init__(self, hs):
+        self.hs = hs
+        self.auth = hs.get_auth()
+        self.store = hs.get_datastore()
+
+    async def on_GET(self, request, report_id):
+        await assert_requester_is_admin(self.auth, request)
+
+        message = (
+            "The report_id parameter must be a string representing a positive integer."
+        )
+        try:
+            report_id = int(report_id)
+        except ValueError:
+            raise SynapseError(400, message, errcode=Codes.INVALID_PARAM)
+
+        if report_id < 0:
+            raise SynapseError(400, message, errcode=Codes.INVALID_PARAM)
+
+        ret = await self.store.get_event_report(report_id)
+        if not ret:
+            raise NotFoundError("Event report not found")
+
+        return 200, ret
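For reference, a hypothetical client-side call against the new detail endpoint (server URL, report ID and token are placeholders, not values from this commit):

import requests

resp = requests.get(
    "http://localhost:8008/_synapse/admin/v1/event_reports/2",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
# 200 with the report body; 404 for an unknown report_id;
# 400 if report_id is not the string form of a non-negative integer.
print(resp.status_code, resp.json())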
@@ -16,9 +16,10 @@

 import logging

-from synapse.api.errors import AuthError
-from synapse.http.servlet import RestServlet, parse_integer
+from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
+from synapse.http.servlet import RestServlet, parse_boolean, parse_integer
 from synapse.rest.admin._base import (
+    admin_patterns,
     assert_requester_is_admin,
     assert_user_is_admin,
     historical_admin_path_patterns,

@@ -150,6 +151,80 @@ class PurgeMediaCacheRestServlet(RestServlet):
         return 200, ret


+class DeleteMediaByID(RestServlet):
+    """Delete local media by a given ID. Removes it from this server.
+    """
+
+    PATTERNS = admin_patterns("/media/(?P<server_name>[^/]+)/(?P<media_id>[^/]+)")
+
+    def __init__(self, hs):
+        self.store = hs.get_datastore()
+        self.auth = hs.get_auth()
+        self.server_name = hs.hostname
+        self.media_repository = hs.get_media_repository()
+
+    async def on_DELETE(self, request, server_name: str, media_id: str):
+        await assert_requester_is_admin(self.auth, request)
+
+        if self.server_name != server_name:
+            raise SynapseError(400, "Can only delete local media")
+
+        if await self.store.get_local_media(media_id) is None:
+            raise NotFoundError("Unknown media")
+
+        logging.info("Deleting local media by ID: %s", media_id)
+
+        deleted_media, total = await self.media_repository.delete_local_media(media_id)
+        return 200, {"deleted_media": deleted_media, "total": total}
+
+
+class DeleteMediaByDateSize(RestServlet):
+    """Delete local media and local copies of remote media by
+    timestamp and size.
+    """
+
+    PATTERNS = admin_patterns("/media/(?P<server_name>[^/]+)/delete")
+
+    def __init__(self, hs):
+        self.store = hs.get_datastore()
+        self.auth = hs.get_auth()
+        self.server_name = hs.hostname
+        self.media_repository = hs.get_media_repository()
+
+    async def on_POST(self, request, server_name: str):
+        await assert_requester_is_admin(self.auth, request)
+
+        before_ts = parse_integer(request, "before_ts", required=True)
+        size_gt = parse_integer(request, "size_gt", default=0)
+        keep_profiles = parse_boolean(request, "keep_profiles", default=True)
+
+        if before_ts < 0:
+            raise SynapseError(
+                400,
+                "Query parameter before_ts must be a string representing a positive integer.",
+                errcode=Codes.INVALID_PARAM,
+            )
+        if size_gt < 0:
+            raise SynapseError(
+                400,
+                "Query parameter size_gt must be a string representing a positive integer.",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        if self.server_name != server_name:
+            raise SynapseError(400, "Can only delete local media")
+
+        logging.info(
+            "Deleting local media by timestamp: %s, size larger than: %s, keep profile media: %s"
+            % (before_ts, size_gt, keep_profiles)
+        )
+
+        deleted_media, total = await self.media_repository.delete_old_local_media(
+            before_ts, size_gt, keep_profiles
+        )
+        return 200, {"deleted_media": deleted_media, "total": total}
+
+
 def register_servlets_for_media_repo(hs, http_server):
     """
     Media repo specific APIs.
@@ -159,3 +234,5 @@ def register_servlets_for_media_repo(hs, http_server):
     QuarantineMediaByID(hs).register(http_server)
     QuarantineMediaByUser(hs).register(http_server)
     ListMediaInRoom(hs).register(http_server)
+    DeleteMediaByID(hs).register(http_server)
+    DeleteMediaByDateSize(hs).register(http_server)
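Hypothetical calls against the two new media-deletion endpoints (server name, media ID, timestamp and token below are placeholders, not values from this commit):

import requests

base = "http://localhost:8008/_synapse/admin/v1/media/example.org"
headers = {"Authorization": "Bearer <admin_access_token>"}

# Delete a single local media item by its ID.
requests.delete(base + "/abcdefg12345", headers=headers)

# Delete local media last used before a timestamp (ms), larger than 64 KiB,
# while keeping media referenced by profiles (the default).
requests.post(
    base + "/delete",
    params={"before_ts": 1603800000000, "size_gt": 65536, "keep_profiles": "true"},
    headers=headers,
)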
@ -16,6 +16,7 @@ import hashlib
|
||||||
import hmac
|
import hmac
|
||||||
import logging
|
import logging
|
||||||
from http import HTTPStatus
|
from http import HTTPStatus
|
||||||
|
from typing import Tuple
|
||||||
|
|
||||||
from synapse.api.constants import UserTypes
|
from synapse.api.constants import UserTypes
|
||||||
from synapse.api.errors import Codes, NotFoundError, SynapseError
|
from synapse.api.errors import Codes, NotFoundError, SynapseError
|
||||||
|
@ -27,13 +28,14 @@ from synapse.http.servlet import (
|
||||||
parse_json_object_from_request,
|
parse_json_object_from_request,
|
||||||
parse_string,
|
parse_string,
|
||||||
)
|
)
|
||||||
|
from synapse.http.site import SynapseRequest
|
||||||
from synapse.rest.admin._base import (
|
from synapse.rest.admin._base import (
|
||||||
admin_patterns,
|
admin_patterns,
|
||||||
assert_requester_is_admin,
|
assert_requester_is_admin,
|
||||||
assert_user_is_admin,
|
assert_user_is_admin,
|
||||||
historical_admin_path_patterns,
|
historical_admin_path_patterns,
|
||||||
)
|
)
|
||||||
from synapse.types import UserID
|
from synapse.types import JsonDict, UserID
|
||||||
|
|
||||||
logger = logging.getLogger(__name__)
|
logger = logging.getLogger(__name__)
|
||||||
|
|
||||||
|
@@ -702,9 +704,73 @@ class UserMembershipRestServlet(RestServlet):
         if not self.is_mine(UserID.from_string(user_id)):
             raise SynapseError(400, "Can only lookup local users")
 
-        room_ids = await self.store.get_rooms_for_user(user_id)
-        if not room_ids:
-            raise NotFoundError("User not found")
+        user = await self.store.get_user_by_id(user_id)
+        if user is None:
+            raise NotFoundError("Unknown user")
 
+        room_ids = await self.store.get_rooms_for_user(user_id)
         ret = {"joined_rooms": list(room_ids), "total": len(room_ids)}
         return 200, ret
+
+
+class UserMediaRestServlet(RestServlet):
+    """
+    Gets information about all uploaded local media for a specific `user_id`.
+
+    Example:
+        http://localhost:8008/_synapse/admin/v1/users/
+        @user:server/media
+
+    Args:
+        The parameters `from` and `limit` are required for pagination.
+        By default, a `limit` of 100 is used.
+    Returns:
+        A list of media and an integer representing the total number of
+        media that exist given for this user
+    """
+
+    PATTERNS = admin_patterns("/users/(?P<user_id>[^/]+)/media$")
+
+    def __init__(self, hs):
+        self.is_mine = hs.is_mine
+        self.auth = hs.get_auth()
+        self.store = hs.get_datastore()
+
+    async def on_GET(
+        self, request: SynapseRequest, user_id: str
+    ) -> Tuple[int, JsonDict]:
+        await assert_requester_is_admin(self.auth, request)
+
+        if not self.is_mine(UserID.from_string(user_id)):
+            raise SynapseError(400, "Can only lookup local users")
+
+        user = await self.store.get_user_by_id(user_id)
+        if user is None:
+            raise NotFoundError("Unknown user")
+
+        start = parse_integer(request, "from", default=0)
+        limit = parse_integer(request, "limit", default=100)
+
+        if start < 0:
+            raise SynapseError(
+                400,
+                "Query parameter from must be a string representing a positive integer.",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        if limit < 0:
+            raise SynapseError(
+                400,
+                "Query parameter limit must be a string representing a positive integer.",
+                errcode=Codes.INVALID_PARAM,
+            )
+
+        media, total = await self.store.get_local_media_by_user_paginate(
+            start, limit, user_id
+        )
+
+        ret = {"media": media, "total": total}
+        if (start + limit) < total:
+            ret["next_token"] = start + len(media)
+
+        return 200, ret
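For illustration, a client-side sketch of paging through this new endpoint with `from`/`limit` and `next_token` (the homeserver URL, admin token and user ID below are placeholders, and the `requests` library is used purely for brevity):

    import requests

    BASE_URL = "http://localhost:8008"    # placeholder homeserver URL
    ADMIN_TOKEN = "<admin access token>"  # placeholder credential

    def list_all_media(user_id):
        """Collect every media item for user_id by following next_token."""
        media, start = [], 0
        while True:
            resp = requests.get(
                f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/media",
                headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
                params={"from": start, "limit": 100},
            )
            resp.raise_for_status()
            body = resp.json()
            media.extend(body["media"])
            # next_token is only present while (start + limit) < total
            if "next_token" not in body:
                return media, body["total"]
            start = body["next_token"]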
@@ -67,9 +67,6 @@ class EventStreamRestServlet(RestServlet):
 
         return 200, chunk
 
-    def on_OPTIONS(self, request):
-        return 200, {}
-
 
 class EventRestServlet(RestServlet):
     PATTERNS = client_patterns("/events/(?P<event_id>[^/]*)$", v1=True)
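This hunk, and the similar ones below, drop per-servlet `on_OPTIONS` handlers. For context, a minimal sketch of how OPTIONS/CORS preflight requests can instead be answered once by a catch-all Twisted resource (an illustration only, not necessarily the exact replacement wiring in this diff):

    from twisted.web import resource

    class CatchAllOptionsResource(resource.Resource):
        """Answer OPTIONS for any path, so servlets need no on_OPTIONS."""

        isLeaf = True

        def render_OPTIONS(self, request):
            # 204 with the usual CORS preflight headers and an empty body
            request.setResponseCode(204)
            request.setHeader(b"Access-Control-Allow-Origin", b"*")
            request.setHeader(
                b"Access-Control-Allow-Methods", b"GET, POST, PUT, DELETE, OPTIONS"
            )
            request.setHeader(
                b"Access-Control-Allow-Headers",
                b"Origin, X-Requested-With, Content-Type, Accept, Authorization",
            )
            return b""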
@@ -114,9 +114,6 @@ class LoginRestServlet(RestServlet):
 
         return 200, {"flows": flows}
 
-    def on_OPTIONS(self, request: SynapseRequest):
-        return 200, {}
-
     async def on_POST(self, request: SynapseRequest):
         self._address_ratelimiter.ratelimit(request.getClientIP())
 
@@ -30,9 +30,6 @@ class LogoutRestServlet(RestServlet):
         self._auth_handler = hs.get_auth_handler()
         self._device_handler = hs.get_device_handler()
 
-    def on_OPTIONS(self, request):
-        return 200, {}
-
     async def on_POST(self, request):
         requester = await self.auth.get_user_by_req(request, allow_expired=True)
 
@@ -58,9 +55,6 @@ class LogoutAllRestServlet(RestServlet):
         self._auth_handler = hs.get_auth_handler()
         self._device_handler = hs.get_device_handler()
 
-    def on_OPTIONS(self, request):
-        return 200, {}
-
     async def on_POST(self, request):
         requester = await self.auth.get_user_by_req(request, allow_expired=True)
         user_id = requester.user.to_string()
@@ -86,9 +86,6 @@ class PresenceStatusRestServlet(RestServlet):
 
         return 200, {}
 
-    def on_OPTIONS(self, request):
-        return 200, {}
-
 
 def register_servlets(hs, http_server):
     PresenceStatusRestServlet(hs).register(http_server)
@@ -67,9 +67,6 @@ class ProfileDisplaynameRestServlet(RestServlet):
 
         return 200, {}
 
-    def on_OPTIONS(self, request, user_id):
-        return 200, {}
-
 
 class ProfileAvatarURLRestServlet(RestServlet):
     PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)/avatar_url", v1=True)
@@ -118,9 +115,6 @@ class ProfileAvatarURLRestServlet(RestServlet):
 
         return 200, {}
 
-    def on_OPTIONS(self, request, user_id):
-        return 200, {}
-
 
 class ProfileRestServlet(RestServlet):
     PATTERNS = client_patterns("/profile/(?P<user_id>[^/]*)", v1=True)
@@ -155,9 +155,6 @@ class PushRuleRestServlet(RestServlet):
         else:
             raise UnrecognizedRequestError()
 
-    def on_OPTIONS(self, request, path):
-        return 200, {}
-
     def notify_user(self, user_id):
         stream_id = self.store.get_max_push_rules_stream_id()
         self.notifier.on_new_event("push_rules_key", stream_id, users=[user_id])
@@ -60,9 +60,6 @@ class PushersRestServlet(RestServlet):
 
         return 200, {"pushers": filtered_pushers}
 
-    def on_OPTIONS(self, _):
-        return 200, {}
-
 
 class PushersSetRestServlet(RestServlet):
     PATTERNS = client_patterns("/pushers/set$", v1=True)
@@ -140,9 +137,6 @@ class PushersSetRestServlet(RestServlet):
 
         return 200, {}
 
-    def on_OPTIONS(self, _):
-        return 200, {}
-
 
 class PushersRemoveRestServlet(RestServlet):
     """
@@ -182,9 +176,6 @@ class PushersRemoveRestServlet(RestServlet):
         )
         return None
 
-    def on_OPTIONS(self, _):
-        return 200, {}
-
 
 def register_servlets(hs, http_server):
     PushersRestServlet(hs).register(http_server)
@@ -72,20 +72,6 @@ class RoomCreateRestServlet(TransactionRestServlet):
     def register(self, http_server):
         PATTERNS = "/createRoom"
         register_txn_path(self, PATTERNS, http_server)
-        # define CORS for all of /rooms in RoomCreateRestServlet for simplicity
-        http_server.register_paths(
-            "OPTIONS",
-            client_patterns("/rooms(?:/.*)?$", v1=True),
-            self.on_OPTIONS,
-            self.__class__.__name__,
-        )
-        # define CORS for /createRoom[/txnid]
-        http_server.register_paths(
-            "OPTIONS",
-            client_patterns("/createRoom(?:/.*)?$", v1=True),
-            self.on_OPTIONS,
-            self.__class__.__name__,
-        )
 
     def on_PUT(self, request, txn_id):
         set_tag("txn_id", txn_id)

@@ -104,9 +90,6 @@ class RoomCreateRestServlet(TransactionRestServlet):
         user_supplied_config = parse_json_object_from_request(request)
         return user_supplied_config
 
-    def on_OPTIONS(self, request):
-        return 200, {}
-
 
 # TODO: Needs unit testing for generic events
 class RoomStateEventRestServlet(TransactionRestServlet):
@@ -69,9 +69,6 @@ class VoipRestServlet(RestServlet):
             },
         )
 
-    def on_OPTIONS(self, request):
-        return 200, {}
-
 
 def register_servlets(hs, http_server):
     VoipRestServlet(hs).register(http_server)
@@ -268,9 +268,6 @@ class PasswordRestServlet(RestServlet):
 
         return 200, {}
 
-    def on_OPTIONS(self, _):
-        return 200, {}
-
 
 class DeactivateAccountRestServlet(RestServlet):
     PATTERNS = client_patterns("/account/deactivate$")
@@ -176,9 +176,6 @@ class AuthRestServlet(RestServlet):
         respond_with_html(request, 200, html)
         return None
 
-    def on_OPTIONS(self, _):
-        return 200, {}
-
 
 def register_servlets(hs, http_server):
     AuthRestServlet(hs).register(http_server)
@@ -642,9 +642,6 @@ class RegisterRestServlet(RestServlet):
 
         return 200, return_dict
 
-    def on_OPTIONS(self, _):
-        return 200, {}
-
     async def _do_appservice_registration(self, username, as_token, body):
         user_id = await self.registration_handler.appservice_register(
             username, as_token
@@ -69,6 +69,23 @@ class MediaFilePaths:
 
     local_media_thumbnail = _wrap_in_base_path(local_media_thumbnail_rel)
 
+    def local_media_thumbnail_dir(self, media_id: str) -> str:
+        """
+        Retrieve the local store path of thumbnails of a given media_id
+
+        Args:
+            media_id: The media ID to query.
+        Returns:
+            Path of local_thumbnails from media_id
+        """
+        return os.path.join(
+            self.base_path,
+            "local_thumbnails",
+            media_id[0:2],
+            media_id[2:4],
+            media_id[4:],
+        )
+
     def remote_media_filepath_rel(self, server_name, file_id):
         return os.path.join(
             "remote_content", server_name, file_id[0:2], file_id[2:4], file_id[4:]
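As a quick worked example of the sharding scheme above, thumbnails are nested under the first four characters of the media ID (all values below are made up):

    import os

    base_path = "/var/lib/synapse/media"  # placeholder media store path
    media_id = "abcdefghijklmn"           # placeholder media ID

    # Shard on media_id[0:2] / media_id[2:4] / media_id[4:]
    path = os.path.join(
        base_path, "local_thumbnails", media_id[0:2], media_id[2:4], media_id[4:]
    )
    # -> /var/lib/synapse/media/local_thumbnails/ab/cd/efghijklmn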
@@ -18,7 +18,7 @@ import errno
 import logging
 import os
 import shutil
-from typing import IO, Dict, Optional, Tuple
+from typing import IO, Dict, List, Optional, Tuple
 
 import twisted.internet.error
 import twisted.web.http
@@ -767,6 +767,76 @@ class MediaRepository:
 
         return {"deleted": deleted}
 
+    async def delete_local_media(self, media_id: str) -> Tuple[List[str], int]:
+        """
+        Delete the given local or remote media ID from this server
+
+        Args:
+            media_id: The media ID to delete.
+        Returns:
+            A tuple of (list of deleted media IDs, total deleted media IDs).
+        """
+        return await self._remove_local_media_from_disk([media_id])
+
+    async def delete_old_local_media(
+        self, before_ts: int, size_gt: int = 0, keep_profiles: bool = True,
+    ) -> Tuple[List[str], int]:
+        """
+        Delete local or remote media from this server by size and timestamp. Removes
+        media files, any thumbnails and cached URLs.
+
+        Args:
+            before_ts: Unix timestamp in ms.
+                Files that were last used before this timestamp will be deleted
+            size_gt: Size of the media in bytes. Files that are larger will be deleted
+            keep_profiles: Switch to delete also files that are still used in image data
+                (e.g user profile, room avatar)
+                If false these files will be deleted
+        Returns:
+            A tuple of (list of deleted media IDs, total deleted media IDs).
+        """
+        old_media = await self.store.get_local_media_before(
+            before_ts, size_gt, keep_profiles,
+        )
+        return await self._remove_local_media_from_disk(old_media)
+
+    async def _remove_local_media_from_disk(
+        self, media_ids: List[str]
+    ) -> Tuple[List[str], int]:
+        """
+        Delete local or remote media from this server. Removes media files,
+        any thumbnails and cached URLs.
+
+        Args:
+            media_ids: List of media_id to delete
+        Returns:
+            A tuple of (list of deleted media IDs, total deleted media IDs).
+        """
+        removed_media = []
+        for media_id in media_ids:
+            logger.info("Deleting media with ID '%s'", media_id)
+            full_path = self.filepaths.local_media_filepath(media_id)
+            try:
+                os.remove(full_path)
+            except OSError as e:
+                logger.warning("Failed to remove file: %r: %s", full_path, e)
+                if e.errno == errno.ENOENT:
+                    pass
+                else:
+                    continue
+
+            thumbnail_dir = self.filepaths.local_media_thumbnail_dir(media_id)
+            shutil.rmtree(thumbnail_dir, ignore_errors=True)
+
+            await self.store.delete_remote_media(self.server_name, media_id)
+
+            await self.store.delete_url_cache((media_id,))
+            await self.store.delete_url_cache_media((media_id,))
+
+            removed_media.append(media_id)
+
+        return removed_media, len(removed_media)
+
+
 class MediaRepositoryResource(Resource):
     """File uploading and downloading.
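As a usage sketch against the signatures above, a maintenance job could purge everything not used in the last 30 days while keeping profile images (`media_repo` is assumed to be the running MediaRepository instance; the helper name is hypothetical):

    import time

    async def purge_stale_media(media_repo):
        # Unix timestamp in milliseconds for "30 days ago"
        before_ts = int(time.time() * 1000) - 30 * 24 * 60 * 60 * 1000
        deleted_ids, total = await media_repo.delete_old_local_media(
            before_ts=before_ts,
            size_gt=0,           # no size floor: age alone decides
            keep_profiles=True,  # keep avatars and other profile images
        )
        return deleted_ids, total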
@@ -104,7 +104,7 @@ class ConsentServerNotices:
 
 
 def copy_with_str_subst(x: Any, substitutions: Any) -> Any:
-    """Deep-copy a structure, carrying out string substitions on any strings
+    """Deep-copy a structure, carrying out string substitutions on any strings
 
     Args:
         x (object): structure to be copied
@@ -547,7 +547,7 @@ class StateResolutionHandler:
         event_map:
             a dict from event_id to event, for any events that we happen to
             have in flight (eg, those currently being persisted). This will be
-            used as a starting point fof finding the state we need; any missing
+            used as a starting point for finding the state we need; any missing
             events will be requested via state_res_store.
 
             If None, all events will be fetched via state_res_store.
@@ -56,7 +56,7 @@ async def resolve_events_with_store(
         event_map:
             a dict from event_id to event, for any events that we happen to
             have in flight (eg, those currently being persisted). This will be
-            used as a starting point fof finding the state we need; any missing
+            used as a starting point for finding the state we need; any missing
             events will be requested via state_map_factory.
 
             If None, all events will be fetched via state_map_factory.
@@ -69,7 +69,7 @@ async def resolve_events_with_store(
         event_map:
             a dict from event_id to event, for any events that we happen to
             have in flight (eg, those currently being persisted). This will be
-            used as a starting point fof finding the state we need; any missing
+            used as a starting point for finding the state we need; any missing
             events will be requested via state_res_store.
 
             If None, all events will be fetched via state_res_store.
@@ -182,7 +182,7 @@ matrixLogin.passwordLogin = function() {
 };
 
 /*
- * The onLogin function gets called after a succesful login.
+ * The onLogin function gets called after a successful login.
  *
  * It is expected that implementations override this to be notified when the
  * login is complete. The response to the login call is provided as the single
Some files were not shown because too many files have changed in this diff.