This commit is contained in:
H-Shay 2022-05-13 19:33:14 +00:00
parent 627c0b43fb
commit 25f1916b6f
6 changed files with 94 additions and 4 deletions

View file

@@ -3920,14 +3920,32 @@ with intermittent connections, at the cost of higher memory usage.
By default, this is zero, which means that sync responses are not cached
at all.</p>
</li>
<li>
<p><code>cache_autotuning</code> and its sub-options <code>max_cache_memory_usage</code>, <code>target_cache_memory_usage</code>, and
<code>min_cache_ttl</code> work together to maintain a balance between cache memory
usage and cache entry availability. You must be using <a href="https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu">jemalloc</a>
to utilize this option, and all three of the options must be specified for this feature to work.
A rough sketch of the resulting eviction behaviour follows the example configuration below.</p>
<ul>
<li><code>max_cache_memory_usage</code> sets a ceiling on how much memory the caches can use before entries begin to be continuously evicted.
They will continue to be evicted until the memory usage drops below the <code>target_cache_memory_usage</code>, set via
the option below, or until the <code>min_cache_ttl</code> is hit.</li>
<li><code>target_cache_memory_usage</code> sets a rough target for the desired memory usage of the caches.</li>
<li><code>min_cache_ttl</code> sets a limit under which newer cache entries are not evicted. It is only applied when
caches are actively being evicted, i.e. when <code>max_cache_memory_usage</code> has been exceeded. This protects hot caches
from being emptied while Synapse is evicting entries due to memory pressure.</li>
</ul>
</li>
</ul>
<p>Example configuration:</p>
<pre><code class="language-yaml">caches:
global_factor: 1.0
per_cache_factors:
get_users_who_share_room_with_user: 2.0
expire_caches: false
sync_response_cache_duration: 2m
cache_autotuning:
max_cache_memory_usage: 1024M
target_cache_memory_usage: 758M
min_cache_ttl: 5m
</code></pre>
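<p>To make the interplay between the three options concrete, the following is a simplified,
illustrative sketch only; it is <em>not</em> Synapse's actual eviction code, and the function and
parameter names are hypothetical:</p>
<pre><code class="language-python">import time

def evict_until_under_target(
    cache_entries,              # list of (inserted_at, key) pairs, oldest first
    current_memory_usage,       # bytes currently in use, as reported by jemalloc stats
    max_cache_memory_usage,     # ceiling, e.g. 1024M expressed in bytes
    target_cache_memory_usage,  # target, e.g. 758M expressed in bytes
    min_cache_ttl,              # seconds, e.g. 300 for 5m
    estimate_entry_size,        # callable giving an entry's approximate size in bytes
):
    """Illustrative model: evict the oldest entries while memory is over the
    ceiling, stopping once usage drops below the target or only entries
    younger than min_cache_ttl remain."""
    evicted = []
    if current_memory_usage &lt; max_cache_memory_usage:
        # Autotuned eviction only kicks in once the ceiling has been exceeded.
        return evicted

    now = time.time()
    while cache_entries and current_memory_usage &gt; target_cache_memory_usage:
        inserted_at, key = cache_entries[0]
        if now - inserted_at &lt; min_cache_ttl:
            # Protect "hot" entries newer than min_cache_ttl from eviction.
            break
        cache_entries.pop(0)
        current_memory_usage -= estimate_entry_size(key)
        evicted.append(key)
    return evicted
</code></pre>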
<h3 id="reloading-cache-factors"><a class="header" href="#reloading-cache-factors">Reloading cache factors</a></h3>
<p>The cache factors (i.e. <code>caches.global_factor</code> and <code>caches.per_cache_factors</code>) may be reloaded at any time by sending a
@@ -6792,6 +6810,24 @@ caches:
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, and `min_cache_ttl`. These flags work together to maintain a balance between
# cache memory usage and cache entry availability. You must be using jemalloc to utilize this option, and
# all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the caches can use before entries begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
# the option below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# `min_cache_ttl` sets a limit under which newer cache entries are not evicted. It is only applied when
# caches are actively being evicted, i.e. when `max_cache_memory_usage` has been exceeded. This protects hot
# caches from being emptied while Synapse is evicting entries due to memory pressure.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.

View file

@@ -784,6 +784,24 @@ caches:
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, and `min_cache_ttl`. These flags work together to maintain a balance between
# cache memory usage and cache entry availability. You must be using jemalloc to utilize this option, and
# all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the caches can use before entries begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
# the option below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# `min_cache_ttl` sets a limit under which newer cache entries are not evicted. It is only applied when
# caches are actively being evicted, i.e. when `max_cache_memory_usage` has been exceeded. This protects hot
# caches from being emptied while Synapse is evicting entries due to memory pressure.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View file

@@ -1049,14 +1049,32 @@ with intermittent connections, at the cost of higher memory usage.
By default, this is zero, which means that sync responses are not cached
at all.</p>
</li>
<li>
<p><code>cache_autotuning</code> and its sub-options <code>max_cache_memory_usage</code>, <code>target_cache_memory_usage</code>, and
<code>min_cache_ttl</code> work together to maintain a balance between cache memory
usage and cache entry availability. You must be using <a href="https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu">jemalloc</a>
to utilize this option, and all three of the options must be specified for this feature to work.
A quick way to check that jemalloc is loaded is sketched after the example configuration below.</p>
<ul>
<li><code>max_cache_memory_usage</code> sets a ceiling on how much memory the caches can use before entries begin to be continuously evicted.
They will continue to be evicted until the memory usage drops below the <code>target_cache_memory_usage</code>, set via
the option below, or until the <code>min_cache_ttl</code> is hit.</li>
<li><code>target_cache_memory_usage</code> sets a rough target for the desired memory usage of the caches.</li>
<li><code>min_cache_ttl</code> sets a limit under which newer cache entries are not evicted. It is only applied when
caches are actively being evicted, i.e. when <code>max_cache_memory_usage</code> has been exceeded. This protects hot caches
from being emptied while Synapse is evicting entries due to memory pressure.</li>
</ul>
</li>
</ul>
<p>Example configuration:</p>
<pre><code class="language-yaml">caches:
global_factor: 1.0
per_cache_factors:
get_users_who_share_room_with_user: 2.0
expire_caches: false
sync_response_cache_duration: 2m
cache_autotuning:
max_cache_memory_usage: 1024M
target_cache_memory_usage: 758M
min_cache_ttl: 5m
</code></pre>
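<p>Because autotuning requires <a href="https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu">jemalloc</a>,
it can be useful to confirm the allocator is actually loaded (for example via <code>LD_PRELOAD</code>).
The following is one possible check on Linux; it is an illustrative sketch, not something Synapse itself provides:</p>
<pre><code class="language-python">import ctypes

def jemalloc_is_loaded() -&gt; bool:
    """Return True if the running process exposes jemalloc's mallctl symbol."""
    try:
        # Passing None to CDLL opens the main program plus everything it has
        # already loaded (equivalent to dlopen(NULL)).
        handle = ctypes.CDLL(None)
        handle.mallctl  # jemalloc-specific symbol; raises AttributeError if absent
        return True
    except (AttributeError, OSError):
        return False

if __name__ == "__main__":
    print("jemalloc loaded:", jemalloc_is_loaded())
</code></pre>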
<h3 id="reloading-cache-factors"><a class="header" href="#reloading-cache-factors">Reloading cache factors</a></h3>
<p>The cache factors (i.e. <code>caches.global_factor</code> and <code>caches.per_cache_factors</code>) may be reloaded at any time by sending a

View file

@@ -940,6 +940,24 @@ caches:
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, and `min_cache_ttl`. These flags work together to maintain a balance between
# cache memory usage and cache entry availability. You must be using jemalloc to utilize this option, and
# all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the caches can use before entries begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
# the option below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# `min_cache_ttl` sets a limit under which newer cache entries are not evicted. It is only applied when
# caches are actively being evicted, i.e. when `max_cache_memory_usage` has been exceeded. This protects hot
# caches from being emptied while Synapse is evicting entries due to memory pressure.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.