Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version: 10.6.11
- Fix Version: None
- Environment: Debian 11
Description
Hello,
We have a number of servers where we see considerably more memory usage than expected: on a 32GB instance this can be 5-8GB, on an 8GB server around 2GB.
Troubleshooting this, the cause seems to be related to the thread_cache_size option. Its default value is 256, but we have already lowered it to 20.
Over time, say 1-2 weeks, memory will slowly creep up and become much higher than we expect.
If we do a set global thread_cache_size=0, the memory is instantly released and we gain a few GB of RAM back.
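For reference, a minimal sketch of the release step described above (restoring our configured value of 20 afterwards is our assumption of the intended workflow, not something prescribed anywhere):

```sql
-- How many threads are currently sitting in the cache
SHOW GLOBAL STATUS LIKE 'Threads_cached';

-- Setting the cache size to 0 evicts the cached threads and
-- releases their memory immediately
SET GLOBAL thread_cache_size = 0;

-- Restore our configured value (20 is our setting, not a recommendation)
SET GLOBAL thread_cache_size = 20;
```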
In the documentation it states:
> Description: Number of threads server caches for re-use. If this limit hasn't been reached, when a client disconnects, its threads are put into the cache, and re-used where possible. In MariaDB 10.2.0 and newer the threads are freed after 5 minutes of idle time. Normally this setting has little effect, as the other aspects of the thread implementation are more important, but increasing it can help servers with high volumes of connections per second so that most can use a cached, rather than a new, thread. The cache miss rate can be calculated as the server status variables threads_created/connections. If the thread pool is active, thread_cache_size is ignored. If thread_cache_size is set to greater than the value of max_connections, thread_cache_size will be set to the max_connections value.
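Based on the formula quoted above, the miss rate can be computed directly on the server; a sketch, assuming the information_schema.GLOBAL_STATUS table is accessible:

```sql
-- Cache miss rate = threads_created / connections, per the documentation
SELECT
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Threads_created')
    /
    (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
      WHERE VARIABLE_NAME = 'Connections') AS thread_cache_miss_rate;
```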
Reading this, I would expect a thread (and the memory it holds) to be cached for up to 5 minutes and then be freed again. However, this appears not to be the case. The output of "show processlist" does not show any lingering connections, and yet the memory keeps slowly increasing over a couple of days.
Is there any way I can verify whether the threads are being properly freed, and if not, why? Or anything else I can do to see "what" this memory is that is being held?
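One thing we could try, in case it helps narrow this down: the performance_schema memory instrumentation (available since MariaDB 10.5, and it only counts allocations made after the instruments are enabled). A hedged sketch:

```sql
-- The server's own accounting of allocated memory
SHOW GLOBAL STATUS LIKE 'Memory_used';

-- Enable memory instrumentation at runtime (requires performance_schema = ON;
-- only tracks allocations made from this point on)
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES'
 WHERE NAME LIKE 'memory/%';

-- Largest memory consumers by instrumented event
SELECT EVENT_NAME, CURRENT_NUMBER_OF_BYTES_USED
  FROM performance_schema.memory_summary_global_by_event_name
 ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
 LIMIT 10;
```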
Note that we had this issue in other 10.6 releases as well, but only now discovered that the cause is the thread_cache_size option.