[MDEV-23725] max_session_mem_used should kill session if query used more memory Created: 2020-09-12 Updated: 2021-09-30 Resolved: 2020-09-14 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Variables |
| Affects Version/s: | 10.2.21 |
| Fix Version/s: | N/A |
| Type: | Bug | Priority: | Major |
| Reporter: | Allen Lee (Inactive) | Assignee: | Oleksandr Byelkin |
| Resolution: | Not a Bug | Votes: | 0 |
| Labels: | None | ||
| Attachments: |
|
| Description |
|
Customer reported that with the max_session_mem_used setting, a query isn't killed automatically when its memory usage exceeds the configured value.
processlist
On the other hand, the query was not refused with an error at the start, either.
So, when is max_session_mem_used evaluated, and when should the query be terminated? |
| Comments |
| Comment by Sergei Golubchik [ 2020-09-14 ] | ||||||||||||
|
This is expected behavior. max_session_mem_used does kill queries, but the limit is approximate and killing is cooperative. It's normal if the query doesn't die at once. | ||||||||||||
| Comment by Allen Lee (Inactive) [ 2020-09-15 ] | ||||||||||||
|
serg, the customer wants more explanation on this. 1) My assumption is that the query releases memory and a later query gets killed; is that correct? My understanding of this behavior is that the planner would estimate the memory required to run the query and, if the estimate exceeded max_session_mem_used, would refuse to run the query at the start. If the estimated memory is below max_session_mem_used, the query should start running, but it should be killed once it reaches the max_session_mem_used value during execution. However, my test query wasn't killed even though it exceeded the max_session_mem_used value. | ||||||||||||
| Comment by Sergei Golubchik [ 2020-09-15 ] | ||||||||||||
|
1. Yes. And no, the required memory is not estimated in advance. The query starts execution, allocating memory as needed, and when it goes over max_session_mem_used the query gets killed. | ||||||||||||
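The check-on-allocation, kill-cooperatively behavior described above can be sketched as a minimal Python model. All names here (`Session`, `allocate`, `check_point`) are illustrative, not MariaDB internals: the point is that the limit is tested as memory is actually allocated, and only a kill flag is raised, so the query dies at its next check point rather than instantly.

```python
class QueryKilled(Exception):
    """Raised when the session notices its kill flag at a check point."""

class Session:
    def __init__(self, max_session_mem_used):
        self.max_session_mem_used = max_session_mem_used
        self.memory_used = 0
        self.killed = False

    def allocate(self, nbytes):
        # Memory is not estimated up front: the counter grows as the
        # query actually allocates, and the limit is checked each time.
        self.memory_used += nbytes
        if self.memory_used > self.max_session_mem_used:
            self.killed = True  # cooperative: only a flag is set here

    def check_point(self):
        # The query dies at the next point where it inspects the flag,
        # which is why it may not stop at the exact moment of the overrun.
        if self.killed:
            raise QueryKilled()

s = Session(max_session_mem_used=1000)
s.allocate(600)
s.check_point()          # still under the limit, query continues
s.allocate(600)          # total 1200 > 1000: kill flag is set
try:
    s.check_point()
except QueryKilled:
    print("query killed")
```

Because the kill is cooperative, there is always a window between exceeding the limit and the next check point; this is the "normal if the query doesn't die at once" behavior mentioned earlier.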
| Comment by Allen Lee (Inactive) [ 2020-09-17 ] | ||||||||||||
|
serg I've tested this again with JMeter and would like to share the result.
So, does this variable work as expected? | ||||||||||||
| Comment by Sergei Golubchik [ 2020-09-17 ] | ||||||||||||
|
Mostly, yes. The limit is checked every time a thread tries to allocate thread-local memory, and the query is killed when the limit is reached. There are three effects that can cause a query not to be killed:

1. Every session has a pre-allocated memory pool, and all allocations use that first. More memory is allocated from the global heap only when this local pool is exhausted. If a query can be executed completely within this pre-allocated pool, it will never trigger the max_session_mem_used check.

2. max_session_mem_used does not apply to global allocations. For example, with query_cache_size=2000000 that memory will technically be allocated within a session, but it will not belong to that session; all sessions will use the query cache, even after the first session ends. There are many other cases where memory allocated within a session is later used globally. Such allocations do not count towards max_session_mem_used and are not shown in the information_schema.processlist.memory_used column.

3. There are cases where memory is allocated within a session and is owned by the session, but is not correctly recognized as such. It will then not be counted towards max_session_mem_used and will not be shown in the information_schema.processlist.memory_used column.

Your case is probably a combination of all three. You see information_schema.processlist.memory_used larger than max_session_mem_used because the starting pre-allocated memory pool is already larger than max_session_mem_used. And no max_session_mem_used check is triggered because most allocations are either served from the pool or are global (taking a table from the global shared table cache, putting a new entry into a global lock-free hash, etc.), and the few truly local allocations that are not properly marked as such were small (~24 bytes in my tests). | ||||||||||||
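The first effect, the pre-allocated pool being consumed before any accountable heap allocation, can be modeled with a rough sketch. This is a hypothetical simplification (names and accounting are assumptions, not server code) in which the limit is only consulted for allocations that overflow the pool:

```python
class SessionWithPool:
    def __init__(self, prealloc_size, max_session_mem_used):
        self.pool_left = prealloc_size          # pre-allocated pool, used first
        self.max_session_mem_used = max_session_mem_used
        self.heap_used = 0                      # accountable global-heap bytes
        self.killed = False

    def allocate(self, nbytes):
        if nbytes <= self.pool_left:
            # Served entirely from the pool: the limit is never consulted,
            # so a query that fits here can't trigger max_session_mem_used.
            self.pool_left -= nbytes
            return
        # Only the overflow goes to the heap and counts toward the limit.
        overflow = nbytes - self.pool_left
        self.pool_left = 0
        self.heap_used += overflow
        if self.heap_used > self.max_session_mem_used:
            self.killed = True

s = SessionWithPool(prealloc_size=24 * 1024, max_session_mem_used=8 * 1024)
for _ in range(20):
    s.allocate(1024)      # 20 KB total, all inside the 24 KB pool
print(s.killed)           # False: the limit check never fired
```

This also mirrors the observation above that memory_used can legitimately exceed max_session_mem_used without a kill: the pool itself may already be larger than the limit.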
| Comment by Sergei Golubchik [ 2020-09-23 ] | ||||||||||||
|
To clarify, memory_used includes the thread/connection context (THD), the network buffer, and various smaller allocations: everything that is allocated when the connection is established and freed when the connection ends or is reused for the next connection. But this is not memory that running statements can use directly. memory_used also includes the "pre-allocated memory pool" mentioned above; its size is defined by query_prealloc_size and transaction_prealloc_size. | ||||||||||||
| Comment by Allen Lee (Inactive) [ 2020-09-24 ] | ||||||||||||
|
serg, once again, the customer is asking about this.
Thanks, | ||||||||||||
| Comment by Sergei Golubchik [ 2020-09-24 ] | ||||||||||||
|
There is no "precedence". There is a per-query pool: it has at least query_prealloc_size bytes and is used for various allocations during query execution. It grows as needed, but at the end of each query it is reset back to query_prealloc_size. There is also a separate per-transaction pool. It is reset at the end of every transaction, so it is only used for objects whose lifetime lasts until the end of the transaction: the list of tables changed in the transaction, savepoints, etc. This doesn't need much memory, so transaction_prealloc_size should be small. |
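The per-query pool lifecycle described here, growing as needed during a statement but resetting to query_prealloc_size when the statement ends, can be sketched as a rough model (illustrative only; `QueryArena` and its methods are assumptions, not MariaDB code):

```python
class QueryArena:
    """Rough model of a per-query allocation pool (not server code)."""

    def __init__(self, query_prealloc_size):
        self.prealloc_size = query_prealloc_size
        self.capacity = query_prealloc_size   # at least this many bytes
        self.used = 0

    def alloc(self, nbytes):
        self.used += nbytes
        if self.used > self.capacity:
            self.capacity = self.used         # grows as needed mid-query

    def end_of_query(self):
        # At the end of each statement everything in the pool is discarded
        # and the pool shrinks back to query_prealloc_size.
        self.used = 0
        self.capacity = self.prealloc_size

a = QueryArena(query_prealloc_size=16 * 1024)
a.alloc(40 * 1024)        # a large query grows the pool past 16 KB
print(a.capacity)         # 40960
a.end_of_query()
print(a.capacity)         # 16384
```

A per-transaction pool would behave the same way, except that its reset happens at transaction end rather than statement end, which is why it only needs to hold small, transaction-scoped objects.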