Details
Type: Task
Status: Open
Priority: Minor
Resolution: Unresolved
Description
A command like
SET @@GLOBAL.key_buffer_size=10000000000000000000;
cannot possibly succeed when only a limited amount of free memory is available.
However, both optimized and debug builds immediately begin executing the command and attempt the memory allocation, quickly and needlessly filling all available memory. This leads to OOM termination by the kernel of either this process or any other process running on the server, with all the consequences that entails.
This is a feature request for a simple pre-flight check. Pseudocode:
IF requested_alloc_amount > (free_mem * 1.3) THEN print_error AND do_not_execute_request;
ELSEIF requested_alloc_amount > (free_mem * 0.87) THEN print_warning AND execute_request;
ELSE execute_request;
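As an illustration only, a minimal sketch of such a pre-flight check on Linux, using sysinfo() as a rough source of "free memory"; preflight_check() and the result names are hypothetical and do not correspond to existing server code:

#include <stdio.h>
#include <sys/sysinfo.h>

enum preflight_result { PREFLIGHT_OK, PREFLIGHT_WARN, PREFLIGHT_REFUSE };

/* Compare the requested allocation against currently free memory, using the
   thresholds from the pseudocode above. */
static enum preflight_result preflight_check(unsigned long long requested_bytes)
{
  struct sysinfo si;
  if (sysinfo(&si) != 0)
    return PREFLIGHT_OK;                      /* cannot measure: do not block */

  double free_bytes = (double) (si.freeram + si.bufferram) * si.mem_unit;

  if ((double) requested_bytes > free_bytes * 1.3)
    return PREFLIGHT_REFUSE;
  if ((double) requested_bytes > free_bytes * 0.87)
    return PREFLIGHT_WARN;
  return PREFLIGHT_OK;
}

int main(void)
{
  unsigned long long requested = 10000000000000000000ULL;

  switch (preflight_check(requested))
  {
  case PREFLIGHT_REFUSE:
    fprintf(stderr, "ERROR: refusing to allocate %llu bytes\n", requested);
    break;
  case PREFLIGHT_WARN:
    fprintf(stderr, "Warning: allocating %llu bytes may exhaust memory\n", requested);
    break;
  default:
    printf("allocation of %llu bytes accepted\n", requested);
    break;
  }
  return 0;
}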
The problem with a "free memory" check is that there can be a wide margin between what is measured and what is actually free once normal operations begin. There is also a disparity between memory that is allocated (in the virtual address space) and memory that is actually paged in, plus the potential for VMs to be resized with more RAM (ballooning), and other effects.
With the number of things that can go wrong, I think it is over-engineering to try to guess that amount up front.
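As an aside, the allocated-versus-paged-in disparity is easy to demonstrate: with default Linux overcommit, a large malloc() can succeed while consuming almost no physical RAM until the pages are written. Illustrative standalone program, assuming a 64-bit Linux host:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
  size_t size = (size_t) 8 * 1024 * 1024 * 1024;  /* 8 GiB of address space */
  char *buf = malloc(size);

  if (!buf)
  {
    perror("malloc");
    return 1;
  }
  /* At this point VmSize in /proc/self/status has grown by ~8 GiB while
     VmRSS has barely moved; pages only become resident once written. */
  memset(buf, 1, size / 64);                      /* fault in a small fraction */
  printf("reserved %zu bytes, touched %zu bytes\n", size, size / 64);
  free(buf);
  return 0;
}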
I'd suggest a lazy allocation approach like MDEV-25340: lazily initialize each of the key blocks as needed. It's a quick check ahead of a moderately quick initialization, and it would allow a very large allocation to not consume much memory until it is actually used. This would also help with MDEV-16607: even with the default values, only what is used would be consumed, rather than memory being wasted on initialized cache that will never be used.
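A rough sketch of the lazy-initialization idea, with hypothetical names that do not correspond to the actual key cache code:

#include <stdlib.h>

typedef struct st_lazy_block
{
  char *buffer;          /* NULL until the block is first needed */
  size_t length;
} LAZY_BLOCK;

/* Reserve only the small descriptor array up front; the large buffers are
   allocated on first access, so an oversized key_buffer_size costs little
   until the space is actually used. */
static LAZY_BLOCK *lazy_blocks_create(size_t block_count, size_t block_size)
{
  LAZY_BLOCK *blocks = calloc(block_count, sizeof(LAZY_BLOCK));
  if (blocks)
    for (size_t i = 0; i < block_count; i++)
      blocks[i].length = block_size;
  return blocks;
}

/* Quick check ahead of a moderately quick initialization on first use. */
static char *lazy_block_get(LAZY_BLOCK *block)
{
  if (!block->buffer)
    block->buffer = calloc(1, block->length);    /* first touch: allocate */
  return block->buffer;
}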
Tied in with the mechanisms of MDEV-24670, we could also start to purge the key buffer down to a frequently-used threshold as memory pressure rises.
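A hedged sketch of how that could look on Linux, polling the PSI interface (/proc/pressure/memory); shrink_key_buffer_to_working_set() and the threshold are placeholder assumptions, not existing server code:

#include <stdio.h>

/* Placeholder for the real eviction logic; here it only reports. */
static void shrink_key_buffer_to_working_set(void)
{
  fprintf(stderr, "memory pressure high: would shrink key buffer here\n");
}

/* Read the short-term "some" average from the Linux PSI interface. */
static double memory_pressure_avg10(void)
{
  FILE *f = fopen("/proc/pressure/memory", "r");
  double avg10 = 0.0;

  if (f)
  {
    /* First line looks like: "some avg10=0.00 avg60=0.00 avg300=0.00 total=0" */
    if (fscanf(f, "some avg10=%lf", &avg10) != 1)
      avg10 = 0.0;
    fclose(f);
  }
  return avg10;
}

int main(void)
{
  if (memory_pressure_avg10() > 10.0)   /* arbitrary example threshold */
    shrink_key_buffer_to_working_set();
  return 0;
}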