Details
Type: Task
Status: Open
Priority: Major
Resolution: Unresolved
Description
Most of MariaDB's memory allocations go through the MEM_ROOT interface.
MEM_ROOT's primary objective has been to reduce calls to malloc() by storing allocations in thread-local linked memory areas that are all freed at once.
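For context, the sketch below shows the general arena (region) allocation pattern that MEM_ROOT implements: allocations are served from linked blocks and everything is released in one pass. All names in it (arena, arena_block, arena_alloc, arena_free_all) are hypothetical illustrations, not the actual MariaDB MEM_ROOT API.
{code:c}
/* Minimal sketch of the arena-allocation idea behind MEM_ROOT.
   All names here are hypothetical and for illustration only; they are
   not the real MariaDB definitions. Alignment handling is omitted
   for brevity. */
#include <stdlib.h>

struct arena_block {
    struct arena_block *next;   /* singly linked list of blocks           */
    size_t used;                /* bytes already handed out of data[]     */
    size_t size;                /* usable bytes in data[]                 */
    char data[];                /* memory that callers receive            */
};

struct arena {
    struct arena_block *blocks; /* head of the block list                 */
    size_t block_size;          /* default size for newly malloc'd blocks */
};

/* Hand out memory from the current block; only when it is exhausted
   does a new block get malloc'd, so most allocations cost no malloc(). */
void *arena_alloc(struct arena *a, size_t len)
{
    struct arena_block *b = a->blocks;
    if (b == NULL || b->used + len > b->size) {
        size_t size = len > a->block_size ? len : a->block_size;
        b = malloc(sizeof(*b) + size);
        if (b == NULL)
            return NULL;
        b->next = a->blocks;
        b->used = 0;
        b->size = size;
        a->blocks = b;
    }
    void *p = b->data + b->used;
    b->used += len;
    return p;
}

/* Free every block at once: one walk over the list instead of one
   free() call per individual allocation. */
void arena_free_all(struct arena *a)
{
    struct arena_block *b = a->blocks;
    while (b != NULL) {
        struct arena_block *next = b->next;
        free(b);
        b = next;
    }
    a->blocks = NULL;
}
{code}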
This task has two purposes:
- To reduce malloc() calls even further by allocating slightly bigger blocks in some areas where the initial block size has been found to be too small.
- To always allocate the internal blocks in powers of 2, which should help reduce memory fragmentation with all (?) known memory allocators.
This should reduce memory fragmentation, as malloc() no longer has to split its internal blocks in two, which can happen, for example, when one allocates a block of 8197 bytes. In that case malloc() will typically take a 16384-byte block, hand 8197 bytes to MEM_ROOT, and make the remaining roughly 8 KB available for other allocations. When the 8197-byte block is later released, it is very unlikely that it can be merged back with that remainder (which is probably in use for something else), which results in memory fragmentation.
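The following is a minimal sketch of the power-of-2 rounding described above, using the 8197-byte example from the description. round_up_pow2 is a hypothetical helper written for illustration, not the function used in the actual patch.
{code:c}
#include <stdio.h>
#include <stddef.h>

/* Round a requested block size up to the next power of two, so the
   sizes passed to malloc() line up with the size classes most
   allocators use internally. Hypothetical helper, not MariaDB code. */
static size_t round_up_pow2(size_t n)
{
    size_t p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

int main(void)
{
    /* The 8197-byte example from the description: rounded up to 16384,
       malloc() can satisfy the request from a single 16 KiB size class
       instead of splitting a block and leaving a ~8 KiB remainder. */
    printf("%zu -> %zu\n", (size_t) 8197, round_up_pow2(8197));
    return 0;
}
{code}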
Issue Links
- relates to MDEV-35469 Heap tables are calling mallocs to often (Closed)
- relates to MDEV-35700 Consumes more and more memory until OOM. Consistently (Open)
From the commit message:
I tried the above changes on a complex select query with 12 tables.
The following shows the number of extra allocations that were used
to increase the size of the MEM_ROOT buffers.
Original code:
Max memory allocated for thd when using a heap table: 61,262,408
Max memory allocated for thd when using an Aria tmp table: 419,464
After changes:
Connection to MariaDB: 0 allocations
Max memory allocated for thd when using a heap table: 61,347,424
Max memory allocated for thd when using an Aria table: 529,168