Details
- Type: Task
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
NUMA hardware is becoming more common. Access to RAM that is not local to a CPU's node is more expensive than local access. MariaDB should implement mechanisms that optimize the workload so that the CPUs of a node access their local memory.
Example NUMA architecture:
$ numactl --hardware
available: 2 nodes (0,8)
node 0 cpus: 0 8 16 24 32 40 48 56 64 72
node 0 size: 130705 MB
node 0 free: 80310 MB
node 8 cpus: 80 88 96 104 112 120 128 136 144
node 8 size: 130649 MB
node 8 free: 81152 MB
node distances:
node   0   8
  0:  10  40
  8:  40  10
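Process-wide node binding is already possible today from the command line with numactl; a sketch of that baseline (assuming node 0 and a mysqld binary on the PATH), which the feature described below would refine to per-buffer-pool-instance and per-connection granularity:

```shell
# Bind the whole server to node 0's CPUs and memory (process-wide only;
# no per-connection or per-buffer-pool-instance control).
numactl --cpunodebind=0 --membind=0 mysqld
```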
Components of the implementation include:
- A meaningful configuration that makes conflicts with existing settings obvious
- Each InnoDB buffer pool instance constrained to a NUMA node
- SQL threads allocated according to a user-configurable mapping based on one or more of: user, connecting host, default database (determined at the initial connection)
- Each user SQL thread pinned to the CPUs associated with its assigned node
- InnoDB accesses by the SQL thread served preferentially from the node-local InnoDB buffer pool instances
- Accounting of CPU/memory utilization per mapping identifier, to enable automated or configuration-based assignment of a node to each mapping identifier
- InnoDB background threads made per-node, so that each InnoDB buffer pool instance is processed locally
(Marko, Jan, et al. please edit with important design/implementation details)
I'm willing to mentor this (with help).
Issue Links
- relates to MDEV-5774 "Enable numa interleaving by default when required conditions are met" (Open)