Details
Type: Task
Status: In Review
Priority: Critical
Resolution: Unresolved
Description
copied from MDEV-25341:
- The buf_pool.free list, as well as the buffer pool blocks that serve as backing store for the adaptive hash index (AHI) or lock_sys, could be doubly linked with each other via bytes stored within the page frame itself. We would not need a dummy buf_page_t descriptor for such blocks (see the first sketch after this list).
- We could allocate a contiguous virtual address range for the maximum supported size of the buffer pool and let the operating system physically allocate only a subset of those addresses. The complicated logic of managing multiple buffer pool chunks could then be removed. On 32-bit architectures, the maximum size would be about 2 GiB. On 64-bit architectures, the virtual address space is typically 48 bits (around 256 TiB). Perhaps we could shift some of the burden to the user and introduce a startup parameter innodb_buffer_pool_size_max (see the second sketch below).
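
A minimal C++ sketch of the first point, not the actual InnoDB code: an otherwise unused page frame can itself hold the prev/next pointers that link it into buf_pool.free, so no dummy buf_page_t is needed for it. The names frame_list_node and frame_list are hypothetical.

// Sketch only: a free (or AHI/lock_sys backing) frame stores its own list
// pointers in its leading bytes, so no separate descriptor object is needed.
struct frame_list_node {
  frame_list_node *prev;
  frame_list_node *next;
};

struct frame_list {
  frame_list_node *head = nullptr;

  // Link a frame into the list by reusing its leading bytes as the node.
  void push(void *frame) {
    auto *node = static_cast<frame_list_node*>(frame);
    node->prev = nullptr;
    node->next = head;
    if (head) head->prev = node;
    head = node;
  }

  // Detach and return the first frame; the caller may overwrite its contents.
  void *pop() {
    frame_list_node *node = head;
    if (node) {
      head = node->next;
      if (head) head->prev = nullptr;
    }
    return node;
  }
};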
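
For the second point, here is a minimal POSIX sketch of reserving the whole innodb_buffer_pool_size_max address range up front and letting the operating system back only the currently configured size with physical memory. The helper names pool_reserve and pool_resize are hypothetical; this is not the actual patch, only an illustration of an mmap/mprotect/madvise approach on a 64-bit Linux-like system.

#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

// Reserve virtual address space only; no physical memory is allocated yet.
static void *pool_reserve(size_t size_max) {
  void *p = mmap(nullptr, size_max, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
  return p == MAP_FAILED ? nullptr : p;
}

// Grow: allow access so pages are faulted in lazily on first use.
// Shrink: drop the physical pages behind the tail, keep the address range.
static bool pool_resize(void *base, size_t old_size, size_t new_size) {
  if (new_size > old_size)
    return mprotect(base, new_size, PROT_READ | PROT_WRITE) == 0;
  char *tail = static_cast<char *>(base) + new_size;
  return madvise(tail, old_size - new_size, MADV_DONTNEED) == 0 &&
         mprotect(tail, old_size - new_size, PROT_NONE) == 0;
}

int main() {
  const size_t size_max = size_t{8} << 30;  // e.g. innodb_buffer_pool_size_max = 8 GiB
  const size_t size = size_t{2} << 30;      // e.g. innodb_buffer_pool_size = 2 GiB
  void *base = pool_reserve(size_max);
  if (!base || !pool_resize(base, 0, size)) return 1;
  std::printf("reserved %zu bytes, committed %zu at %p\n", size_max, size, base);
  munmap(base, size_max);
  return 0;
}

With such a reservation, shrinking the pool only decommits physical pages while the virtual addresses stay valid, which is what would make the multi-chunk logic unnecessary.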
Issue Links
- blocks
  - MDEV-21203 Bad value for the variable "Buffer pool size" (Open)
  - MDEV-28805 SET GLOBAL innodb_buffer_pool_size=12*1024*1024 has different outcomes depending on version (Closed)
- is blocked by
  - MDEV-33559 matched_rec::block should be allocated from the buffer pool (Closed)
- relates to
  - MDEV-29432 innodb huge pages reclaim (Open)
  - MDEV-31976 buf_pool.unzip_LRU wastes memory and CPU (Stalled)
  - MDEV-32175 page_align() or page_offset() may cost some performance (Closed)
  - MDEV-32544 Setting innodb_buffer_pool_size to the maximum value can cause drastic performance degradation (Open)
  - MDEV-33588 buf::Block_hint is a performance hog (Closed)
  - MDEV-36061 Incorrect error handling on DDL with FULLTEXT INDEX (Closed)
  - MDEV-9236 Dramatically overallocation of InnoDB buffer pool leads to crash (Open)
  - MDEV-25341 innodb buffer pool soft decommit of memory (Closed)
  - MDEV-32339 decreasing innodb_buffer_pool_size at runtime does not release memory (Open)
  - MDEV-35485 The test innodb.innodb_buffer_pool_resize occasionally crashes (Open)
I ran a simple performance test on a RAM disk on a dual-socket Intel® Xeon® Gold 6230R (26 cores × 2 threads per socket), with innodb_buffer_pool_size=5G and innodb_log_file_size=5G:
sysbench oltp_update_index --tables=100 --table_size=10000 --threads=100 --time=120 --report-interval=5 --max-requests=0 run
Compared to the baseline, I observed a 2% regression in average throughput. My first suspect is the lazy initialization of the buffer pool (MDEV-25340), which is part of this change, but I have not analyzed it in more depth yet.
I also tested crash recovery by killing the server about 115 seconds into the workload (5 seconds before it would have ended) and measuring the time to recover a copy of that data directory, using two settings of innodb_buffer_pool_size: 1 GiB (requiring 2 recovery batches) and 5 GiB (682,236,800 bytes of log processed in 1 batch). The recovery times of the baseline and the patch were very similar. I will have to repeat this experiment after diagnosing and addressing the performance regression during the workload.