[MDEV-31939] Adaptive flush recommendation ignores dirty ratio and checkpoint age Created: 2023-08-17 Updated: 2023-12-08 Resolved: 2023-12-08 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Storage Engine - InnoDB |
| Affects Version/s: | 10.6.15 |
| Fix Version/s: | 10.5.24, 10.6.17, 10.11.7, 11.0.5, 11.1.4, 11.2.3, 11.3.2 |
| Type: | Bug | Priority: | Minor |
| Reporter: | Andrew Drake | Assignee: | Marko Mäkelä |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | regression | ||
| Issue Links: |
|
| Description |
|
The follow-up fix referenced in a comment on a related issue does not appear to work: as far as I am able to tell from the code, there is no code path where pct_lwm can ever be anything but 0. Here is the relevant code as of 10.6.15, stripped down to the relevant parts and control-flow constructs:
These are the only assignments to pct_lwm, which is then passed to page_cleaner_flush_pages_recommendation() shortly after this block. It appears the only way for pct_lwm to be non-zero at the end of this block is via the second case, but that appears to be impossible: the inner if case does a goto to the body of the else case (which zeros the value), and the else if case jumps to idle_flush, which bypasses the page_cleaner_flush_pages_recommendation() call entirely. Inside page_cleaner_flush_pages_recommendation() we have:
If pct_lwm is always 0, then total_ratio is always 1, and dirty_pct and pct_for_lsn have no effect. I had some trouble tracking all the possible control-flow paths in buf_flush_page_cleaner() because of all the gotos, so I tested this theory by adding DBUG_ASSERT(pct_lwm == 0.0) immediately before the snippet above. A debug build with that assert passes:
which I would expect to trip that assert fairly quickly if pct_lwm could ever be non-zero. I started looking into this because I noticed that flushing on our databases (on 10.6.14) did not seem to speed up as the checkpoint age approached max_modified_age_async, leading to periodic spikes of rapid flushing. I went through the code to understand what was happening and noticed this, but I am not sure it is the only factor. |
| Comments |
| Comment by Marko Mäkelä [ 2023-12-05 ] |
|
Thank you for the report. I can confirm this logic flaw. In an earlier commit I eliminated the local variable bool idle_flush of buf_flush_page_cleaner(), trying to assign pct_lwm=0.0 whenever that variable would have been set. I am considering the following fix, which I have not tested yet. I would appreciate your feedback.
Note that there is also another adaptive flushing mechanism, based on the log checkpoint age, that is, the difference between the latest checkpoint LSN (more precisely, the LSN of the oldest pending modification in the buffer pool) and the current LSN. The condition related to that is af_needed_for_redo(oldest_lsn). |
| Comment by Marko Mäkelä [ 2023-12-05 ] |
|
wlad, can you please review the above patch? I tested it with some additional output during a 60-second Sysbench oltp_update_index workload:
With the patch, all calls will display pct_lwm=10, which corresponds to the non-default parameter innodb_max_dirty_pages_pct_lwm=10 that I had specified. If I specify innodb_adaptive_flushing=OFF, then there will be no output. If I revert the patch, all output will be with pct_lwm=0, as reported, and the number of pages flushed will be smaller:
The buffer pool size was 2GiB of 16KiB pages (131072 pages), so we were nowhere close to writing 10% of the buffer pool per second. This was with the default innodb_io_capacity on a RAM disk, running Sysbench oltp_update_index at 256 connections, with the data set growing to 3GiB during the workload. I changed the instrumentation to dump a bit more information:
During the workload, the number of dirty/total pages in the buffer pool steadily grows after an initial dip until almost the end of the 60-second workload:
We can see that we actually wrote out the recommended number of pages n per second. At all times, something like 90% of the buffer pool was dirty, so the pct_lwm=10 threshold was exceeded and we should write out total_ratio*innodb_io_capacity pages per second, up to innodb_io_capacity_max pages, whose default value computes to 2000 pages per second. The default innodb_io_capacity is 200 pages per second. |
| Comment by Vladislav Vaintroub [ 2023-12-06 ] |
|
Looks good to me |