[MDEV-23855] InnoDB log checkpointing causes regression for write-heavy OLTP Created: 2020-09-30 Updated: 2023-09-12 Resolved: 2020-10-26 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Storage Engine - InnoDB |
| Affects Version/s: | 10.5 |
| Fix Version/s: | 10.5.7 |
| Type: | Bug | Priority: | Major |
| Reporter: | Axel Schwenke | Assignee: | Marko Mäkelä |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | performance, regression-10.5 |
| Attachments: |
|
| Issue Links: |
|
| Epic Link: | adaptive flushing |
| Description |
|
Workload: sysbench OLTP write-only, as used by the regression benchmark suite in t_writes_innodb_multi.
Setup: 16G buffer pool, 4G redo log, 4G or 8G data set; innodb_flush_neighbors=0, innodb_io_capacity=1000 (or 5000, 10000).
Observation: after starting high, performance drops after ~1 minute. If one waits long enough, oscillations in throughput become visible. This seems to be related to Innodb_checkpoint_age reaching Innodb_checkpoint_max_age. There seems to be no LRU flushing at all, only flush_list flushing. |
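The my.cnf used by the benchmark is not reproduced in this ticket; a minimal sketch derived only from the settings named in the description (everything else is an assumption and left at defaults) might look like:

```ini
# Sketch only: values taken from the description above;
# all other settings are assumed to be defaults.
[mysqld]
innodb_buffer_pool_size = 16G
innodb_log_file_size    = 4G
innodb_flush_neighbors  = 0
innodb_io_capacity      = 1000   # also tested with 5000 and 10000
```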
| Comments |
| Comment by Axel Schwenke [ 2020-09-30 ] |
|
The problem can be seen with MariaDB 10.5.3 too. It happens only at higher thread counts: up to 64 threads everything is fine, with 128 threads the first small oscillations appear, and with 256 threads everything goes down the drain. |
| Comment by Vladislav Vaintroub [ 2020-09-30 ] |
|
@Axel, what is t_writes_innodb_multi? Is there a sysbench description of it? (I guess the my.cnf can be derived from the description.) |
| Comment by Axel Schwenke [ 2020-09-30 ] |
|
wlad, this is a test from the Regression Benchmark Suite, to be found on GitHub. |
| Comment by Axel Schwenke [ 2020-09-30 ] |
|
Steps to reproduce: start the MariaDB server with this my.cnf
and use oltp_write_only.lua from sysbench 1.x with this command line
|
| Comment by Bernardo Perez [ 2020-10-02 ] |
|
Hello, we are tracking this report with great interest. Could you say whether you expect the fix for this issue to make it into 10.5.6? Thanks in advance. |
| Comment by Axel Schwenke [ 2020-10-05 ] |
|
Hi Bernardo Perez, this is hard to predict. We are treating this issue with high priority, but the next 10.5 release is due at some point and we cannot delay it indefinitely waiting for the fix for this issue; otherwise the other fixes that are already done would not reach our users. If you are concerned that this issue might bite you, you can mitigate its effect by increasing the redo log size and/or increasing innodb-io-capacity. |
| Comment by Marko Mäkelä [ 2020-10-06 ] |
|
I can repeat random dips in the throughput (from 120ktps to as low as 55ktps) when the throughput is reported at 5-second intervals. In a 10-minute run I see one significant dip (below 100ktps) for a 5-second interval about once every 2 minutes. For the most recent run, the performance degraded at 110, 230, 460 and 575 seconds into the benchmark. This was on /dev/shm, using (among other things) the settings innodb_io_capacity=1000 and innodb_flush_neighbors=0 (because the RAM disk does not qualify as SSD for |
| Comment by Marko Mäkelä [ 2020-10-06 ] |
|
I think that I can repeat a similar anomaly when running oltp_update_index on a 10×2-core CPU at 16 threads on /dev/shm (RAM disk) with the following configuration changes:

The throughput would drop at 110s from 124ktps first to 74ktps, finally to as low as 2ktps, for almost 200 seconds. Some 100 seconds later, the throughput would drop again, before fully recovering to the previous maximum.

For me, innodb_flush_sync=0 was the trick; with the default setting ON, the performance would be rather good (only a couple of minor 5-second stalls). I additionally tried setting innodb_adaptive_flushing_lwm=0 (which I think could be a good default), but it did not “rescue” the situation.

axel, can you generate the graphs for this scenario and see if they look similar to those that you reported? I was expecting the oscillations to be caused by your rather high innodb_io_capacity_max/innodb_io_capacity ratio of 80 (by default, it is 2). I have seen performance tuning suggestions to use a ratio of 1 to minimize oscillation, but I am indeed seeing a performance drop on RAM disk when using the same (ridiculously low) value for both. |
| Comment by Marko Mäkelä [ 2020-10-06 ] |
|
In my variation of the benchmark, I noticed that during the performance stall, 20 threads were executing inside log_checkpoint_margin(). This is expected when the log capacity is exceeded and innodb_flush_sync=OFF. What was not expected is that the threads would sleep 10ms and poll again. I changed that to a wait on a condition variable, which should be signalled by the page cleaner thread at least once per second.

With innodb_flush_sync=ON I was not able to repeat the original claim. But I improved the logic so that once buf_flush_sync_lsn has been set, it remains set until MIN(oldest_modification) is below that target; furthermore, the page cleaner thread will prioritize this mode of operation. With that change, I got rather long performance drops when running on RAM disk. I implemented another revision where the flush batch would be triggered directly by buf_flush_wait_flushed() instead of the page cleaner thread, which would reduce the throughput for less than 10 seconds at a time. Because an initial version of this variant performed very badly for axel, we might better let the page cleaner thread take care of the checkpoint-related synchronous flushing. |
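The sleep-and-poll versus condition-variable change can be sketched like this (a sketch with illustrative names, not the actual log_checkpoint_margin() code):

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch only: illustrative stand-ins for InnoDB log_sys state.
std::mutex log_mutex;
std::condition_variable log_margin_cv;  // signalled by the page cleaner
bool checkpoint_advanced = false;

// Before the fix, waiting threads polled:
//   while (!checkpoint_advanced)
//     std::this_thread::sleep_for(std::chrono::milliseconds(10));

// After: block on a condition variable. The page cleaner is expected to
// signal at least once per second, so a timed wait guards against a
// missed wakeup.
void wait_in_log_checkpoint_margin() {
  std::unique_lock<std::mutex> lk(log_mutex);
  while (!checkpoint_advanced)
    log_margin_cv.wait_for(lk, std::chrono::seconds(1));
}

// The page cleaner side: advance the checkpoint, then wake the waiters.
void page_cleaner_advances_checkpoint() {
  {
    std::lock_guard<std::mutex> lk(log_mutex);
    checkpoint_advanced = true;
  }
  log_margin_cv.notify_all();
}
```

The timed wait keeps the design robust even if a notification is lost, while removing the constant 10ms wakeups that showed up during the stall.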
| Comment by Marko Mäkelä [ 2020-10-07 ] |
|
I think that it is best to let the dedicated page cleaner thread take care of the ‘furious flushing’ that is prompted by a log checkpoint. I tweaked the inter-thread communication and implemented an adaptive ‘flush-ahead’. It is somewhat challenging to find a good balance between frequent flushing bursts (which cause the throughput to drop noticeably) and a permanently engaged ‘furious flushing’ mode (which results in a degraded but fairly stable throughput), but I think I found something that works rather well in a RAM-disk-based test. Hopefully it will perform reasonably on real storage as well. |
| Comment by Marko Mäkelä [ 2020-10-09 ] |
|
Yesterday, I implemented a new kind of flush-ahead: once we reach a ‘soft’ limit, we signal the page cleaner thread to flush the buffer pool up to the current LSN. This seemed to perform best on my system (tested on NVMe and RAM disk). If we set the limit too low, the page cleaner would engage in furious flushing mode too frequently or almost constantly.

Today, I made some experiments with
and identified some CPU hotspots. If neighbour flushing is disabled (I filed |
| Comment by Marko Mäkelä [ 2020-10-12 ] |
|
I eliminated fil_system.LRU and frequent use of fil_system.mutex. The mutex now only comes into play when a file needs to be opened or closed (to enforce the configured innodb_open_files limit). It is also used around fsync() operations, but that was not visible in my profiling and could probably be replaced too. The LRU policy was replaced with a FIFO policy: (re)opened files move to the end of the queue.

The spinning on fil_system.mutex accounted for up to 5% of the sampled execution time of the page cleaner thread. It was acquired and released several times per I/O operation, not only in preparation (all done by the page cleaner thread) but also on I/O completion (executed by other threads and thus causing contention).

I also tried parallelizing buf_flush_page() so that the checksum computation would be submitted to multiple threads, but the initial attempt did not work out at all: the maximum latency in my benchmark skyrocketed from less than 5 seconds to more than 200 seconds. One more thing to try would be to initiate persistent log writes asynchronously, so that the page cleaner thread would not be blocked waiting for log_write_up_to() at the start of each batch. |
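The LRU-to-FIFO change for open file handles can be sketched as below (illustrative names and a plain std::deque; the actual fil_system code is more involved):

```cpp
#include <deque>
#include <string>

// Sketch of the FIFO open-file policy described above: files are appended
// on (re)open, and when the open-file limit is reached, the file at the
// front of the queue (the oldest open) is closed. Unlike LRU, no
// bookkeeping is needed on every file access.
struct OpenFileFifo {
  std::deque<std::string> fifo;  // front = next to be closed
  std::size_t limit;             // cf. innodb_open_files

  explicit OpenFileFifo(std::size_t max_open) : limit(max_open) {}

  // Returns the name of a file that had to be closed, or "" if none.
  std::string open(const std::string& name) {
    std::string closed;
    if (fifo.size() == limit) {  // enforce the open-file cap
      closed = fifo.front();
      fifo.pop_front();
    }
    fifo.push_back(name);        // newest open goes to the back
    return closed;
  }
};
```

The point of the policy swap is that a FIFO needs no mutex-protected list reshuffling on every I/O, which is where the profiled contention came from.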
| Comment by Marko Mäkelä [ 2020-10-13 ] |
|
I eliminated even more use of fil_system.mutex and shrunk both fil_node_t and fil_space_t by consolidating things into a single std::atomic reference counter with a few flags.

For testing, I use extreme parameters (such as innodb_io_capacity=100 and innodb_flush_log_at_trx_commit=0) so that the page cleaner sits mostly idle between checkpoints, ensuring that the checkpoints triggered by log_free_check() perform the maximum amount of I/O. When the test is running on a RAM disk, the majority of the CPU cycles of the page cleaner thread are spent in the Linux kernel, copying data from user space to kernel space. If I run the test on NVMe and use innodb_flush_method=O_DIRECT, the page checksum calculation dominates.

I did not try asynchronous log flushing yet. That might better be avoided in the 10.5 GA release. |
| Comment by Marko Mäkelä [ 2020-10-14 ] |
|
I fine-tuned log_checkpoint_margin() and made the log_sys.max_checkpoint_age_async threshold 7/8 of log_sys.max_checkpoint_age, instead of the original 31/32. That gives a little more “early warning”. For the scenario that I tested, 7/8 worked best; 3/4 would trigger ‘furious flushing’ too frequently. I think that log_sys.max_modified_age_async should only matter for the adaptive page flushing in the background (when using a higher innodb_io_capacity to avoid the ‘furious flushing’ scenario), and log_sys.max_modified_age_sync has no reason to exist at all.

Finally, I made buf_flush_lists() trigger a log flush in a separate task, or wait for a pending task to finish before possibly scheduling a new one. This allows the page cleaner thread to perform more actual work (writing out dirty data pages) when it is running in innodb_flush_sync=ON mode for a checkpoint flush. This improved the average throughput on my system. The maximum latency seems to be unaffected by my changes: any thread that invokes log_free_check() (before initiating any write mini-transactions) may block in log_checkpoint_margin() until enough data has been written. |
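The threshold change amounts to simple arithmetic on the checkpoint-age limit (the 4GiB figure is an example; the actual log_sys fields are derived from the configured redo log size):

```cpp
#include <cstdint>

// Example: a hard checkpoint-age limit corresponding to ~4GiB of redo.
constexpr uint64_t max_checkpoint_age = 1ULL << 32;

// Old "early warning" threshold: asynchronous flushing started only at
// 31/32 of the hard limit, leaving 1/32 of the log as headroom.
constexpr uint64_t old_async_limit = max_checkpoint_age / 32 * 31;

// New threshold: 7/8 of the hard limit, leaving 1/8 as headroom, so the
// page cleaner is engaged noticeably earlier.
constexpr uint64_t new_async_limit = max_checkpoint_age / 8 * 7;
```

With a 4GiB log, the headroom grows from 128MiB to 512MiB of redo, which is what buys the extra warning time before the hard limit is hit.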
| Comment by Bernardo Perez [ 2020-10-15 ] |
|
Hello Marko, I can see from your comments on MDEV-23399 that you are close to finishing the work on this fix. Do you think the fix will make it into 10.5.7? Thanks in advance. |
| Comment by Marko Mäkelä [ 2020-10-15 ] |
|
I rebased the commits into logical pieces for ease of review. The asynchronous log write for page flushing was part of the final
I think that this work is now mostly done, but we may have to fine-tune some parameters or perhaps introduce some settable global variables. |
| Comment by Marko Mäkelä [ 2020-10-16 ] |
|
I fixed some race conditions related to my removal of fil_system.mutex contention (and while doing that, apparently introduced a file handle leak somewhere, breaking the Windows bootstrap).

I also tried to make the page cleaner thread perform log checkpoints, because in benchmarks we are seeing a pattern where the log checkpoint age grows from 0 to the maximum, then everything is halted, and the cycle repeats. If we perform a checkpoint after every batch, progress is guaranteed. Depending on the distribution of page modifications, skipping a checkpoint might be a bad idea: during the next flush batch, one of the pages that we already cleaned could be locked, or could have been changed again, forcing us to wait for the next batch to write the redo log.

Apparently the calculations in page_cleaner_flush_pages_recommendation() (the number of pages to flush per second when innodb_adaptive_flushing=ON) may be inaccurate. I think that we must make innodb_adaptive_flushing=OFF (the non-default setting) simply try to write out innodb_io_capacity pages per second. The inconsistent calculations can be fixed separately. |
| Comment by Marko Mäkelä [ 2020-10-17 ] |
|
I rewrote the page cleaner so that it aims to run page flush batches once per second, ignoring any “activity count” that more or less counts user transaction commits. This should guarantee that in innodb_adaptive_flushing=OFF mode we write out innodb_io_capacity dirty pages per second. In the default innodb_adaptive_flushing=ON case, the rate varies according to what page_cleaner_flush_pages_recommendation() returns. In both modes, a log checkpoint is attempted at the end of each batch.

The problem on Windows should now be fixed, by having fil_ibd_create() use the asynchronous I/O mode. Before my fil_system.mutex contention fixes, we used to create each .ibd file in normal mode, close it, and eventually reopen it for asynchronous I/O. |
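The batching policy described above can be sketched as follows (illustrative only; this is not the actual buf_flush_page_cleaner loop, and the parameter names merely mirror the settings discussed):

```cpp
#include <cstddef>

// Pages to flush in the next once-per-second batch.
// adaptive stands in for innodb_adaptive_flushing, io_capacity for
// innodb_io_capacity, and recommendation for the value returned by
// page_cleaner_flush_pages_recommendation().
std::size_t pages_to_flush(bool adaptive, std::size_t io_capacity,
                           std::size_t recommendation) {
  // OFF: a flat innodb_io_capacity pages per second.
  // ON (default): whatever the adaptive heuristic recommends.
  return adaptive ? recommendation : io_capacity;
}

// The loop itself, in outline:
//   every ~1 second, independent of the server "activity count":
//     flush up to pages_to_flush(...) dirty pages;
//     attempt a log checkpoint at the end of the batch.
```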
| Comment by Abdelmawla Gharieb [ 2020-10-21 ] |
|
I repeated the same OLTP write_only sysbench test on 10.2.21 and 10.2.33 using incremental thread counts (1, 2, 4, 8 .. 1024). From 128 threads and above, 10.2.21 became much faster than 10.2.33. The most interesting part here is that 10.2.33 started high for each thread count, then the performance dropped and showed the same oscillations in throughput as in this bug, which is apparently related to the checkpointing. Could you please confirm that this bug does not affect versions other than 10.5? |
| Comment by Marko Mäkelä [ 2020-10-21 ] |
|
abdelmawla, what you observe could be |
| Comment by Marko Mäkelä [ 2020-10-21 ] |
|
Smaller batches of buf_flush_lists() will improve the maximum latency (on my GNU/Linux system with 2.3GB/s NVMe, from 4.6 seconds to 74 milliseconds). |
| Comment by Abdelmawla Gharieb [ 2020-10-21 ] |
|
Marko, yeah, I also saw https://jira.mariadb.org/browse/MDEV-23475 but I was not sure which bug I was hitting; I could be hitting both. |
| Comment by Marko Mäkelä [ 2020-10-23 ] |
|
Today, I tested on a SATA 3.0 hard disk. Thanks to wlad’s suggestion, the page cleaner thread no longer uses synchronous I/O in the doublewrite buffer. The page cleaner thread may still wait for I/O on two occasions: at the start of each buf_flush_lists() batch, and whenever the next batch of page writes needs to be posted to the doublewrite buffer while the previous doublewrite batch is still in progress. |
| Comment by Marko Mäkelä [ 2020-10-26 ] |
|
Fixing this has been a rather long journey with several dead ends. wlad’s comparison to 10.5 (containing the
The key features of the fix are as follows:

I believe that on a fast SSD, or when the maximum latency and recovery speed are not important, one may want to tune the page cleaner so that it does not write much (or anything) normally: innodb_max_dirty_pages_pct_lwm and innodb_max_dirty_pages_pct should be close to 100%, or innodb_io_capacity set close to the minimum value. innodb_io_capacity_max could be set close to the actual hardware capability, so that upon reaching the log capacity, we will be able to advance the log checkpoint at least once per second. Such settings should minimize write amplification, while possibly increasing latency when a log checkpoint must be initiated due to reaching the capacity of the circular redo log, or when a page needs to be written out by LRU eviction ( |
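The tuning idea above could translate into something like the following (a hedged sketch; the values are illustrative examples, not recommendations from this ticket):

```ini
# Sketch: minimize background pre-flushing on a fast SSD.
[mysqld]
innodb_max_dirty_pages_pct_lwm = 99     # close to 100%: little pre-flushing
innodb_max_dirty_pages_pct     = 99
innodb_io_capacity             = 100    # near the minimum value
innodb_io_capacity_max         = 20000  # close to the hardware capability
```

Such settings trade lower write amplification for potentially higher latency whenever the redo log capacity is reached.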
| Comment by Marko Mäkelä [ 2020-10-26 ] |
|
The innodb_max_dirty_pages_pct_lwm that was changed by
The remaining observed contention around fil_system.mutex is related to fil_system.unflushed_spaces when innodb_flush_method=O_DIRECT_NO_FSYNC is not being used, and to the FILE_MODIFY records that we will write out during log checkpoints until
In a future release, we might want to try to replace fil_system.unflushed_spaces with something lock-free, or simply change the default innodb_flush_method unless there are good reasons against it. Log checkpoints will flush pages of InnoDB temporary tables as ‘collateral damage’ until |
| Comment by Marko Mäkelä [ 2020-10-31 ] |
|
What the source code refers to as "flush_list flushing" and "LRU flushing" would better be called checkpoint flushing (writing out data pages for a log checkpoint) and eviction flushing (writing out modified pages so that they can be replaced).

The ultimate purpose of checkpoint flushing is to avoid a situation where the circular log file becomes unrecoverable because the tail of the log overwrites its head. A checkpoint logically removes the head of the log, making recovery start from a newer checkpoint LSN. This ticket (

Eviction flushing is necessary when we are running out of buffer pool and there exist unwritten changes for the least recently used pages, possibly because the buffer pool is small and the log file is large (possibly much larger than the buffer pool; our recovery can consist of multiple batches without problems, ever since

The further improvement in |
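The circular-log invariant behind checkpoint flushing can be expressed in a few lines (simplified; real InnoDB tracks more state around log_sys, and these function names are illustrative):

```cpp
#include <cstdint>

// Age of the checkpoint: how much redo has accumulated since the
// checkpoint LSN (the logical "head" of the circular log).
constexpr uint64_t checkpoint_age(uint64_t current_lsn,
                                  uint64_t checkpoint_lsn) {
  return current_lsn - checkpoint_lsn;
}

// Checkpoint flushing must advance the checkpoint before the age reaches
// the log capacity; otherwise the tail of the circular log would
// overwrite its own head and the log would become unrecoverable.
constexpr bool must_flush_for_checkpoint(uint64_t current_lsn,
                                         uint64_t checkpoint_lsn,
                                         uint64_t log_capacity) {
  return checkpoint_age(current_lsn, checkpoint_lsn) >= log_capacity;
}
```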
| Comment by Marko Mäkelä [ 2020-11-04 ] |
|
Unfortunately, innodb_flush_sync=OFF is broken ( |
| Comment by Axel Schwenke [ 2020-11-04 ] |
|
In order to make the most out of the changes done in this task (
There are two timeseries plots attached.

Innodb-max-dirty-pages-pct-lwm.png
On the downside this also increases the InnoDB checkpoint age. When you hit the soft limit at ~80% of the redo log capacity, InnoDB will start furious flushing, resulting in oscillations. The plot shows this after 10800 seconds, when innodb_max_dirty_pages_pct_lwm is changed from 50 to 60. Keep in mind that the best values depend on the workload, data set size, redo log capacity and more.

The other variable is innodb_io_capacity, covered by the
Innodb_io_capacity.png |
| Comment by Marko Mäkelä [ 2021-01-11 ] |
|
With
It is obvious that having the page cleaner thread perform eager ‘pre-flushing’ will reduce the age of the log checkpoint, and thus the likelihood that user threads have to wait for a checkpoint flushing batch to reduce the checkpoint age.

As part of this task, the page cleaner was almost rewritten, and many CPU contention points were removed. The contention on fil_system.mutex was reduced, and the doublewrite buffer now initiates asynchronous writes instead of synchronous ones. Thanks to these changes, the maximum latency (even when the ‘pre-flushing’ is disabled) should be lower. In

One more thing worth noting: the parameter innodb_io_capacity is not only the target number of pages per second to write during the ‘background pre-flushing’ (when it is enabled by innodb_max_dirty_pages_pct_lwm and other parameters), but also the number of pages to write per batch during the furious checkpoint flushing (enabled by default by innodb_flush_sync=ON).

My intuition says that a DBA who wants to minimize ‘write amplification’ should try to disable the pre-flushing altogether (so that repeatedly modified pages are rewritten to the data files less frequently). In that kind of scenario, I would expect setting innodb_io_capacity to a small value to reduce the wait times in buf_flush_wait_flushed(). I think that we will need more benchmarking for this. One parameter that I do not think we have measured is the ‘average number of page writes per transaction’; we should try to minimize it while keeping the throughput and latency as stable as possible. Possibly, the function buf_flush_sync_for_checkpoint() will need a separate parameter for the number of pages written per batch, instead of reusing innodb_io_capacity. (After all, that function executes in I/O-bound mode, potentially writing many more than innodb_io_capacity pages per second.) |
It is obvious that having the page cleaner thread perform eager ‘pre-flushing’ will reduce the age of the log checkpoint, and thus will reduce the likelihood that user threads will have to wait for a checkpoint flushing batch to successfully reduce the age of the checkpoint. As part of this task, the page cleaner was almost rewritten, and many CPU contention points were removed. The contention on fil_system.mutex was reduced, and the doublewrite buffer initiates asynchronous writes, instead of synchronous ones. Thanks to these changes, the maximum latency (even in the case that the ‘pre-flushing’ is disabled) should be lower. In One more thing worth noting is that the parameter innodb_io_capacity is not only the target number of pages per second to write during the ‘background pre-flushing’ (when it is enabled by innodb_max_dirty_pages_pct_lwm and other parameters), but also the number of pages to write per batch during the furious checkpoint flushing (enabled by default by innodb_flush_sync=ON). My intuition says that a DBA who wants to minimize ‘write amplification’ should try to disable the pre-flushing altogether (so that repeatedly modified pages will be rewritten to data files less frequently). In that kind of a scenario, I would expect that setting innodb_io_capacity to a small value will reduce the wait times in buf_flush_wait_flushed(). I think that we will need more benchmarking for this. One parameter that I do not think we have not measured is ‘average number of page writes per transaction’. I think that we should try to minimize that while simultaneously trying to keep the throughput and latency as stable as possible. Possibly, the function buf_flush_sync_for_checkpoint() will need a separate parameter for the number of pages written per batch, instead of reusing innodb_io_capacity. (After all, that function is executing in I/O bound mode, potentially writing many more than innodb_io_capacity pages per second.) |