MariaDB Server — MDEV-34515

Contention between secondary index UPDATE and purge due to large innodb_purge_batch_size


Details

    Description

      When I run sysbench oltp_update_index on current 10.6-11.6 using a very small amount of data (1 row in 1 table), performance regresses compared to 10.5.

      The easiest way for me to reproduce it is to run the server with --innodb-flush-log-at-trx-commit=2 or 0 and a single benchmark user. I also see the same effect with --innodb-flush-log-at-trx-commit=1 and multiple users.

      Benchmark setup

      server:

      mysqld --innodb-flush-log-at-trx-commit=2 #all defaults otherwise
      

      sysbench prepare

      sysbench oltp_update_index --table-size=1 --tables=1 --threads=1 --mysql-user=root --mysql-db=mysql --report-interval=1 --time=120  --mysql-socket=/tmp/mysql.sock  --histogram prepare
      

      sysbench oltp_update_index --table-size=1 --tables=1 --threads=1 --mysql-user=root --mysql-db=mysql --report-interval=1 --time=120  --mysql-socket=/tmp/mysql.sock  --histogram run
      
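      To compare runs, the per-second tps can be extracted from sysbench's --report-interval output with a small script. This is a sketch; the line format in the comment is an assumption based on typical sysbench 1.0 output and may need adjusting for other versions:

```python
import re

# Matches per-interval report lines such as:
#   [ 1s ] thds: 1 tps: 16501.81 qps: 16501.81 lat (ms,95%): 0.07
# NOTE: this format is an assumption; adjust the pattern to your sysbench version.
REPORT_RE = re.compile(r"\[\s*(\d+)s\s*\]\s+thds:\s*\d+\s+tps:\s*([\d.]+)")

def parse_tps(lines):
    """Return a list of (second, tps) tuples from sysbench interval report lines."""
    out = []
    for line in lines:
        m = REPORT_RE.search(line)
        if m:
            out.append((int(m.group(1)), float(m.group(2))))
    return out

if __name__ == "__main__":
    sample = [
        "[ 1s ] thds: 1 tps: 16501.81 qps: 16501.81 lat (ms,95%): 0.07",
        "[ 2s ] thds: 1 tps: 592.19 qps: 592.19 lat (ms,95%): 5.28",
    ]
    print(parse_tps(sample))  # [(1, 16501.81), (2, 592.19)]
```

      Piping the run's output through such a script makes the mid-run performance swings mentioned below easy to spot.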

      Results 10.5 vs 10.6

      version   tps        latency ms (95%)   max history length   flamegraph
      10.5      16501.81   0.07               95                   10_5_flame.svg
      10.6      592.19     5.28               70332                10_6_flame.svg

      As one can see from the attached 10_6_flame.svg, in 10.6 the purge is abnormally busy, with 85% of all CPU samples in purge. Yet no matter how busy it is, purge does not achieve its goal: Innodb_history_list_length grows continuously, up to about 70K within 2 minutes. I think no record is actually purged. Additionally, foreground threads run into some sort of buffer pool contention; ssux_lock_impl<true>::wr_wait reached from buf_page_get_low is the most prominent wait, and another ssux_lock_impl<true>::wr_wait call site is the second most prominent.
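      The purge lag described above can be watched live by polling the Innodb_history_list_length status variable. A sketch, assuming the server from the setup above listening on /tmp/mysql.sock:

```shell
# Poll the history list length once per second while the benchmark runs.
# Assumes a passwordless root account, as in the sysbench commands above.
while true; do
    mysql --socket=/tmp/mysql.sock -u root -N -e \
        "SHOW GLOBAL STATUS LIKE 'Innodb_history_list_length'"
    sleep 1
done
```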

      In contrast, as one can see from 10_5_flame.svg, in 10.5 the CPU time is spent roughly evenly between purge and foreground processing in do_command, and purge works, keeping the history length tiny during the benchmark, with roughly 25x better overall performance (although purge seems to have relatively high overhead waking and waiting for workers doing their tiny tasks, but that is for another day).

      What causes it, according to "git bisect"

      I ran a manual bisect, and it points to aa719b5010c MDEV-32050: Do not copy undo records in purge from 10.6.16.
      I wanted to run the benchmark 2 or 3 times for this commit, from cold start; on the first run it started slow but inexplicably showed better performance around the middle of the benchmark, while on the second run it was slow all the way.

      So, the results for that commit vs previous commit 88733282fb15c80f0bd722df0041d06ad90c26b0 are below

      commit                tps        latency ms (95%)   max history length   flamegraph
      aa719b5010c (bad)     592.96     5.37               69069                aa719b5010c9_flame2.svg
      88733282fb15 (good)   19003.11   0.07               217                  88733282fb15_flame.svg

      In fact, the "good" commit before the regression was 20% better than 10.5, and purge uses only 7% of the CPU while managing to keep the history length low.

      The "bad" flamegraph is pretty much the same as in current 10.6, with purge using up most of the CPU.

      Can it be reproduced differently?

      I could also reproduce it with innodb_flush_log_at_trx_commit=0.
      I can also reproduce it by increasing concurrency (--threads=10 in sysbench) while leaving innodb_flush_log_at_trx_commit at its default (1).

      The bad effect seems to vanish once more rows are updated, but perhaps I simply did not find a way to reproduce it in that case.
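      The higher-concurrency variant can be sketched as follows, reusing the table prepared in the benchmark setup above (flags taken from the description; everything else at defaults):

```shell
# Alternative reproduction: innodb_flush_log_at_trx_commit left at its
# default of 1, but 10 concurrent benchmark threads instead of 1.
mysqld &   # all defaults

sysbench oltp_update_index --table-size=1 --tables=1 --threads=10 \
    --mysql-user=root --mysql-db=mysql --report-interval=1 --time=120 \
    --mysql-socket=/tmp/mysql.sock --histogram run
```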


            People

              marko Marko Mäkelä
              wlad Vladislav Vaintroub

