Details

    Description

      When InnoDB starts up, the buffer pool load can be throttled by innodb_io_capacity (and innodb_io_capacity_max). Users can work around this by temporarily raising these variables at startup, but this is not ideal, and neither variable should be left elevated during normal operation.

      Based on the discussion at https://mariadb.zulipchat.com/#narrow/stream/118759-general/topic/InnoDB.20tuning/near/234486913, this task is to add a new parameter for use in buffer pool load (and read-ahead).
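      For reference, a minimal sketch of the workaround described above, using the existing buffer pool dump/load variables; the numeric values are illustrative only and would need tuning for the actual storage:

          -- Raise the I/O limits temporarily so that the buffer pool load is not throttled.
          SET GLOBAL innodb_io_capacity_max = 10000;
          SET GLOBAL innodb_io_capacity = 10000;

          -- Trigger the buffer pool load (this also happens automatically at startup
          -- when innodb_buffer_pool_load_at_startup=ON).
          SET GLOBAL innodb_buffer_pool_load_now = ON;

          -- Check progress.
          SHOW STATUS LIKE 'Innodb_buffer_pool_load_status';

          -- Restore the limits once the load has completed; leaving them elevated
          -- affects flushing behaviour during normal operation.
          SET GLOBAL innodb_io_capacity = 200;
          SET GLOBAL innodb_io_capacity_max = 2000;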

      Attachments

        1. hdd_1_thd.png (15 kB)
        2. hdd_32_thd.png (23 kB)
        3. ssd_1_thd.png (12 kB)
        4. ssd_32_thd.png (17 kB)

          Activity

            marko Marko Mäkelä added a comment -

            The change (removing the function buf_load_throttle_if_needed()) looks OK to me, but I think that we need to test the performance before applying this change.

            I would not target 10.3, but rather 10.5 or 10.6. The buffer pool I/O layer works quite differently before 10.5 switched to a single buffer pool (MDEV-15058) and optimized the page cleaner thread (MDEV-23399, MDEV-23855).
            danblack Daniel Black added a comment -

            Being single threaded in 10.3 and 10.4 (before MDEV-26547), there was only one page request in transit at a time, so I saw this as having a lower impact. The original user in the SO post was using 10.4 (based on other questions around the same time).
            axel Axel Schwenke added a comment - edited

            Benchmark results. I have tested 3 setups:

            1. MariaDB-10.6 @ 6d40274f65b, no buffer pool preloading (A)
            2. same commit, preloading 90% of buffer pool (B)
            3. PR#2339 on top of that commit, preloading 90% of buffer pool (C)

            The buffer pool was 32 GB (with a 16 GB redo log), and the database held 128M rows in 32 tables, totaling 30 GB of raw data. The database had been created once and closed with a slow shutdown, hence it was completely merged and purged; the datadir was then recreated from a tar file for each run.
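            The slow shutdown mentioned above is the standard innodb_fast_shutdown=0 procedure; a minimal sketch of how such a fully merged and purged datadir is typically produced (an assumption about the exact steps used here):

                -- Request a full (slow) shutdown so that change buffer merge and
                -- purge of old row versions complete before the datadir is archived.
                SET GLOBAL innodb_fast_shutdown = 0;
                SHUTDOWN;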

            I tested 4 different workload scenarios:

            1. how long it takes to complete the loading of the buffer pool without load
            2. how long it takes to mysqldump the OLTP database (to /dev/null)
            3. throughput for OLTP read/write with 1 thread over 15 minutes
            4. throughput for OLTP read/write with 32 threads over 15 minutes

            Numbers (datadir on HDD):

            variant   time to load buffer pool   time to dump
            A         n/a                        274 s
            B         4:07 min                   213 s
            C         4:05 min                   214 s
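            For reference, the buffer pool load completion can be observed via a status variable, from which load times like the ones above could be derived (an assumption about the measurement method, not necessarily how these timings were taken):

                -- Reports progress while the load is running and, once finished,
                -- a completion timestamp, e.g.
                -- "Buffer pool(s) load completed at <date> <time>".
                SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';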

            sysbench OLTP read/write throughput is better viewed as a time series plot (see the attached hdd_1_thd.png and hdd_32_thd.png):

            axel Axel Schwenke added a comment -

            The same data for the datadir on SSD:

            variant   time to load buffer pool   time to dump
            A         n/a                        204 s
            B         33 s                       191 s
            C         32 s                       190 s

            Time series plots are attached (ssd_1_thd.png, ssd_32_thd.png); for 32 threads, only the first 5 minutes are shown.

            Verdict: the server behaves very similarly with or without PR#2339. OK to go.


            marko Marko Mäkelä added a comment -

            Thank you, danblack and axel! In hdd_32_thd.png the right column is for loading the buffer pool with the patched version. Both those graphs and hdd_1_thd.png only indicate an improvement, never any serious regression. So, removing the throttling seemed to be the right thing to do.

            People

              danblack Daniel Black
              greenman Ian Gilfillan
              Votes: 0
              Watchers: 5

