Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 10.5.4, 10.6.0, 10.6.8, 10.9.3
- Environment: Arch Linux VM running on VMware ESXi
Description
We upgraded a MariaDB Galera Cluster from version 10.5.13 to 10.9.3. Everything is up and running from a client's point of view.
However, we see a regression in I/O access when running mariadb-dump. A custom script (dumping and compressing every table into a separate file) ran for about 45 minutes with MariaDB 10.5.13; the time increased to about two and a half hours with MariaDB 10.9.3.
A simple cp can do about 500 MB/s on these machines, but with mariadb-dump we get about 25 MB/s, limited by disk I/O (with mariadbd in I/O wait / state "D"). MariaDB 10.5.13 did about 100 MB/s, limited by compression, so it could probably do a lot more.
This happens with all available I/O methods; we tried aio, uring and native. We also tried different filesystems (ext4 & btrfs), which made no difference. The results are the same on a cluster node and on a standalone (non-Galera) installation.
We do not know whether I/O for regular SQL queries regressed as well; the machines have a reasonable amount of RAM, so no delay is noticeable there. Running mariabackup for Galera state transfer reaches the expected transfer rates.
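For reference, below is a minimal sketch of the kind of per-table dump-and-compress workload described above. It is not the actual custom script; the database name, output directory, and mariadb-dump options are assumptions.

```python
#!/usr/bin/env python3
# Hypothetical approximation of the reported workload: dump and compress
# every table of one database into a separate file.
import subprocess
from pathlib import Path

DB = "mydb"                    # assumed database name
OUTDIR = Path("/backup/mydb")  # assumed output directory
OUTDIR.mkdir(parents=True, exist_ok=True)

# List the tables of the database via the mariadb client.
tables = subprocess.run(
    ["mariadb", "-N", "-B", "-e", "SHOW TABLES", DB],
    check=True, capture_output=True, text=True,
).stdout.split()

for table in tables:
    outfile = OUTDIR / f"{table}.sql.gz"
    with open(outfile, "wb") as f:
        # mariadb-dump streams the table to stdout; gzip compresses it.
        dump = subprocess.Popen(
            ["mariadb-dump", "--single-transaction", DB, table],
            stdout=subprocess.PIPE,
        )
        gz = subprocess.Popen(["gzip", "-c"], stdin=dump.stdout, stdout=f)
        dump.stdout.close()
        gz.communicate()
        dump.wait()
```

While such a dump runs, linear read-ahead activity can be compared between server versions with SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%'.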
Issue Links
- blocks
  - MDEV-31227 innodb_flush_method=O_DIRECT causes 3x regression in workload (Closed)
- is caused by
  - MDEV-15053 Reduce buf_pool_t::mutex contention (Closed)
- relates to
  - MDEV-26055 Adaptive flushing is still not getting invoked in 10.5.11 (Closed)
  - MDEV-30400 Assertion `height == btr_page_get_level(page_cur_get_page(page_cursor))' failed in btr_cur_search_to_nth_level on INSERT (Closed)
  - MDEV-31254 InnoDB: Trying to read doublewrite buffer page (Closed)
  - MDEV-29343 MariaDB 10.6.x slower mysqldump etc. (Closed)
  - MDEV-30986 Slow full index scan in 10.6 vs 10.5 for the (slow) I/O-bound case (Closed)
Activity
Field | Original Value | New Value |
---|---|---|
Labels | | regression |
Description | (description text, inline code in backtick markup) | (same description text, inline code in {{ }} markup) |
Fix Version/s | 10.5 [ 23123 ] | |
Assignee | | Marko Mäkelä [ marko ] |
Link | This issue relates to | |
Summary | I/O regression with mariadb-dump | I/O regression with mariadb-dump - innodb read_aheads{_linear?} |
Affects Version/s | | 10.6.8 [ 27506 ] |
Link | This issue is caused by | |
Priority | Major [ 3 ] | Critical [ 2 ] |
Link | This issue relates to | |
Link | This issue relates to MDEV-16402 [ MDEV-16402 ] | |
Component/s | Storage Engine - InnoDB [ 10129 ] | |
Fix Version/s | 10.6 [ 24028 ] | |
Fix Version/s | 10.7 [ 24805 ] | |
Fix Version/s | 10.8 [ 26121 ] | |
Fix Version/s | 10.9 [ 26905 ] | |
Fix Version/s | 10.10 [ 27530 ] | |
Fix Version/s | 10.11 [ 27614 ] | |
Fix Version/s | 11.0 [ 28320 ] | |
Fix Version/s | 10.5 [ 23123 ] | |
Link | This issue is blocked by MDEV-16402 [ MDEV-16402 ] | |
Link | This issue relates to MDEV-16402 [ MDEV-16402 ] | |
Link | This issue relates to | |
Fix Version/s | 10.7 [ 24805 ] | |
Assignee | Marko Mäkelä [ marko ] | Julien Fritsch [ julien.fritsch ] |
Link | This issue relates to | |
Assignee | Julien Fritsch [ julien.fritsch ] | Marko Mäkelä [ marko ] |
Link | This issue is blocked by MDEV-16402 [ MDEV-16402 ] | |
Fix Version/s | 10.8 [ 26121 ] | |
Link | This issue blocks | |
Fix Version/s | 10.5 [ 23123 ] | |
Affects Version/s | | 10.6.0 [ 24431 ] |
Affects Version/s | | 10.5.4 [ 24264 ] |
Labels | regression | performance regression |
Summary | I/O regression with mariadb-dump - innodb read_aheads{_linear?} | innodb_read_ahead_threshold (linear read-ahead) does not work |
Status | Open [ 1 ] | In Progress [ 3 ] |
Status | In Progress [ 3 ] | In Testing [ 10301 ] |
Assignee | Marko Mäkelä [ marko ] | Matthias Leich [ mleich ] |
Assignee | Matthias Leich [ mleich ] | Marko Mäkelä [ marko ] |
Status | In Testing [ 10301 ] | Stalled [ 10000 ] |
issue.field.resolutiondate | 2023-05-11 10:58:57.0 | 2023-05-11 10:58:57.824 |
Fix Version/s | 11.0.2 [ 28706 ] | |
Fix Version/s | 11.1.1 [ 28704 ] | |
Fix Version/s | 10.5.21 [ 28913 ] | |
Fix Version/s | 10.6.14 [ 28914 ] | |
Fix Version/s | 10.8.9 [ 28915 ] | |
Fix Version/s | 10.9.7 [ 28916 ] | |
Fix Version/s | 10.10.5 [ 28917 ] | |
Fix Version/s | 10.11.4 [ 28918 ] | |
Fix Version/s | 10.5 [ 23123 ] | |
Fix Version/s | 10.6 [ 24028 ] | |
Fix Version/s | 10.9 [ 26905 ] | |
Fix Version/s | 10.10 [ 27530 ] | |
Fix Version/s | 10.11 [ 27614 ] | |
Fix Version/s | 11.0 [ 28320 ] | |
Resolution | | Fixed [ 1 ] |
Status | Stalled [ 10000 ] | Closed [ 6 ] |
Fix Version/s | 10.8.9 [ 28915 ] | |
Link | This issue relates to | |
Link | This issue blocks MENT-1823 [ MENT-1823 ] | |
Fix Version/s | 10.5.22 [ 29011 ] | |
Fix Version/s | 10.6.15 [ 29013 ] | |
Fix Version/s | 10.9.8 [ 29015 ] | |
Fix Version/s | 10.10.6 [ 29017 ] | |
Fix Version/s | 10.11.5 [ 29019 ] | |
Fix Version/s | 11.0.3 [ 28920 ] | |
Fix Version/s | 11.1.2 [ 28921 ] | |
Fix Version/s | 11.1.1 [ 28704 ] | |
Fix Version/s | 11.0.2 [ 28706 ] | |
Fix Version/s | 10.5.21 [ 28913 ] | |
Fix Version/s | 10.6.14 [ 28914 ] | |
Fix Version/s | 10.9.7 [ 28916 ] | |
Fix Version/s | 10.10.5 [ 28917 ] | |
Fix Version/s | 10.11.4 [ 28918 ] | |
Zendesk Related Tickets | 112128 | |