MariaDB Server - MDEV-29343

MariaDB 10.6.x slower mysqldump etc.

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 10.6.7, 10.6.9
    • Fix Version/s: N/A
    • Component/s: Backup
    • Environment: #server1 - CentOS 7.9, DirectAdmin 1.642, MariaDB 10.6.9
      #server2 - CentOS 8.5 (AlmaLinux 8.5), DirectAdmin 1.642, MariaDB 10.6.7

    Description

      Hi,
      We have over 2000 databases (InnoDB).

      In April 2022 we moved from #server1 to #server2 (which is a few times faster than #server1) and we have some troubles:

      1. On #server1 there was MariaDB 10.4.x, and any backups/upgrades were really fast (30-60 seconds).

      2. We uploaded the backups to #server2 but installed MariaDB 10.6.7 there, and... dumping the databases takes ~60-70 minutes. Also, after running a command over SSH like "service mariadb restart", it goes into crash recovery or similar, which takes another 1-2 hours. On #server1 with MariaDB 10.4.x there were no problems like that; I could restart MariaDB anytime.

      3. Today (20.08.2022) I logged in to #server1 and thought: maybe I'll upgrade MariaDB 10.4.x to 10.6.x to check whether the same thing happens as on #server2. And what? With MariaDB 10.4.x, mysqldump finished in 30-60 seconds (2100 databases).... after a successful upgrade to 10.6.x, I reinstalled MariaDB from 10.6.9 to 10.6.9 (to check how fast mysqldump is), and that stupid 10.6.9 dumps about 1 database every 2 seconds, so 2100 databases take 60-70 minutes. Why is it so much slower than 10.4.x? What should I do? It's a terrible problem for us and I don't know how to fix it.... I'm also scared to upgrade MariaDB 10.6.7 to 10.6.9 on #server2 (the production server) because of the crash recovery after restarts.

      Any ideas??

    Activity

            emil89 Emilian Lanowski created issue -
            emil89 Emilian Lanowski made changes -

            emil89 Emilian Lanowski added a comment -

            Anyone?! Do you need any more details for that problem?
            serg Sergei Golubchik made changes -
            Epic/Theme Performance

            serg Sergei Golubchik added a comment -

            How do you do your backups? Does "dumping" mean you're using mysqldump?

            Are you saying that 1) mysqldump is ~60-140 times slower and 2) server restart is ~60-140 times slower?
            serg Sergei Golubchik made changes -
            Status Open [ 1 ] Needs Feedback [ 10501 ]
            emil89 Emilian Lanowski added a comment (edited) -

            DirectAdmin does backups with mysqldump whenever there is an upgrade to a newer version of MariaDB - but there were no speed problems with MariaDB <=10.4.x (I didn't use MariaDB 10.5.x, so I don't know whether it has the problem too).

            1. Yes... when I wrote that post the backups were still in progress, but I calculated it correctly: it finished after 60 minutes (#server1).... while on the same server a few minutes earlier, with MariaDB 10.4.x, it took ~60 SECONDS (it always took around ~1 minute before MariaDB 10.6.x).

            2. Yes, because there is some crash, and crash recovery runs after restarting MariaDB (#server2). I have a lot of RAM on that server (768 GB), but the InnoDB buffer pool uses around 90-100 GB (at first, right after install, I gave MySQL ~300-400 GB of RAM because I thought that might be the cause; after that I changed it to 90-100 GB)...
            I don't know why it's like that, and I haven't restarted MariaDB again on #server2 (after the successful backup, because the crash recovery didn't go well) since it's a production server.... maybe I should check whether it happens on the old #server1.

            emil89 Emilian Lanowski added a comment -

            Another example.
            Over SSH I could run a command like this:

            "/usr/bin/mysqlcheck -uda_admin -p`grep "^passwd=" /usr/local/directadmin/conf/mysql.conf | cut -d= -f2` --skip-write-binlog --optimize --all-in-1 --all-databases"

            Before MariaDB 10.6.x it processed all the tables in my 2100 databases in maybe 10 minutes maximum.... now it takes HOURS!! It's crazy.... I checked it again just now... and WOW, 3 minutes after starting it has done only 5 databases, where it normally would have done around 500-1000 databases in that time.
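            (Note: to narrow down where the time goes, here is a minimal sketch, assuming client credentials are available e.g. via ~/.my.cnf, that times the dump of each database separately rather than all at once:)

                #!/bin/bash
                # Time mysqldump per database to see whether the slowdown is uniform
                # or concentrated in a few databases (system schemas are skipped).
                for db in $(mysql -N -B -e "SHOW DATABASES" \
                            | grep -vE '^(information_schema|performance_schema|mysql|sys)$'); do
                    start=$(date +%s)
                    mysqldump --single-transaction "$db" > /dev/null
                    echo "$(( $(date +%s) - start )) s  $db"
                done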
            emil89 Emilian Lanowski added a comment (edited) -

            I tried something now:

            1. As I mentioned, #server1 is a few times slower than #server2.... now mysqldump does one database every 5-40 seconds, depending on how much data it holds... it finished after ~35-40 minutes.
            2. Deleted MariaDB 10.6.9, reinstalled 10.4.26.
            3. Started the upgrade from 10.4.26 to 10.6.9, so mysqldump runs on 10.4.26... and what? Timed with a stopwatch in my hand, the backups are done within 3 minutes.... what a difference....

            It's crazy, don't you think?

            serg Sergei Golubchik added a comment -

            This is likely caused by changes in the InnoDB default settings that have such a dramatic effect particularly in your setup. Try comparing SHOW VARIABLES output between your 10.4 and 10.6 servers. In particular, look at innodb_adaptive_hash_index, innodb_change_buffering, innodb_flush_method. But other variables that differ might matter too. You can try to set them in 10.6 the way they are in 10.4 and see if it makes a difference.
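            (Note: a minimal sketch of the comparison Sergei suggests, assuming both servers can be queried from one host - the hostnames are placeholders:)

                # Dump all global variables from each server, then diff them.
                mysql -h server1 -N -B -e "SHOW GLOBAL VARIABLES" | sort > vars-10.4.txt
                mysql -h server2 -N -B -e "SHOW GLOBAL VARIABLES" | sort > vars-10.6.txt
                diff vars-10.4.txt vars-10.6.txt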
            emil89 Emilian Lanowski added a comment (edited) -

            Ok, I got it, so here are the InnoDB variables that differ between those servers.

            How would you configure this to be as fast as it was previously (or faster, since server 2 really is faster than server 1, as you can see below)?
            We now have 2750 databases taking only 4.5 GB of space... so I guess our problems come down to sheer "quantity"? How should /etc/my.cnf be configured properly?

            Server Components

                          Server 1                           Server 2
            System:       CentOS 7.9                         CentOS 8.6 (AlmaLinux)
            DirectAdmin:  1.642                              1.642
            CPU:          Intel Xeon, 8 cores / 16 threads   2x Intel Xeon, 48 cores / 96 threads
            RAM:          32 GB DDR4                         768 GB DDR4
            Storage:      SSD                                NVMe

            Variable                      MariaDB 10.4.26 (Server 1)   MariaDB 10.6.7 (Server 2)
            innodb_adaptive_hash_index    ON                           OFF
            innodb_change_buffering       all                          none
            innodb_checksum_algorithm     crc32                        full_crc32
            innodb_flush_method           fsync                        O_DIRECT
            innodb_io_capacity            200                          500
            innodb_lru_scan_depth         1024                         1536
            innodb_max_dirty_pages_pct    75.000000                    90.000000
            innodb_read_io_threads        4                            48
            innodb_write_io_threads       4                            48

            #server2 /etc/my.cnf - changed variables (I changed some of these because it was already just as SLOW before, so I guess they were at the defaults, probably the same as on #server1 with MariaDB 10.4.x?):
            innodb_flush_method=O_DIRECT
            innodb_io_capacity=500
            innodb_read_io_threads=48
            innodb_write_io_threads=48
            thread_pool_size=90
            thread_handling=pool-of-threads

            LITTLE TEST
            #server1
            1. mysqldump with DEFAULT VARIABLES took ~7.5-8 minutes.

            2. First I changed "innodb_flush_method" from "fsync" to "O_DIRECT": mysqldump took 10 minutes.
            3. After adding innodb_adaptive_hash_index=OFF: mysqldump took 10 minutes as well.
            4. After adding "innodb_change_buffering=none": mysqldump took 10 minutes as well.
            5. After adding "innodb_checksum_algorithm=full_crc32": mysqldump took 9:20 minutes... so a little faster than the previous three runs.
            6. After adding "innodb_lru_scan_depth=1536": mysqldump took 9:30 minutes.
            7. After adding "innodb_max_dirty_pages_pct=90.000000": mysqldump took 10 minutes.
            8. After changing the first variable, innodb_flush_method, back from "O_DIRECT" to "fsync", it took around 9:10 minutes (I thought it would be ~8 minutes like in the first test).
            9. BUT when I changed ALL variables back to the MariaDB 10.4.26 defaults, it took 8:20 minutes... weird.

            10. As you can see, the 10.4.26 defaults make it a LITTLE faster... but these options are not what makes it as slow as 10.6.x.... ANY OTHER IDEAS WHICH OPTIONS I SHOULD CHECK? Because after the upgrade from 10.4.26 to 10.6.9 it again took around 50-60 minutes....

            emil89 Emilian Lanowski made changes -
            Comment [ But it's two different servers and two different system versions so... I guess it's not only "my setup" ? :D

            What do you mean by "show variables" ?
            I have to log in SSH and log in to mariadb and do that command? ]
            serg Sergei Golubchik made changes -
            Status Needs Feedback [ 10501 ] Open [ 1 ]
            serg Sergei Golubchik made changes -

            emil89 Emilian Lanowski added a comment -

            If you need better/deeper tests, just tell me how to run them - I'm not a professional "tester" with MySQL/MariaDB, so I need some instructions.

            marko Marko Mäkelä added a comment -

            The 10.6 default setting innodb_flush_method=O_DIRECT (MDEV-24854) should make sense when innodb_buffer_pool_size is a reasonable proportion of the available RAM. The default buffer pool size is tiny, only 128 megabytes. When the file system cache is not disabled (innodb_flush_method=fsync), the Linux kernel will use any available RAM for caching reads. So, if you run the test twice in a row with innodb_flush_method=fsync, on the second run some of the data could be read from RAM instead of the storage.

            To my understanding, the two most important parameters for performance are innodb_buffer_pool_size and innodb_log_file_size. The log file size should not matter much for read workloads. Using a larger log (even several times the buffer pool size should be acceptable starting with 10.5) can optimize away many page writes.
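            (Note: because of the file system cache effect Marko describes, fsync vs. O_DIRECT timings are only comparable from a cold cache. A minimal sketch, assuming root on Linux:)

                # Flush dirty pages and drop the kernel page cache so that the next
                # run reads from storage instead of RAM, then time a full dump.
                sync
                echo 3 > /proc/sys/vm/drop_caches
                time mysqldump --all-databases --single-transaction > /dev/null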
            emil89 Emilian Lanowski added a comment (edited) -

            But I do have innodb_buffer_pool_size and innodb_log_file_size set... check my /etc/my.cnf:

            It's from #server2 (I upgraded MariaDB from 10.6.7 to 10.6.9):

            [mysqld]
            default-storage-engine=InnoDB
            collation-server=utf8mb4_unicode_520_ci
            init-connect='SET NAMES utf8mb4'
            character-set-server=utf8mb4
            performance_schema
            sql_mode = ""
            max_connections = 6000
            local-infile = 0
            port = 3306
            socket = /var/lib/mysql/mysql.sock
            skip-external-locking
            skip-name-resolve
            key_buffer_size = 5M
            max_allowed_packet = 64M
            #open_files_limit=1100000
            #table_open_cache = 1000000
            table_definition_cache=1000000
            sort_buffer_size = 8M
            read_buffer_size = 8M
            read_rnd_buffer_size = 16M
            #myisam_sort_buffer_size = 64M
            thread_cache_size = 8
            query_cache_size= 80M
            query_cache_type = 1
            join_buffer_size=8M
            tmp_table_size=128M
            max_heap_table_size=128M
            innodb_buffer_pool_size = 64G
            #innodb_buffer_pool_instances=500
            innodb_log_file_size=16G
            #innodb_flush_log_at_trx_commit=0
            innodb_flush_method=O_DIRECT
            #innodb_file_per_table=0
            innodb_io_capacity=500
            innodb_read_io_threads=48
            innodb_write_io_threads=48
            #innodb_thread_concurrency=100
            thread_pool_size=90
            thread_handling=pool-of-threads
            #innodb_force_recovery=1
            [mysqldump]
            quick
            max_allowed_packet = 512M

            I want to give more RAM to innodb_buffer_pool_size and innodb_log_file_size, but... I don't know why, after restarting the mysqld (mariadb) service, it spends nearly 1 hour on something (loading buffer pools), and during that time the databases are so f**king slow.... and I'm scared to add more RAM because I don't know whether that will make it take even longer after a restart.
            Any ideas why LOADING BUFFER POOLS takes so long? The server has 96 threads and it was using just 2 of them (at 100%) for that.... how can that operation be improved?

            I guess it's something like this: https://jira.mariadb.org/browse/MDEV-9930

            sie 30 03:03:16 mydomain.com systemd[1]: Starting MariaDB database server...
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] /usr/sbin/mysqld (server 10.6.9-MariaDB) starting as process 155737 ...
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: Number of pools: 1
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: Using Linux native AIO
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: Initializing buffer pool, total size = 68719476736, chunk size = 134217728
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: Completed initialization of buffer pool
            sie 30 03:03:16 mydomain.com mysqld[155737]: 2022-08-30 3:03:16 0 [Note] InnoDB: 128 rollback segments are active.
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] InnoDB: Creating shared tablespace for temporary tables
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] InnoDB: 10.6.9 started; log sequence number 54551959918; transaction id 1591999159
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] Plugin 'FEEDBACK' is disabled.
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] Server socket created on IP: '0.0.0.0'.
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] Server socket created on IP: '::'.
            sie 30 03:03:19 mydomain.com mysqld[155737]: 2022-08-30 3:03:19 0 [Note] /usr/sbin/mysqld: ready for connections.
            sie 30 03:03:19 mydomain.com mysqld[155737]: Version: '10.6.9-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
            sie 30 03:03:19 mydomain.com systemd[1]: Started MariaDB database server.
            sie 30 04:02:41 mydomain.com mysqld[155737]: 2022-08-30 4:02:41 0 [Note] InnoDB: Buffer pool(s) load completed at 220830 4:02:41

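            (Note: the buffer pool load progress can be followed without reading the error log; a minimal sketch using the standard status variable:)

                # Shows how far the load from ib_buffer_pool has progressed.
                mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status'"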

            marko Marko Mäkelä added a comment -

            Loading the buffer pool was single-threaded until MDEV-26547 (10.5.13, 10.6.5). However, loading the buffer pool should be throttled by innodb_io_capacity. What is the speed of your storage? On mine (2300 MB/s writes and 2600 MB/s reads), I think that a sensible value of innodb_io_capacity would be around 80,000.
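            (Note: one way to answer Marko's question about storage speed is fio; a minimal sketch, assuming fio with the libaio engine is installed - the filename and size are placeholders:)

                # Random 16 KiB reads with O_DIRECT, roughly matching InnoDB page I/O.
                fio --name=pgread --filename=/var/lib/mysql/fio.test --ioengine=libaio \
                    --rw=randread --bs=16k --size=1G --direct=1 --iodepth=32 \
                    --runtime=30 --time_based --group_reporting
                rm /var/lib/mysql/fio.test  # remove the test file afterwards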

            emil89 Emilian Lanowski added a comment -

            So why were only 2 threads used in my case? How do I set it higher?

            About innodb_io_capacity - won't values as high as 50 000 - 80 000 quickly wear out the disks in my server? (I have an SSD and an NVMe drive there; I have to test them to check what speed they have. MariaDB is on the SSD now, but I can move it to the NVMe.)

            marko Marko Mäkelä added a comment -

            emil89, will the buffer pool be populated faster if you set a larger innodb_io_capacity?

            As far as I understand, the longevity issue with solid-state devices is associated with the number of writes, not reads. If you are worried about writes, you can reduce the amount of InnoDB data page writes by making innodb_log_file_size several times larger than innodb_buffer_pool_size. The cost could be longer crash recovery times.
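            (Note: innodb_io_capacity is a dynamic variable, so Marko's question can be tested without a restart; a minimal sketch, with 10000 as a placeholder value:)

                # Raise the I/O throttle, re-trigger a buffer pool load, and watch it.
                mysql -e "SET GLOBAL innodb_io_capacity = 10000"
                mysql -e "SET GLOBAL innodb_buffer_pool_load_now = ON"
                mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status'"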
            marko Marko Mäkelä made changes -
            Status Open [ 1 ] Needs Feedback [ 10501 ]
            emil89 Emilian Lanowski added a comment (edited) -

            @Marko Mäkelä, but if I have innodb_io_capacity=500,
            innodb_read_io_threads=48 and
            innodb_write_io_threads=48,

            isn't that then like 48*500=24000?
            And if I raise innodb_log_file_size and there is no crash recovery after restarting MariaDB (or something), it won't make anything slower?

            marko Marko Mäkelä added a comment -

            emil89, it does not work that way. innodb_io_capacity mainly limits the page writes submitted by buf_flush_page_cleaner() during 'background flushing' or 'idle flushing' (see MDEV-27295). If a log checkpoint is more urgently needed because the latest checkpoint was close to innodb_log_file_size bytes ago, then innodb_io_capacity will be ignored, and 'furious flushing' will take place. If you set innodb_flush_sync=OFF, then the SQL workload will be throttled instead of 'furious flushing'.

            If you are concerned about the wear of the persistent storage, you should set both innodb_buffer_pool_size and innodb_log_file_size as large as possible. A rule of thumb used to be to allocate ⅘ of the available memory to the buffer pool. The log file size can be several times the buffer pool size if needed by your workload. In that way, data page writes will be deferred. This helps if many pages are being changed over and over again.

            When innodb_use_native_aio=ON, the parameters innodb_read_io_threads and innodb_write_io_threads do not directly control the number of any threads. They limit the number of pending asynchronous read or write operations. Read or write completion callbacks will be invoked by a thread pool. Starting with MDEV-11026 in MariaDB Server 10.11, these parameters can be changed without restarting the server.

            The parameter innodb_io_capacity also happens to throttle the loading of the buffer pool at startup. Did you try increasing it to a more reasonable value?
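            (Note: how close the server is to the 'furious flushing' Marko describes can be estimated from the checkpoint age; a minimal sketch:)

                # Checkpoint age = "Log sequence number" minus "Last checkpoint at";
                # furious flushing starts as it approaches innodb_log_file_size.
                mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -E 'Log sequence number|Last checkpoint at'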
            serg Sergei Golubchik made changes -
            Fix Version/s N/A [ 14700 ]
            Resolution Incomplete [ 4 ]
            Status Needs Feedback [ 10501 ] Closed [ 6 ]
            marko Marko Mäkelä made changes -

            marko Marko Mäkelä added a comment -

            MDEV-29967 is a more detailed report of something similar.

            emil89 Emilian Lanowski added a comment -

            I'll check it tomorrow, because I'll create a replication slave server etc., and I'll change some my.cnf variables like innodb_io_capacity to 10 000 for a test and see how it goes.
            emil89 Emilian Lanowski added a comment (edited) -

            I don't know what you did after MariaDB 10.4.x, but many things are now so slow....

            A) The MariaDB service took 20 minutes to stop!! Earlier it was like 5 seconds.... in the logs I can see Status: "Waiting to flush 411090 pages", so there were nearly 500,000 pages to flush while stopping MariaDB... why? How can I reduce or disable that?
            B) After starting MariaDB the status shows [Note] InnoDB: Loading buffer pool(s) from /home/mysql/ib_buffer_pool, and it loads for 60 minutes in a single thread (while my server has 96 threads...).
            C) AND why are my websites slow as sh*t while it's loading the buffer pools? After the load completes, the websites are fast again.... why??? Why can't it be fast while loading the buffer pools? Where's the problem? (It looks similar to https://jira.mariadb.org/browse/MDEV-9930.)

            After:
            1. changing innodb_io_capacity from 500 to 10 000 (as Marko suggested)
            2. moving MariaDB from SSD to NVMe

            MariaDB again takes very long to come up, I mean loading the buffer pools.... it completed after 58 minutes, so I guess innodb_io_capacity did nothing here, because earlier it was also 50-60 minutes.......... it's really making me nervous, because the websites are so slow during all of this....

            Can you help me or fix these inconveniences?
            danblack Daniel Black added a comment -

            Note the updates to MDEV-29967; it seems related to read-ahead. What is your show global status like "innodb_buffer_pool_read_ahead%"? What is it a few seconds later?

            A) Not sure; this seems like a different problem.

            B) MDEV-25417, https://github.com/MariaDB/server/blob/04aab8283022b9a37969df90f44f0c95b98e01d1/storage/innobase/buf/buf0dump.cc#L424 is where the load is throttled by concurrent activity. The sleep seems disproportionate and not related to concurrent activity.

            C) You could use mydumper or something similar to rapidly pull tables into the buffer pool.
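            (Note: a minimal sketch for sampling the counters Daniel asks about, assuming watch(1) is available:)

                # Sample the read-ahead counters every 5 seconds; compare
                # consecutive samples to see how fast they grow.
                watch -n 5 "mysql -e \"SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_ahead%'\""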
            danblack Daniel Black made changes -

            marko Marko Mäkelä added a comment -

            emil89, I see that danblack already answered your questions B) and C).

            A) Mariadb service was stopping 20 minutes!! Earlier it was like 5 seconds.... in logs i can see: "Status: "Waiting to flush 411090 pages" so it were nearly 500 000 pages to flush while stopping mariadb... why? How to reduce or disable it?

            See MDEV-27295 and MDEV-24537. You will want to enable "background flushing":

            SET GLOBAL innodb_max_dirty_pages_pct_lwm=0.001;
            

            I think that we must consider enabling it by default after evaluating the performance impact in extensive tests of various scenarios. There is also an open ticket MDEV-26055 that is waiting for testing. For backup, MDEV-30000 was recently filed.

            The motivation to delay page writes as long as possible is to reduce "write amplification": If some data pages are modified over and over again, say, only a few bytes are being modified, then by deferring the page writes we will only be writing a few bytes to ib_logfile0, instead of additionally writing the data page (innodb_page_size bytes) and computing page checksums over and over again.

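            (Note: SET GLOBAL does not survive a restart; a minimal sketch of making the same low-water mark permanent, matching the /etc/my.cnf layout shown earlier:)

                # /etc/my.cnf, under the [mysqld] section
                innodb_max_dirty_pages_pct_lwm=0.001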
            emil89 Emilian Lanowski added a comment (edited) -

            A) Ok, I'll try that.
            B) Ok, it was 2 threads, not one... 2 threads at 100% CPU, but the server has 96 threads and the other 94 were not used for that.
            C) I don't understand - how exactly do I improve that? Maybe it's because I have 200,000+ tables and 2500+ databases here? I don't know... but I know that on MariaDB 10.4.x it took a few seconds.

            All my InnoDB variables right now:

            | innodb_adaptive_flushing                 | ON                     |
            | innodb_adaptive_flushing_lwm             | 10.000000              |
            | innodb_adaptive_hash_index               | OFF                    |
            | innodb_adaptive_hash_index_parts         | 8                      |
            | innodb_autoextend_increment              | 64                     |
            | innodb_autoinc_lock_mode                 | 1                      |
            | innodb_buf_dump_status_frequency         | 0                      |
            | innodb_buffer_pool_chunk_size            | 134217728              |
            | innodb_buffer_pool_dump_at_shutdown      | ON                     |
            | innodb_buffer_pool_dump_now              | OFF                    |
            | innodb_buffer_pool_dump_pct              | 25                     |
            | innodb_buffer_pool_filename              | ib_buffer_pool         |
            | innodb_buffer_pool_load_abort            | OFF                    |
            | innodb_buffer_pool_load_at_startup       | ON                     |
            | innodb_buffer_pool_load_now              | OFF                    |
            | innodb_buffer_pool_size                  | 126701535232           |
            | innodb_change_buffer_max_size            | 25                     |
            | innodb_change_buffering                  | none                   |
            | innodb_checksum_algorithm                | full_crc32             |
            | innodb_cmp_per_index_enabled             | OFF                    |
            | innodb_compression_algorithm             | zlib                   |
            | innodb_compression_default               | OFF                    |
            | innodb_compression_failure_threshold_pct | 5                      |
            | innodb_compression_level                 | 6                      |
            | innodb_compression_pad_pct_max           | 50                     |
            | innodb_data_file_path                    | ibdata1:12M:autoextend |
            | innodb_data_home_dir                     |                        |
            | innodb_deadlock_detect                   | ON                     |
            | innodb_deadlock_report                   | full                   |
            | innodb_default_encryption_key_id         | 1                      |
            | innodb_default_row_format                | dynamic                |
            | innodb_defragment                        | OFF                    |
            | innodb_defragment_fill_factor            | 0.900000               |
            | innodb_defragment_fill_factor_n_recs     | 20                     |
            | innodb_defragment_frequency              | 40                     |
            | innodb_defragment_n_pages                | 7                      |
            | innodb_defragment_stats_accuracy         | 0                      |
            | innodb_disable_sort_file_cache           | OFF                    |
            | innodb_doublewrite                       | ON                     |
            | innodb_encrypt_log                       | OFF                    |
            | innodb_encrypt_tables                    | OFF                    |
            | innodb_encrypt_temporary_tables          | OFF                    |
            | innodb_encryption_rotate_key_age         | 1                      |
            | innodb_encryption_rotation_iops          | 100                    |
            | innodb_encryption_threads                | 0                      |
            | innodb_fast_shutdown                     | 1                      |
            | innodb_fatal_semaphore_wait_threshold    | 600                    |
            | innodb_file_per_table                    | ON                     |
            | innodb_fill_factor                       | 100                    |
            | innodb_flush_log_at_timeout              | 1                      |
            | innodb_flush_log_at_trx_commit           | 1                      |
            | innodb_flush_method                      | O_DIRECT               |
            | innodb_flush_neighbors                   | 1                      |
            | innodb_flush_sync                        | ON                     |
            | innodb_flushing_avg_loops                | 30                     |
            | innodb_force_primary_key                 | OFF                    |
            | innodb_force_recovery                    | 0                      |
            | innodb_ft_aux_table                      |                        |
            | innodb_ft_cache_size                     | 8000000                |
            | innodb_ft_enable_diag_print              | OFF                    |
            | innodb_ft_enable_stopword                | ON                     |
            | innodb_ft_max_token_size                 | 84                     |
            | innodb_ft_min_token_size                 | 3                      |
            | innodb_ft_num_word_optimize              | 2000                   |
            | innodb_ft_result_cache_limit             | 2000000000             |
            | innodb_ft_server_stopword_table          |                        |
            | innodb_ft_sort_pll_degree                | 2                      |
            | innodb_ft_total_cache_size               | 640000000              |
            | innodb_ft_user_stopword_table            |                        |
            | innodb_immediate_scrub_data_uncompressed | OFF                    |
            | innodb_instant_alter_column_allowed      | add_drop_reorder       |
            | innodb_io_capacity                       | 10000                  |
            | innodb_io_capacity_max                   | 20000                  |
            | innodb_lock_wait_timeout                 | 50                     |
            | innodb_log_buffer_size                   | 16777216               |
            | innodb_log_file_size                     | 68719476736            |
            | innodb_log_group_home_dir                | ./                     |
            | innodb_log_write_ahead_size              | 8192                   |
            | innodb_lru_flush_size                    | 32                     |
            | innodb_lru_scan_depth                    | 1536                   |
            | innodb_max_dirty_pages_pct               | 90.000000              |
            | innodb_max_dirty_pages_pct_lwm           | 0.001000               |
            | innodb_max_purge_lag                     | 0                      |
            | innodb_max_purge_lag_delay               | 0                      |
            | innodb_max_purge_lag_wait                | 4294967295             |
            | innodb_max_undo_log_size                 | 10485760               |
            | innodb_monitor_disable                   |                        |
            | innodb_monitor_enable                    |                        |
            | innodb_monitor_reset                     |                        |
            | innodb_monitor_reset_all                 |                        |
            | innodb_old_blocks_pct                    | 37                     |
            | innodb_old_blocks_time                   | 1000                   |
            | innodb_online_alter_log_max_size         | 134217728              |
            | innodb_open_files                        | 2000                   |
            | innodb_optimize_fulltext_only            | OFF                    |
            | innodb_page_size                         | 16384                  |
            | innodb_prefix_index_cluster_optimization | OFF                    |
            | innodb_print_all_deadlocks               | OFF                    |
            | innodb_purge_batch_size                  | 300                    |
            | innodb_purge_rseg_truncate_frequency     | 128                    |
            | innodb_purge_threads                     | 4                      |
            | innodb_random_read_ahead                 | OFF                    |
            | innodb_read_ahead_threshold              | 56                     |
            | innodb_read_io_threads                   | 48                     |
            | innodb_read_only                         | OFF                    |
            | innodb_read_only_compressed              | OFF                    |
            | innodb_rollback_on_timeout               | OFF                    |
            | innodb_sort_buffer_size                  | 1048576                |
            | innodb_spin_wait_delay                   | 4                      |
            | innodb_stats_auto_recalc                 | ON                     |
            | innodb_stats_include_delete_marked       | OFF                    |
            | innodb_stats_method                      | nulls_equal            |
            | innodb_stats_modified_counter            | 0                      |
            | innodb_stats_on_metadata                 | OFF                    |
            | innodb_stats_persistent                  | ON                     |
            | innodb_stats_persistent_sample_pages     | 20                     |
            | innodb_stats_traditional                 | ON                     |
            | innodb_stats_transient_sample_pages      | 8                      |
            | innodb_status_output                     | OFF                    |
            | innodb_status_output_locks               | OFF                    |
            | innodb_strict_mode                       | ON                     |
            | innodb_sync_spin_loops                   | 30                     |
            | innodb_table_locks                       | ON                     |
            | innodb_temp_data_file_path               | ibtmp1:12M:autoextend  |
            | innodb_tmpdir                            |                        |
            | innodb_undo_directory                    | ./                     |
            | innodb_undo_log_truncate                 | OFF                    |
            | innodb_undo_tablespaces                  | 0                      |
            | innodb_use_atomic_writes                 | ON                     |
            | innodb_use_native_aio                    | ON                     |
            | innodb_version                           | 10.6.10                |
            | innodb_write_io_threads                  | 48                     |
            

emil89 Emilian Lanowski added a comment - - edited

A) Ok, I'll try that
B) Ok, it was 2 threads, not one... 2 threads at 100% CPU, but this server has 96 threads and the other 94 were not used for that
C) I don't understand - how exactly do I improve that? ... maybe it's because I have 200 000+ tables and 2500+ databases here? I don't know... but I know that on MariaDB 10.4.x it took a few seconds

All my InnoDB variables right now (one way to capture this listing is sketched below it):

innodb_adaptive_flushing | ON
innodb_adaptive_flushing_lwm | 10.000000
innodb_adaptive_hash_index | OFF
innodb_adaptive_hash_index_parts | 8
innodb_autoextend_increment | 64
innodb_autoinc_lock_mode | 1
innodb_buf_dump_status_frequency | 0
innodb_buffer_pool_chunk_size | 134217728
innodb_buffer_pool_dump_at_shutdown | ON
innodb_buffer_pool_dump_now | OFF
innodb_buffer_pool_dump_pct | 25
innodb_buffer_pool_filename | ib_buffer_pool
innodb_buffer_pool_load_abort | OFF
innodb_buffer_pool_load_at_startup | ON
innodb_buffer_pool_load_now | OFF
innodb_buffer_pool_size | 126701535232
innodb_change_buffer_max_size | 25
innodb_change_buffering | none
innodb_checksum_algorithm | full_crc32
innodb_cmp_per_index_enabled | OFF
innodb_compression_algorithm | zlib
innodb_compression_default | OFF
innodb_compression_failure_threshold_pct | 5
innodb_compression_level | 6
innodb_compression_pad_pct_max | 50
innodb_data_file_path | ibdata1:12M:autoextend
innodb_data_home_dir |
innodb_deadlock_detect | ON
innodb_deadlock_report | full
innodb_default_encryption_key_id | 1
innodb_default_row_format | dynamic
innodb_defragment | OFF
innodb_defragment_fill_factor | 0.900000
innodb_defragment_fill_factor_n_recs | 20
innodb_defragment_frequency | 40
innodb_defragment_n_pages | 7
innodb_defragment_stats_accuracy | 0
innodb_disable_sort_file_cache | OFF
innodb_doublewrite | ON
innodb_encrypt_log | OFF
innodb_encrypt_tables | OFF
innodb_encrypt_temporary_tables | OFF
innodb_encryption_rotate_key_age | 1
innodb_encryption_rotation_iops | 100
innodb_encryption_threads | 0
innodb_fast_shutdown | 1
innodb_fatal_semaphore_wait_threshold | 600
innodb_file_per_table | ON
innodb_fill_factor | 100
innodb_flush_log_at_timeout | 1
innodb_flush_log_at_trx_commit | 1
innodb_flush_method | O_DIRECT
innodb_flush_neighbors | 1
innodb_flush_sync | ON
innodb_flushing_avg_loops | 30
innodb_force_primary_key | OFF
innodb_force_recovery | 0
innodb_ft_aux_table |
innodb_ft_cache_size | 8000000
innodb_ft_enable_diag_print | OFF
innodb_ft_enable_stopword | ON
innodb_ft_max_token_size | 84
innodb_ft_min_token_size | 3
innodb_ft_num_word_optimize | 2000
innodb_ft_result_cache_limit | 2000000000
innodb_ft_server_stopword_table |
innodb_ft_sort_pll_degree | 2
innodb_ft_total_cache_size | 640000000
innodb_ft_user_stopword_table |
innodb_immediate_scrub_data_uncompressed | OFF
innodb_instant_alter_column_allowed | add_drop_reorder
innodb_io_capacity | 10000
innodb_io_capacity_max | 20000
innodb_lock_wait_timeout | 50
innodb_log_buffer_size | 16777216
innodb_log_file_size | 68719476736
innodb_log_group_home_dir | ./
innodb_log_write_ahead_size | 8192
innodb_lru_flush_size | 32
innodb_lru_scan_depth | 1536
innodb_max_dirty_pages_pct | 90.000000
innodb_max_dirty_pages_pct_lwm | 0.001000
innodb_max_purge_lag | 0
innodb_max_purge_lag_delay | 0
innodb_max_purge_lag_wait | 4294967295
innodb_max_undo_log_size | 10485760
innodb_monitor_disable |
innodb_monitor_enable |
innodb_monitor_reset |
innodb_monitor_reset_all |
innodb_old_blocks_pct | 37
innodb_old_blocks_time | 1000
innodb_online_alter_log_max_size | 134217728
innodb_open_files | 2000
innodb_optimize_fulltext_only | OFF
innodb_page_size | 16384
innodb_prefix_index_cluster_optimization | OFF
innodb_print_all_deadlocks | OFF
innodb_purge_batch_size | 300
innodb_purge_rseg_truncate_frequency | 128
innodb_purge_threads | 4
innodb_random_read_ahead | OFF
innodb_read_ahead_threshold | 56
innodb_read_io_threads | 48
innodb_read_only | OFF
innodb_read_only_compressed | OFF
innodb_rollback_on_timeout | OFF
innodb_sort_buffer_size | 1048576
innodb_spin_wait_delay | 4
innodb_stats_auto_recalc | ON
innodb_stats_include_delete_marked | OFF
innodb_stats_method | nulls_equal
innodb_stats_modified_counter | 0
innodb_stats_on_metadata | OFF
innodb_stats_persistent | ON
innodb_stats_persistent_sample_pages | 20
innodb_stats_traditional | ON
innodb_stats_transient_sample_pages | 8
innodb_status_output | OFF
innodb_status_output_locks | OFF
innodb_strict_mode | ON
innodb_sync_spin_loops | 30
innodb_table_locks | ON
innodb_temp_data_file_path | ibtmp1:12M:autoextend
innodb_tmpdir |
innodb_undo_directory | ./
innodb_undo_log_truncate | OFF
innodb_undo_tablespaces | 0
innodb_use_atomic_writes | ON
innodb_use_native_aio | ON
innodb_version | 10.6.10
innodb_write_io_threads | 48
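
Editorial note, not part of the original comment: a listing like the one above can be captured with the standard client, assuming a login that is allowed to read global variables:

    mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb%';"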
            valerii Valerii Kravchuk made changes -
            Resolution Incomplete [ 4 ]
            Status Closed [ 6 ] Stalled [ 10000 ]
            marko Marko Mäkelä made changes -
            Status Stalled [ 10000 ] Needs Feedback [ 10501 ]

            emil89, can you please test the recently released MariaDB Server 10.6.13? It includes some performance fixes, such as MDEV-26055 and MDEV-26827. Those fixes seemed to help a lot when I tried to reproduce MDEV-30000.

There is one more fix, MDEV-29967, that missed the 10.6.13 release and could be relevant here. If 10.6.13 does not fix the problem for you, it would be nice if you could test a development snapshot of 10.6 from https://ci.mariadb.org/35012/. Because I do not know how long these builds will be retained, that particular build might not be available when you are ready to try it. I got the number by navigating to the end of the page https://buildbot.mariadb.org/#/grid?branch=10.6 and clicking on the leftmost "build successful" link next to "tarball-docker". The URL and the heading of that page end in the build number (35012).
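
A quick way to check whether that snapshot directory is still being served before downloading anything (a plain HTTP check, not a MariaDB tool; the URL is the one quoted above):

    curl -sI https://ci.mariadb.org/35012/ | head -n 1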

            If you use such a snapshot, please indicate the version_source_revision; it should be part of the startup message:

            2023-05-11 11:06:30 0 [Note] Starting MariaDB 10.6.14-MariaDB-log source revision c271057288f71746d1816824f338f2d9c47f67c1 as process 24114
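
For reference, the same revision can also be read from a running server, or pulled out of the error log; the log path below is only an example and will differ per setup:

    mysql -e "SHOW GLOBAL VARIABLES LIKE 'version_source_revision';"
    grep 'source revision' /var/lib/mysql/*.err | tail -n 1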
            

marko Marko Mäkelä added a comment -

@marko - I can't test your fixes outside of official releases because this is my production server. I can't even update these kinds of things too often, because it upsets clients when there are frequent technical interruptions. At the moment, I still have version 10.6.10 installed... But when I update in some time to 10.6.13 or maybe even 10.6.14, I'll let you know what the situation looks like on my end. However, from another server, I can see that 10.6.13 does seem to be faster, even post-installation.

emil89 Emilian Lanowski added a comment -

Ok, I updated 10.6.10 to 10.6.13 and I can confirm two things:

1. MariaDB now stops within a few seconds
2. "InnoDB: Buffer pool(s) load" - it now takes 1 minute; earlier it was 60 minutes lol (a way to watch the progress is sketched at the end of this comment)

What did not improve:
1. Backups - still taking longer than on earlier versions like 10.4.x

What I didn't check:
1. The command "/usr/bin/mysqlcheck -uda_admin -p`grep "^passwd=" /usr/local/directadmin/conf/mysql.conf | cut -d= -f2` --skip-write-binlog --optimize --all-in-1 --all-databases"
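
Editorial sketch for item 2 above: the progress of the buffer pool load can be followed through a standard status variable instead of watching the error log:

    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_load_status';"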

emil89 Emilian Lanowski added a comment -

I checked the command "/usr/bin/mysqlcheck -uda_admin -p`grep "^passwd=" /usr/local/directadmin/conf/mysql.conf | cut -d= -f2` --skip-write-binlog --optimize --all-in-1 --all-databases" and it works faster!

About backups - right away I can see they are still slower: ~5 minutes more per backup for me than before the upgrade.
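
One way to put a number on the per-backup slowdown (an editorial sketch, not from the original comment; dbname is a placeholder) is to time a single dump in isolation and compare it across server versions:

    time mysqldump --single-transaction dbname > /dev/null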

emil89 Emilian Lanowski added a comment -

            In MDEV-30986 I just found out that setting innodb_random_read_ahead=ON may improve read performance for some workloads. It is disabled by default. However, I would not expect that setting to speed up any logical backups, which basically are SELECT * FROM tablename (sequential scan of the InnoDB clustered index pages).
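
That setting is dynamic, so (editorial sketch, assuming a client with SUPER privilege) it can be tried and reverted without a restart:

    mysql -e "SET GLOBAL innodb_random_read_ahead=ON;"
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb_random_read_ahead';"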

            Related to MDEV-30986, I have implemented a prototype of logical read ahead, which could augment or replace the current linear read-ahead. That would hopefully speed up index or table scans. I believe that the linear read-ahead could only work well if the clustered index leaf pages are in sequential order. OPTIMIZE TABLE should help in achieving that. I suppose that it is what your mysqlcheck command would invoke.
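
A minimal illustration of that rebuild suggestion (db.t is a placeholder; with InnoDB this is internally executed as a table recreate plus analyze):

    mysql -e "OPTIMIZE TABLE db.t;"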

            Do you get acceptable read performance when setting innodb_flush_method=fsync? Note that with that setting, the InnoDB buffer pool will typically be extended by the Linux file system cache. It might be better to allocate more memory to the InnoDB buffer pool. One of the purported use cases of InnoDB buffer pool resizing was to temporarily increase the buffer pool size for some heavy shifting, without having to restart the server.
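
Editorial note, values are examples only: innodb_flush_method is not dynamic, so trying fsync means editing the configuration and restarting, whereas the buffer pool can be resized online in multiples of innodb_buffer_pool_chunk_size:

    # in my.cnf, then restart the server:
    #   [mysqld]
    #   innodb_flush_method = fsync
    # online resize, no restart needed (example value = 128 GiB):
    mysql -e "SET GLOBAL innodb_buffer_pool_size = 137438953472;"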

marko Marko Mäkelä added a comment -
            serg Sergei Golubchik made changes -
            Resolution Incomplete [ 4 ]
            Status Needs Feedback [ 10501 ] Closed [ 6 ]

People

  Assignee: Unassigned
  Reporter: emil89 Emilian Lanowski
  Votes: 0
  Watchers: 6
