MariaDB Server / MDEV-16617

After upgrading from 5.5.52 to 10.1.32 getting [Warning] InnoDB: Difficult to find free blocks in the buffer pool

Details

    Description

      The full warning is [Warning] InnoDB: Difficult to find free blocks in the buffer pool (21 search iterations)! 0 failed attempts to flush a page!

      The four databases reporting this show the exact same warning, including the 21 search iterations.

      We have increased the buffer pool size on these servers and are still getting the warnings.

      Buffer pool sizes on these 5 databases:
      server 1:
      innodb-buffer-pool-size=1024M
      innodb_buffer_pool_size=1024M #Set to 70% of available RAM

      server 2:
      innodb-buffer-pool-size=1024M
      innodb_buffer_pool_size=2100M #Set to 70% of available RAM

      server 3:
      innodb-buffer-pool-size=2048M
      innodb_buffer_pool_size=2048M #Set to 70% of available RAM

      server 4:
      innodb-buffer-pool-size=1024M
      innodb_buffer_pool_size=2450M #Set to 70% of available RAM

      server 5:
      innodb-buffer-pool-size=35695M
      innodb_buffer_pool_size=90000M #Set to 70% of available RAM

      The first buffer pool size listed was the size of the buffer pool used during the upgrade and when we first detected the warning in the logs. The second buffer pool setting is what we increased the value to, as the warning recommends, yet the increase did not resolve the warning on any of those databases.
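      For reference, the value the running server is actually using can be confirmed with a query like the following (a sketch; when the setting appears twice in the configuration file, the last occurrence wins):

      -- Effective buffer pool size of the running server, in MB.
      SELECT @@GLOBAL.innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;
      SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';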

      Since the upgrade, whenever we restart a database, this warning appears on some of the servers within 24 hours. A couple of databases have generated the warning more than a week after the last restart. Some of these databases have many gigabytes of buffer pool assigned.

      This may be related to https://jira.mariadb.org/browse/MDEV-16339.

      Please advise on the significance of this warning and how to resolve it.

          Activity

            It looks like the warning didn't exist in 5.5 at all, so it's not in itself surprising that it only started appearing with 10.1.
            I'll redirect the other questions to marko.

            elenst Elena Stepanova added a comment

            The message was recently reformatted in MDEV-14776. It was introduced in MySQL 3.23.44 already (October 30, 2001). It used to be this monstrosity:

            InnoDB: Warning: difficult to find free blocks from
            InnoDB: the buffer pool (%lu search iterations)! Consider
            InnoDB: increasing the buffer pool size.
            

            The InnoDB buffer pool is also used for storing information about transactional InnoDB locks.

            In the upgrade, could it be possible that some transactions were recovered in XA PREPARE state and never committed or rolled back? In that case, the buffer pool could be filled with record locks for the lock waits of new connections waiting for the recovered transactions to end.

            I do not know where you got the 70% figure from. It is only a rule of thumb.

            If you are doing a huge UPDATE or DELETE transaction, that would create a large number of record locks. MDEV-16232 should eventually fix those to use implicit locking when there are no conflicts, just like INSERT has always worked.
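            One way to check for this on the affected servers (a sketch; XA RECOVER and INFORMATION_SCHEMA.INNODB_TRX are available in 10.1):

            -- List transactions that were recovered in XA PREPARE state; they remain
            -- until an explicit XA COMMIT or XA ROLLBACK is issued for them.
            XA RECOVER;

            -- Long-running or recovered transactions and how much lock memory they hold.
            SELECT trx_id, trx_state, trx_started, trx_lock_structs,
                   trx_lock_memory_bytes, trx_rows_locked, trx_rows_modified
            FROM INFORMATION_SCHEMA.INNODB_TRX
            ORDER BY trx_started;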

            marko Marko Mäkelä added a comment

            Hello marko,

            In response to your post
            In the upgrade, could it be possible that some transactions were recovered in XA PREPARE state and never committed or rolled back? In that case, the buffer pool could be filled with record locks for the lock waits of new connections waiting for the recovered transactions to end.

            Yes, it is possible. I believe these databases did not run SET GLOBAL innodb_fast_shutdown=0 prior to being upgraded. Also, they were not restricted from client connections prior to shutdown. How can we tell if these databases suffer from what you suggest? Is there a way to flush the buffer pool completely? What do you recommend?
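            For reference, the slow-shutdown sequence we skipped would have looked roughly like this (a sketch; the server itself would then be stopped with mysqladmin shutdown or the init script):

            -- Make the final shutdown a slow one, so pending purge and
            -- change-buffer work is completed before the upgrade.
            SET GLOBAL innodb_fast_shutdown = 0;
            -- Then stop the server, e.g. mysqladmin shutdown.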

            Thanks,

            Joe Oreste

            joe.oreste@xpressbet.com Joseph Oreste (Inactive) added a comment

            joe.oreste@xpressbet.com, sorry, I missed your feedback.

            The buffer pool should by default be empty at a startup that follows a clean shutdown. (Crash recovery would read pages into the buffer pool).

            InnoDB uses its buffer pool also for some things other than data pages. If I remember correctly, some data structures for the adaptive hash index can be allocated in the buffer pool. And the InnoDB table and record locks are allocated from the buffer pool.

            The innodb_adaptive_hash_index often causes problems, so it might be helpful to disable it.
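            Disabling it does not require a restart (a sketch):

            -- Disable the adaptive hash index at runtime; adding
            -- innodb_adaptive_hash_index=OFF to my.cnf makes it permanent.
            SET GLOBAL innodb_adaptive_hash_index = OFF;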

            If any incomplete transactions were recovered from the undo logs at startup, these transactions would hold table IX locks until they are committed or rolled back. If some new transaction is attempting to acquire a lock that conflicts with one of the recovered transactions, then two sets of record locks would be created for each conflict: one for the lock that the recovered transaction is holding, and a wait lock request for the new transaction.

            Normally, recovered incomplete transactions are rolled back by a background thread. If any transactions were recovered in the XA PREPARE state, those would remain in the system until an explicit XA COMMIT or XA ROLLBACK is executed for the transaction.
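            If such transactions are present, each one has to be finished explicitly (a sketch; 'example_xid' is a placeholder for whatever XA RECOVER returns):

            -- Show transactions recovered in XA PREPARE state.
            XA RECOVER;
            -- Finish each one by its xid, either committing or rolling it back.
            XA ROLLBACK 'example_xid';
            -- or: XA COMMIT 'example_xid';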

            To analyze the problem, I would need more information. Possibly, the output from the following could help to see what is occupying the buffer pool.

            SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
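            A breakdown by page type could also help (a sketch; INFORMATION_SCHEMA.INNODB_BUFFER_PAGE exists in 10.1, but scanning it can be slow on a large buffer pool):

            -- Count buffer pool pages by type to see what is occupying them.
            SELECT page_type, COUNT(*) AS pages
            FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
            GROUP BY page_type
            ORDER BY pages DESC;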
            

            marko Marko Mäkelä added a comment

            This looks like it may be a duplicate of MDEV-19176.

            GeoffMontee Geoff Montee (Inactive) added a comment

            GeoffMontee, I do not think that this is a duplicate of MDEV-19176, because I see no mention of a hang during startup. It looks like we have an explanation of this in MDEV-16339, related to an XtraDB parameter innodb_empty_free_list_algorithm=BACKOFF. That parameter looks like it could cause buf_LRU_get_free_block() to sleep, relying on page cleaners to free up more blocks. That parameter is not supported by MariaDB 10.2 or later.
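            On an affected 10.1 server, the setting could be checked and switched back to the legacy behaviour (a sketch; the variable is XtraDB-specific and not present in 10.2 or later):

            -- Check the current algorithm (backoff is the 10.1/XtraDB default).
            SHOW GLOBAL VARIABLES LIKE 'innodb_empty_free_list_algorithm';
            -- Switch to the legacy algorithm, which does not make
            -- buf_LRU_get_free_block() wait for the page cleaner.
            SET GLOBAL innodb_empty_free_list_algorithm = 'legacy';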

            marko Marko Mäkelä added a comment

            Ah, OK. My mistake. Thanks!

            GeoffMontee Geoff Montee (Inactive) added a comment

            People

              marko Marko Mäkelä
              joe.oreste@xpressbet.com Joseph Oreste (Inactive)
