[MDEV-16617] After upgrading from 5.5.52 to 10.1.32 getting [Warning] InnoDB: Difficult to find free blocks in the buffer pool Created: 2018-06-28  Updated: 2020-02-28  Resolved: 2019-04-18

Status: Closed
Project: MariaDB Server
Component/s: Configuration, Storage Engine - InnoDB, Storage Engine - XtraDB
Affects Version/s: 10.3.4, 10.1.31, 10.2.13
Fix Version/s: N/A

Type: Bug Priority: Major
Reporter: Joseph Oreste (Inactive) Assignee: Marko Mäkelä
Resolution: Incomplete Votes: 0
Labels: None
Environment:

Development


Issue Links:
Relates
relates to MDEV-14776 InnoDB Monitor output generated by sp... Closed
relates to MDEV-16675 Unnecessary explicit lock acquisition... Closed
relates to MDEV-11215 Several locks taken to same record in... Stalled
relates to MDEV-16339 Upgrading to 10.1.32 shows innodb_emp... Closed

 Description   

The full warning is [Warning] InnoDB: Difficult to find free blocks in the buffer pool (21 search iterations)! 0 failed attempts to flush a page!

The four databases reporting this show the exact same warning, including the 21 search iterations.

We have increased the buffer pool size on these servers and are still getting the warnings.

Buffer pool sizes on these 5 databases:
server 1:
innodb-buffer-pool-size=1024M
innodb_buffer_pool_size=1024M #Set to 70% of available RAM

server 2:
innodb-buffer-pool-size=1024M
innodb_buffer_pool_size=2100M #Set to 70% of available RAM

server 3:
innodb-buffer-pool-size=2048M
innodb_buffer_pool_size=2048M #Set to 70% of available RAM
server 4:
innodb-buffer-pool-size=1024M
innodb_buffer_pool_size=2450M #Set to 70% of available RAM
server 5:
innodb-buffer-pool-size=35695M
innodb_buffer_pool_size=90000M #Set to 70% of available RAM

The first buffer pool size listed is the size in use during the upgrade, when we first detected the warning in the logs. The second value is what we increased the setting to, as the warning recommends, yet the increase did not resolve the warning on any of the databases where it was applied.

After the upgrade, whenever we restart the database we get this warning on some of the servers within 24 hours. A couple of databases have generated the warning more than a week after the last restart. Some of these databases have many gigabytes of buffer pool assigned.

Maybe related to https://jira.mariadb.org/browse/MDEV-16339

Please advise on the significance of this warning and how to resolve it.



 Comments   
Comment by Elena Stepanova [ 2018-07-02 ]

It looks like the warning didn't exist in 5.5 at all, so it's not in itself surprising that it only started appearing with 10.1.
I'll re-address other questions to marko.

Comment by Marko Mäkelä [ 2018-07-02 ]

The message was recently reformatted in MDEV-14776. It was introduced as early as MySQL 3.23.44 (October 30, 2001). It used to be this monstrosity:

InnoDB: Warning: difficult to find free blocks from
InnoDB: the buffer pool (%lu search iterations)! Consider
InnoDB: increasing the buffer pool size.

The InnoDB buffer pool is also used for storing information about transactional InnoDB locks.

In the upgrade, could it be possible that some transactions were recovered in XA PREPARE state and never committed or rolled back? In that case, the buffer pool could be filled with record locks for the lock waits of new connections waiting for the recovered transactions to end.

I do not know where you got the 70% figure from. It is only a rule of thumb.

If you are doing a huge UPDATE or DELETE transaction, that would create a large number of record locks. MDEV-16232 should eventually fix those to use implicit locking when there are no conflicts, just like INSERT has always worked.
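As an illustrative sketch of checking for the recovered XA PREPARE transactions mentioned above (these statements are standard MariaDB/MySQL SQL, not commands given in this ticket):

```sql
-- List transactions recovered in PREPARED state; an empty result
-- means no XA transaction is waiting for an explicit COMMIT/ROLLBACK.
XA RECOVER;

-- The TRANSACTIONS section of this output shows recovered transactions
-- and the number of lock structs and heap memory each one holds.
SHOW ENGINE INNODB STATUS;
```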

Comment by Joseph Oreste (Inactive) [ 2018-07-03 ]

Hello marko,

In response to your post
In the upgrade, could it be possible that some transactions were recovered in XA PREPARE state and never committed or rolled back? In that case, the buffer pool could be filled with record locks for the lock waits of new connections waiting for the recovered transactions to end.

Yes, it is possible. I believe these databases did not run SET GLOBAL innodb_fast_shutdown=0 prior to being upgraded. Also, client connections were not being blocked prior to shutdown. How can we tell if these databases suffer from what you suggest? Is there a way to flush the buffer pool completely? What do you recommend?

Thanks,

Joe Oreste

Comment by Marko Mäkelä [ 2019-04-18 ]

joe.oreste@xpressbet.com, sorry, I missed your feedback.

The buffer pool should by default be empty at a startup that follows a clean shutdown. (Crash recovery would read pages into the buffer pool).

InnoDB uses its buffer pool also for some things other than data pages. If I remember correctly, some data structures for the adaptive hash index can be allocated in the buffer pool. And the InnoDB table and record locks are allocated from the buffer pool.

innodb_adaptive_hash_index often causes problems, so it might be helpful to disable it.
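For example (a minimal sketch, not a statement taken from this ticket), the adaptive hash index can be switched off at runtime:

```sql
-- Disable the adaptive hash index without a restart; to make the
-- change persistent, also set innodb_adaptive_hash_index=OFF in my.cnf.
SET GLOBAL innodb_adaptive_hash_index = OFF;
```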

If any incomplete transactions were recovered from the undo logs at startup, these transactions would hold table IX locks until they are committed or rolled back. If some new transaction is attempting to acquire a lock that conflicts with one of the recovered transactions, then two sets of record locks would be created for each conflict: one for the lock that the recovered transaction is holding, and a wait lock request for the new transaction.

Normally, recovered incomplete transactions are rolled back by a background thread. If any transactions were recovered in the XA PREPARE state, those would remain in the system until an explicit XA COMMIT or XA ROLLBACK is executed for the transaction.
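To illustrate clearing such a recovered prepared transaction (the xid shown here is hypothetical; substitute the actual value reported by XA RECOVER):

```sql
-- Identify transactions stuck in PREPARED state:
XA RECOVER;

-- Terminate each one explicitly, e.g. with the xid from the output above:
XA ROLLBACK 'example-xid';  -- 'example-xid' is a placeholder
```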

To analyze the problem, I would need more information. Possibly the output of the following could help show what is occupying the buffer pool.

SELECT * FROM INFORMATION_SCHEMA.INNODB_BUFFER_POOL_STATS;
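A complementary query (an illustrative addition, not part of the original comment) breaks the buffer pool down by page type, which can hint at what kinds of pages dominate it. Note that querying INNODB_BUFFER_PAGE can itself be expensive on a busy server.

```sql
-- Count buffer pool pages and their payload bytes by page type.
SELECT page_type,
       COUNT(*)       AS pages,
       SUM(data_size) AS total_data_bytes
FROM information_schema.INNODB_BUFFER_PAGE
GROUP BY page_type
ORDER BY pages DESC;
```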

Comment by Geoff Montee (Inactive) [ 2020-02-28 ]

This looks like it may be a duplicate of MDEV-19176.

Comment by Marko Mäkelä [ 2020-02-28 ]

GeoffMontee, I do not think that this is a duplicate of MDEV-19176, because I see no mention of a hang during startup. It looks like we have an explanation of this in MDEV-16339, related to an XtraDB parameter innodb_empty_free_list_algorithm=BACKOFF. That parameter looks like it could cause buf_LRU_get_free_block() to sleep, relying on page cleaners to free up more blocks. That parameter is not supported by MariaDB 10.2 or later.

Comment by Geoff Montee (Inactive) [ 2020-02-28 ]

Ah, OK. My mistake. Thanks!

Generated at Thu Feb 08 08:30:15 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.