[MDEV-23166] mysql.err log file is spammed with message "line 593 for 241.00 seconds the semaphore" Created: 2020-07-14 Updated: 2022-01-27 Resolved: 2022-01-24 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Storage Engine - InnoDB |
| Affects Version/s: | 10.1.41 |
| Fix Version/s: | 10.5.14, 10.6.6, 10.7.2, 10.8.1 |
| Type: | Bug | Priority: | Major |
| Reporter: | sushma k | Assignee: | Marko Mäkelä |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Environment: |
mysqld, Version: 10.1.41-MariaDB-0+deb9u1 (Debian 9.9) |
||
| Description |
|
We have a Linux Debian VM on which mysqld, version 10.1.41-MariaDB-0+deb9u1, is running. Problem faced: we tried to empty the error log file, but it quickly fills up again. We need your assistance to understand whether this issue is related to a deadlock or has some other cause. I am sharing some details of the log pattern below; the mysql-slow and mysql-err log files are huge, so I was not able to upload them. |
| Comments |
| Comment by Marko Mäkelä [ 2020-07-14 ] |
|
sushma1, I agree that the InnoDB watchdog output is rather useless. It rarely provides enough information to find the actual cause of the hang. One message suggests that the buffer pool might be configured too small, or that something is hogging the buffer pool. I think that it would be very useful to attach a debugger to the server at the time of such a hang, to produce a stack trace of all active threads (thread apply all backtrace in GDB). That would provide much better clues to the cause of the hang. It could be even more useful to collect multiple such traces. http://poormansprofiler.org/ could be useful for this. |
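A minimal sketch of the stack-trace capture Marko describes, in the spirit of poormansprofiler.org. The `pmp_cmd` helper name and the aggregation pipeline are my illustration, not from the ticket; it assumes gdb is installed and mysqld is running.

```shell
# Hypothetical helper: build the one-shot gdb invocation that dumps a
# backtrace of every thread in a running mysqld.  Attaching briefly
# pauses the server, so a single --batch command keeps the pause short.
pmp_cmd() {
  local pid="$1"
  printf "gdb --batch -p %s -ex 'thread apply all backtrace'" "$pid"
}

# Example usage (run as root while the hang is in progress):
#   eval "$(pmp_cmd "$(pidof mysqld)")" > stacks.txt
# Aggregating the innermost frames shows where threads are stuck,
# roughly as the poor man's profiler does:
#   grep '^#0' stacks.txt | sort | uniq -c | sort -rn | head
```

Collecting several such snapshots a few seconds apart, as Marko suggests, shows whether the threads are truly stuck on the same semaphore or merely slow.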
| Comment by sushma k [ 2020-07-15 ] |
|
Can this happen due to performance load or slow I/O disks? |
| Comment by sushma k [ 2020-07-15 ] |
|
Adding to my above query: will setting innodb_adaptive_hash_index=0 and restarting be helpful? |
| Comment by sushma k [ 2020-07-16 ] |
|
Hi Marko, can I please get a reply to my questions? Will attaching a debugger be practical in this case? As soon as the issue is encountered, the server falls over (7+ GB per minute of failure logging), and we don't have any known way to reproduce it; any debug trace is likely to be huge, too. Please let me know whether I should go ahead with collecting the stack trace, or whether there is another data-collection plan. |
| Comment by sushma k [ 2020-07-16 ] |
|
Adding another question to this: is it the innodb_buffer_pool_size=1G parameter in the my.cfg file that needs to be tweaked (followed by a service restart), and up to what value can we set it? |
| Comment by Marko Mäkelä [ 2021-11-11 ] |
|
Sorry, I had missed the updates in this ticket. But I see that the stack trace output that I requested was not provided. The 10.1 series reached its end of life in October 2020. How the parameters should be configured depends on the workload; our support engineers could help with that. The LRU eviction was improved in MariaDB 10.5, in |
| Comment by Marko Mäkelä [ 2022-01-24 ] |
|
I think that |