[MDEV-25965] mariadb crashes and cannot resync - [ERROR] mysqld got signal 11 ; Created: 2021-06-18  Updated: 2021-08-04

Status: Open
Project: MariaDB Server
Component/s: None
Affects Version/s: 10.5.10
Fix Version/s: None

Type: Bug Priority: Major
Reporter: Alain Bourgeois Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: crash, galera
Environment:

debian linux 4.19.0-16-amd64 #1 SMP Debian 4.19.181-1 (2021-03-19) x86_64 GNU/Linux


Attachments: error.log (text file), mariadb-slow.log (text file)

 Description   

Once every ~20 days, an active node in the Galera cluster crashes, always at roughly the same time (0:06), although no cron job runs at that time. The server has 32 GB of RAM and enough resources.

Log shows:
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.5.10-MariaDB-1:10.5.10+maria~buster-log
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=26
max_threads=122
thread_count=32
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 399625 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f10b42c31d8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f153cfc5d58 thread_stack 0x30000
/usr/sbin/mariadbd(my_print_stacktrace+0x2e)[0x5559a6bd34ee]
/usr/sbin/mariadbd(handle_fatal_signal+0x485)[0x5559a665cca5]
2021-06-18 0:08:29 7792507 [Warning] Aborted connection 7792507 to db: 'sentry' user: 'sentry' host: 'sentry.lan.zetescards.be' (Lock wait timeout exceeded; try restarting transaction)

Then all transactions time out until:
2021-06-18 7:45:04 0 [Warning] Aborted connection 0 to db: 'unconnected' user: 'unauthenticated' host: 'connecting host' (Too many connections)
The two other nodes in the Galera cluster continue to work.
After a reboot, running galera_recovery and setting the seqno in grastate.dat, MariaDB always reports that WSREP has not yet prepared the node for application use.
The node cannot resync:
2021-06-18 10:27:11 3 [Note] WSREP: SST succeeded for position 8ef7bece-c55f-11e8-8e85-1a29ec51b021:839632536
2021-06-18 10:27:11 0 [Note] WSREP: Joiner monitor thread ended with total time 1 sec
2021-06-18 10:27:11 2 [Note] WSREP: Installed new state from SST: 8ef7bece-c55f-11e8-8e85-1a29ec51b021:839632536
2021-06-18 10:27:11 2 [Note] WSREP: Receiving IST: 432 writesets, seqnos 839632537-839632968
2021-06-18 10:27:11 0 [Note] WSREP: ####### IST applying starts with 839632537
2021-06-18 10:27:11 0 [Note] WSREP: ####### IST current seqno initialized to 839632537
2021-06-18 10:27:11 0 [Note] WSREP: Receiving IST... 0.0% ( 0/432 events) complete.
2021-06-18 10:27:11 0 [Note] WSREP: Service thread queue flushed.
2021-06-18 10:27:11 0 [Note] WSREP: ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:839632536, protocol version: 5
210618 10:27:11 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.5.10-MariaDB-1:10.5.10+maria~buster-log
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=122
thread_count=5
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 399625 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7f23c4000c18
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f2848150d98 thread_stack 0x30000
...
2021-06-18 10:27:12 0 [Note] WSREP: (f90f0b5e-af4b, 'tcp://0.0.0.0:4567') turning message relay requesting off
2021-06-18 10:28:19 0 [Note] InnoDB: Buffer pool(s) load completed at 210618 10:28:19
2021-06-18 10:31:26 0 [Warning] InnoDB: A long semaphore wait:
--Thread 139810985170688 has waited at lock0lock.cc line 3756 for 255.00 seconds the semaphore:
Mutex at 0x55dc19ef7f80, Mutex LOCK_SYS created /home/buildbot/buildbot/build/mariadb-10.5.10/storage/innobase/lock/lock0lock.cc:461, lock var 2

=> The only solution found so far is to wipe the data folder and perform a full SST from another running node.
Any hints?
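As a side note, the memory estimate printed in the crash report can be reproduced from the variables it lists. A small illustrative sketch follows; sort_buffer_size does not appear in the log, so the 2 MiB value below is an assumed default, which is why the result differs slightly from the log's 399625 K:

```python
# Reproduce the crash report's memory estimate:
#   key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
key_buffer_size = 134217728        # from the log
read_buffer_size = 131072          # from the log
max_threads = 122                  # from the log
sort_buffer_size = 2 * 1024 * 1024  # ASSUMED default; not shown in the log

total_kib = (key_buffer_size
             + (read_buffer_size + sort_buffer_size) * max_threads) // 1024
print(total_kib)  # ~396544 KiB with the assumed sort_buffer_size;
                  # the log's 399625 K implies a slightly larger one
```

Either way, the estimate (~400 MB plus the InnoDB buffer pool) is far below the 32 GB available, so memory pressure alone does not explain the crash.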



 Comments   
Comment by jules potvin [ 2021-08-03 ]

I had this issue running Ubuntu 20.04.2 LTS. I tuned sysctl.conf and rebooted, and haven't had the issue since. I suspect an increase to fs.file-max resolved it:

fs.file-max = 2097152
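For reference, a sketch of checking and applying that change (the 2097152 value is the commenter's choice, not an upstream recommendation):

```shell
# Current limit, and current usage (allocated / unused / max):
cat /proc/sys/fs/file-max
cat /proc/sys/fs/file-nr

# Apply at runtime (takes effect immediately, lost on reboot):
sudo sysctl -w fs.file-max=2097152

# Persist across reboots:
echo 'fs.file-max = 2097152' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
```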

Comment by Alain Bourgeois [ 2021-08-03 ]

In my case it is something else:
root@maria1:~# sysctl -a | grep file-max
fs.file-max = 9223372036854775807
root@maria1:~# sysctl -a | grep nr_open
fs.nr_open = 1048576

Comment by jules potvin [ 2021-08-03 ]

The file-max kernel parameter sets the limit on open file descriptors, and file-nr reports the current number of open file descriptors. You sure have a lot of digits to fill up fs.file-max!

I would start by tuning down your fs.file-max, since you do not have the resources to reach this value (not to mention it is exactly the largest signed 64-bit integer, 9223372036854775807; some apps will not be able to check it or do calculations with it).
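A quick illustrative check that the reported value sits exactly at INT64_MAX, the edge of signed 64-bit arithmetic:

```python
import struct

FILE_MAX = 9223372036854775807     # the value reported by sysctl
assert FILE_MAX == 2**63 - 1       # exactly INT64_MAX

# Packs into a signed 64-bit field; one more overflows:
struct.pack("<q", FILE_MAX)        # fine
try:
    struct.pack("<q", FILE_MAX + 1)
except struct.error:
    print("overflow")
```

So any application that parses this sysctl into a signed 64-bit integer is already at the limit, and any arithmetic on top of it overflows.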

You might also want to validate that every setting in your sysctl.conf is realistic and reasonable for your setup.

Comment by Alain Bourgeois [ 2021-08-03 ]

This setting has been in place for years (since MariaDB 10.1); are you sure it would only now cause a crash?

Comment by jules potvin [ 2021-08-03 ]

I have been developing a beta app (mwlists.com) and had no issues on 10.5.2. After I updated to 10.5.10, this error started every 1-2 days at exactly 4 am, but only on 2 of my 3 nodes. I was running a large job at 4 am, so this made sense. I had tuned one node's sysctl.conf, but not the two that were crashing. Once I fixed up sysctl.conf on those, they haven't crashed either. I'm at 10 days with no crash, which is the longest I've run.

At this point you have nothing to lose, since your fs.file-max makes no sense whatsoever. I picked 2097152 because that's what many write-intensive web apps were using when I googled it.

That said, maybe it is a bug, and I'll be back here in 20 days with logs.

Comment by Alain Bourgeois [ 2021-08-04 ]

Another option is that it is fixed in 10.5.11. Since this upgrade, no crash yet.

Generated at Thu Feb 08 09:41:44 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.