Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Won't Fix
Affects Version: 10.0.27-galera
Description
One node (the master) in a master-slave setup (gcs.fc_master_slave = YES; pc.ignore_sb = TRUE) crashed after it desynced itself from the cluster.
I am using Xtrabackup to take backups on the slave node.
It may be related to this bug, which I have already found:
https://bugs.launchpad.net/percona-xtradb-cluster/+bug/1611728
The only thing that does not match my setup is that I do not use the option
inno-backup-opts='--no-backup-locks' in the config.
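For reference, the relevant part of the Galera configuration on this node looks roughly like this (a sketch; node names and cluster address are placeholders, not my exact values):

[mysqld]
wsrep_on                = ON
wsrep_provider          = /usr/lib/galera/libgalera_smm.so
wsrep_provider_options  = "gcs.fc_master_slave=YES;pc.ignore_sb=TRUE"
wsrep_cluster_address   = gcomm://master-node,slave-node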
Here are the corresponding log lines:
160920 0:00:01 [Note] WSREP: Member 0.0 (aletheia) desyncs itself from group
160920 0:00:01 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 64409986)
160920 0:00:01 [Note] WSREP: Provider paused at e9f93a90-6927-11e5-a267-ce9ade7c14b0:64409986 (7023)
160920 0:00:10 [Note] WSREP: resuming provider at 7023
160920 0:00:10 [Note] WSREP: Provider resumed.
160920 0:00:10 [Note] WSREP: Member 0.0 (aletheia) resyncs itself to group
160920 0:00:10 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 64410083)
160920 0:00:10 [Note] WSREP: Member 0.0 (aletheia) synced with group.
160920 0:00:10 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 64410083)
160920 0:00:11 [Note] WSREP: Member 0.0 (aletheia) desyncs itself from group
160920 0:00:11 [Note] WSREP: Shifting SYNCED -> DONOR/DESYNCED (TO: 64410123)
160920 0:00:15 [Note] WSREP: Synchronized with group, ready for connections
160920 0:00:18 [Note] WSREP: Provider paused at e9f93a90-6927-11e5-a267-ce9ade7c14b0:64410207 (7249)
160920 0:00:22 [Note] WSREP: resuming provider at 7249
160920 0:00:22 [Note] WSREP: Provider resumed.
160920 0:00:22 [Note] WSREP: Member 0.0 (aletheia) resyncs itself to group
160920 0:00:22 [Note] WSREP: Shifting DONOR/DESYNCED -> JOINED (TO: 64410793)
160920 0:00:23 [Note] WSREP: Member 0.0 (aletheia) desyncs itself from group
160920 0:00:23 [Note] WSREP: Shifting JOINED -> DONOR/DESYNCED (TO: 64410793)
160920 0:00:24 [ERROR] WSREP: FSM: no such a transition JOINED -> DONOR
160920 0:00:24 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.0.27-MariaDB-1~trusty-wsrep
key_buffer_size=268435456
read_buffer_size=131072
max_used_connections=5
max_threads=102
thread_count=3
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 486142 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0x7f1627a52008
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f1672ed5df0 thread_stack 0x30000
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7f16728d3dae]
/usr/sbin/mysqld(handle_fatal_signal+0x433)[0x7f16723ff573]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7f1670b9d330]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7f16701f4c37]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7f16701f8028]
/usr/lib/galera/libgalera_smm.so(_ZN6galera3FSMINS_10Replicator5StateENS_13ReplicatorSMM10TransitionENS_10EmptyGuardENS_11EmptyActionEE8shift_toES2_+0x17c)[0x7f165e994f2c]
/usr/lib/galera/libgalera_smm.so(_ZN6galera13ReplicatorSMM6desyncEv+0x70)[0x7f165e98eaf0]
/usr/lib/galera/libgalera_smm.so(galera_desync+0x19)[0x7f165e99d4d9]
/usr/sbin/mysqld(_ZN16Global_read_lock34make_global_read_lock_block_commitEP3THD+0x3c8)[0x7f16724dae18]
/usr/sbin/mysqld(_Z20reload_acl_and_cacheP3THDyP10TABLE_LISTPi+0x37f)[0x7f167236954f]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0xb36)[0x7f167226ebb6]
/usr/sbin/mysqld(+0x415f8b)[0x7f1672278f8b]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1fcd)[0x7f167227b59d]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x2fd)[0x7f167227c36d]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x34b)[0x7f167234ce4b]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x7f167234cf30]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x8184)[0x7f1670b95184]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f16702b837d]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7f1621013020): is an invalid pointer
Connection ID (thread ID): 381
Status: NOT_KILLED
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_co$
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
We think the query pointer is invalid, but we will try to print it anyway.
Query: FLUSH TABLES WITH READ LOCK
160920 00:00:24 mysqld_safe Number of processes running now: 0
160920 00:00:24 mysqld_safe WSREP: not restarting wsrep node automatically
160920 00:00:24 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
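From the stack trace, the crash is triggered by the FLUSH TABLES WITH READ LOCK issued by the backup job: make_global_read_lock_block_commit() calls galera_desync(), and the provider rejects the JOINED -> DONOR transition and aborts. My understanding (not a verified reproducer) is that something like the following, run while the node is still in the JOINED state after the previous backup pause, hits the same code path:

-- issued by the backup tool while the node has not yet returned to SYNCED
FLUSH TABLES WITH READ LOCK;   -- triggers a wsrep desync; JOINED -> DONOR is rejected and mysqld aborts
UNLOCK TABLES;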