WSREP_SST: [INFO] rsync SST completed on donor (20230925 06:53:55.214)
2023-09-25 6:53:55 0 [Note] WSREP: Donor monitor thread ended with total time 2 sec
2023-09-25 6:53:55 0 [Note] WSREP: (492dcd42-89d1, 'tcp://0.0.0.0:16002') turning message relay requesting off
2023-09-25 6:53:56 0 [Note] WSREP: async IST sender served
2023-09-25 6:53:56 0 [Note] WSREP: 1.0 (centos74-amd64): State transfer from 0.0 (centos74-amd64) complete.
2023-09-25 6:53:56 0 [Note] WSREP: Member 1.0 (centos74-amd64) synced with group.
2023-09-25 6:53:58 0 [Note] WSREP: Member 1.0 (centos74-amd64) desyncs itself from group
2023-09-25 6:53:58 0 [Note] WSREP: Member 1.0 (centos74-amd64) resyncs itself to group.
2023-09-25 6:53:58 0 [Note] WSREP: Member 1.0 (centos74-amd64) synced with group.
2023-09-25 6:53:58 0 [Note] WSREP: Member 1.0 (centos74-amd64) desyncs itself from group
2023-09-25 6:53:58 0 [Note] WSREP: Member 1.0 (centos74-amd64) resyncs itself to group.
2023-09-25 6:53:58 0 [Note] WSREP: Member 1.0 (centos74-amd64) synced with group.
2023-09-25 6:53:58 1 [ERROR] Slave SQL: Error 'Unknown table 'test.t1'' on query. Default database: 'test'. Query: 'DROP TABLE t1', Internal MariaDB error code: 1051
2023-09-25 6:53:58 1 [Warning] WSREP: Ignoring error 'Unknown table 'test.t1'' on query. Default database: 'test'. Query: 'DROP TABLE t1', Error_code: 1051
2023-09-25 6:53:58 1 [ERROR] Slave SQL: Error 'Unknown SEQUENCE: 'test.sq2'' on query. Default database: 'test'. Query: 'DROP SEQUENCE sq2', Internal MariaDB error code: 4091
2023-09-25 6:53:58 1 [Warning] WSREP: Ignoring error 'Unknown SEQUENCE: 'test.sq2'' on query. Default database: 'test'. Query: 'DROP SEQUENCE sq2', Error_code: 4091
2023-09-25 6:53:58 17 [ERROR] WSREP: FSM: no such a transition REPLICATING -> COMMITTED
230925 6:53:58 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.4.32-MariaDB-log source revision: 3ac25b480055e7e99e46a958c04f9ffb7a6d68cf
key_buffer_size=1048576
read_buffer_size=131072
max_used_connections=2
max_threads=153
thread_count=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 63557 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x563c9cbbc808
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f6d85e8dc40 thread_stack 0x49000
mysys/stacktrace.c:175(my_print_stacktrace)[0x563c9301d4de]
sql/signal_handler.cc:238(handle_fatal_signal)[0x563c92a6d687]
sigaction.c:0(__restore_rt)[0x7f6d8d4165e0]
/lib64/libc.so.6(gsignal+0x37)[0x7f6d8c86b1f7]
/lib64/libc.so.6(abort+0x148)[0x7f6d8c86c8e8]
src/fsm.hpp:56(galera::FSM<galera::TrxHandle::State, galera::TrxHandle::Transition>::shift_to(galera::TrxHandle::State, int))[0x7f6d892e4cda]
src/replicator_smm.cpp:1423(galera::ReplicatorSMM::commit_order_leave(galera::TrxHandleSlave&, wsrep_buf const*))[0x7f6d892f44bb]
detail/shared_count.hpp:371(galera_commit_order_leave)[0x7f6d892e0468]
/usr/sbin/mysqld(_ZN5wsrep18wsrep_provider_v2618commit_order_leaveERKNS_9ws_handleERKNS_7ws_metaERKNS_14mutable_bufferE+0x91)[0x563c930ab001]
src/wsrep_provider_v26.cpp:969(wsrep::wsrep_provider_v26::commit_order_leave(wsrep::ws_handle const&, wsrep::ws_meta const&, wsrep::mutable_buffer const&))[0x563c930a4ee0]
src/transaction.cpp:579(wsrep::transaction::ordered_commit())[0x563c92b5aae9]
sql/log.cc:7822(MYSQL_BIN_LOG::queue_for_group_commit(MYSQL_BIN_LOG::group_commit_entry*))[0x563c92b6001c]
sql/log.cc:7480(MYSQL_BIN_LOG::write_transaction_to_binlog(THD*, binlog_cache_mngr*, Log_event*, bool, bool, bool))[0x563c92b604b0]
sql/log.cc:516(binlog_cache_mngr::reset(bool, bool))[0x563c92b6066d]
sql/log.cc:1814(binlog_commit_flush_stmt_cache(THD*, bool, binlog_cache_mngr*))[0x563c92b60894]
sql/log.cc:2091(binlog_rollback(handlerton*, THD*, bool))[0x563c92b60a7f]
sql/handler.cc:1956(ha_rollback_trans(THD*, bool))[0x563c92a70f6b]
sql/handler.cc:1747(ha_commit_trans(THD*, bool))[0x563c92a71c94]
sql/transaction.cc:438(trans_commit_stmt(THD*))[0x563c9297121f]
sql/sql_class.h:4028(THD::get_stmt_da())[0x563c92871242]
sql/sql_parse.cc:8013(mysql_parse(THD*, char*, unsigned int, Parser_state*, bool, bool))[0x563c9287903b]
sql/sql_class.h:4028(THD::get_stmt_da())[0x563c928798a6]
sql/sql_parse.cc:1843(dispatch_command(enum_server_command, THD*, char*, unsigned int, bool, bool))[0x563c9287c77e]
sql/sql_parse.cc:1379(do_command(THD*))[0x563c9287ce22]
sql/sql_connect.cc:1420(do_handle_one_connection(CONNECT*))[0x563c92962512]
sql/sql_connect.cc:1326(handle_one_connection)[0x563c929625fd]
perfschema/pfs.cc:1872(pfs_spawn_thread)[0x563c92cef3ed]
pthread_create.c:0(start_thread)[0x7f6d8d40ee25]
/lib64/libc.so.6(clone+0x6d)[0x7f6d8c92e34d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x563c9ccdc020): INSERT INTO t1(b) values (1),(2),(3),(4),(5),(6),(7),(8),(9)

Connection ID (thread ID): 17
Status: KILL_QUERY
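
For anyone reading the trace: the abort comes from galera::FSM::shift_to() via commit_order_leave(), reached through binlog_rollback()/ha_rollback_trans(), while connection 17 (Status: KILL_QUERY) was executing the INSERT shown above; just before that, the Slave SQL thread ignored DROP TABLE t1 and DROP SEQUENCE sq2 because those objects did not exist locally. Below is only a hedged sketch of the kind of workload those statements suggest; the real schema, the relationship between t1 and sq2, and the actual test are not visible in the log, so the CREATE statements are illustrative assumptions.

-- Hypothetical reconstruction from the statements visible in the log; not the actual test case.
-- Assumed setup (the log does not show how t1 and sq2 are related):
CREATE SEQUENCE sq2;
CREATE TABLE t1 (
  a BIGINT NOT NULL DEFAULT NEXT VALUE FOR sq2,  -- assumed column definition
  b INT
) ENGINE=InnoDB;

-- What the log does show around the crash:
-- the Slave SQL thread replays these and ignores the errors (1051 / 4091)
-- because the objects do not exist on this node:
DROP TABLE t1;
DROP SEQUENCE sq2;

-- ...while connection 17 is committing this statement when the server aborts
-- with "FSM: no such a transition REPLICATING -> COMMITTED":
INSERT INTO t1(b) VALUES (1),(2),(3),(4),(5),(6),(7),(8),(9);
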
FYI, I've fixed the galera.galera_as_slave_gtid_myisam test in 10.10 and wsrep.wsrep_provider_plugin_defaults in 11.0.