Details
Description
The server failed with:
/sql/wsrep_trans_observer.h:147: int wsrep_start_trx_if_not_started(THD*): Assertion `thd->wsrep_next_trx_id() != (0x7fffffffffffffffLL * 2ULL + 1)' failed.
WSREP_SST: [INFO] rsync SST completed on donor (20231031 11:33:23.767)
2023-10-31 11:33:23 0 [Note] WSREP: Donor monitor thread ended with total time 2 sec
2023-10-31 11:33:25 0 [Note] WSREP: async IST sender served
2023-10-31 11:33:25 0 [Note] WSREP: 2.0 (panda): State transfer from 0.0 (panda) complete.
2023-10-31 11:33:25 0 [Note] WSREP: Member 2.0 (panda) synced with group.
2023-10-31 11:33:29 18 [Note] Master connection name: '' Master_info_file: 'master.info' Relay_info_file: 'relay-log.info'
2023-10-31 11:33:29 18 [Note] 'CHANGE MASTER TO executed'. Previous state master_host='', master_port='3306', master_log_file='', master_log_pos='4'. New state master_host='127.0.0.1', master_port='16003', master_log_file='', master_log_pos='4'.
2023-10-31 11:33:29 18 [Note] Previous Using_Gtid=No. New Using_Gtid=Current_Pos
2023-10-31 11:33:29 19 [Note] Slave I/O thread: Start asynchronous replication to master 'root@127.0.0.1:16003' in log '' at position 4
2023-10-31 11:33:29 20 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 4, relay log './mysqld-relay-bin.000001' position: 4; GTID position ''
2023-10-31 11:33:29 19 [Note] Slave I/O thread: connected to master 'root@127.0.0.1:16003',replication starts at GTID position ''
2023-10-31 11:33:29 20 [Note] WSREP: ready state reached
2023-10-31 11:33:29 21 [Note] Start binlog_dump to slave_server(21), pos(, 4), using_gtid(1), gtid('')
mysqld: /home/panda/mariadb-10.4/sql/wsrep_trans_observer.h:147: int wsrep_start_trx_if_not_started(THD*): Assertion `thd->wsrep_next_trx_id() != (0x7fffffffffffffffLL * 2ULL + 1)' failed.
231031 11:33:29 [ERROR] mysqld got signal 6 ;
Sorry, we probably made a mistake, and this is a bug.
Your assistance in bug reporting will enable us to fix this for the next release.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.4.32-MariaDB-debug-log source revision: 12c5dec8cc31fc327a3eb66b1b2c20647a66ed77
key_buffer_size=1048576
read_buffer_size=131072
max_used_connections=3
max_threads=153
thread_count=13
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 63663 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7fcd84000da0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fce080a8558 thread_stack 0x49000
mysys/stacktrace.c:174(my_print_stacktrace)[0x55b126004517]
sql/signal_handler.cc:235(handle_fatal_signal)[0x55b1256b117d]
2023-10-31 11:33:30 19 [ERROR] Unexpected break of being relay-logged GTID 1-11-2 event group by the current GTID event 1-11-2
2023-10-31 11:33:30 19 [ERROR] Slave I/O: Relay log write failure: could not queue event from master, Internal MariaDB error code: 1595
2023-10-31 11:33:30 19 [Note] Slave I/O thread exiting, read up to log 'mysqld-bin.000003', position 595; GTID position 1-11-1
2023-10-31 11:33:30 19 [Note] master was 127.0.0.1:16003
/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7fce16c96520]
libc_sigaction.c:0(__restore_rt)[0x7fce16cea9fc]
nptl/pthread_kill.c:44(__pthread_kill_implementation)[0x7fce16c96476]
posix/raise.c:27(__GI_raise)[0x7fce16c7c7f3]
stdlib/abort.c:81(__GI_abort)[0x7fce16c7c71b]
intl/loadmsgcat.c:1177(_nl_load_domain)[0x7fce16c8de96]
sql/wsrep_trans_observer.h:148(wsrep_start_trx_if_not_started(THD*))[0x55b12556f277]
sql/rpl_gtid.cc:703(rpl_slave_state::record_gtid(THD*, rpl_gtid const*, unsigned long long, bool, bool, void**))[0x55b125570b15]
sql/log_event.cc:8506(Gtid_list_log_event::do_apply_event(rpl_group_info*))[0x55b125829f5f]
sql/log_event.h:1492(Log_event::apply_event(rpl_group_info*))[0x55b12523e604]
sql/slave.cc:3820(apply_event_and_update_pos_apply(Log_event*, THD*, rpl_group_info*, int))[0x55b12522f72e]
sql/slave.cc:3982(apply_event_and_update_pos(Log_event*, THD*, rpl_group_info*))[0x55b12522fd71]
sql/slave.cc:4341(exec_relay_log_event(THD*, Relay_log_info*, rpl_group_info*))[0x55b125230909]
sql/slave.cc:5541(handle_slave_sql)[0x55b1252348e6]
perfschema/pfs.cc:1871(pfs_spawn_thread)[0x55b125a56c16]
nptl/pthread_create.c:442(start_thread)[0x7fce16ce8ac3]
x86_64/clone3.S:83(__clone3)[0x7fce16d7aa40]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x0): (null)
Connection ID (thread ID): 20
Status: NOT_KILLED
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on
Attachments
Issue Links
- blocks
  - MDEV-28378 galera.galera_as_slave_ctas fails with a timeout (Closed)
  - MDEV-30172 Galera test case cleanup (Stalled)
- split from
  - MDEV-29877 Galera test failure on galera_2_cluster (Closed)