[MDEV-29724] galera ist failure crashing mariadb Created: 2022-10-06  Updated: 2023-09-04  Resolved: 2023-09-04

Status: Closed
Project: MariaDB Server
Component/s: Galera
Affects Version/s: 10.6.5
Fix Version/s: 10.6.15

Type: Bug Priority: Major
Reporter: Khai Ping Assignee: Jan Lindström
Resolution: Fixed Votes: 1
Labels: None


 Description   

Need some assistance with our 3-node Galera cluster. One of the servers crashes when trying to join the cluster. It seems to crash during IST while rolling back transactions, although it did recover itself after all the transactions were rolled back.

2022-10-04  3:15:05 0 [Note] /opt/sbin/mariadbd: ready for connections.
Version: '10.6.5-MariaDB'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  MariaDB Server
2022-10-04  3:15:05 7 [Note] WSREP: Recovered cluster id 846a30c5-1cc3-11ed-bb0c-be79b476f059
2022-10-04  3:15:05 3 [Note] WSREP: SST received: 846a30c5-1cc3-11ed-bb0c-be79b476f059:4807541
2022-10-04  3:15:05 3 [Note] WSREP: SST succeeded for position 846a30c5-1cc3-11ed-bb0c-be79b476f059:4807541
2022-10-04  3:15:05 0 [Note] WSREP: Joiner monitor thread ended with total time 4 sec
2022-10-04  3:15:05 2 [Note] WSREP: Installed new state from SST: 846a30c5-1cc3-11ed-bb0c-be79b476f059:4807541
2022-10-04  3:15:05 2 [Note] WSREP: Receiving IST: 1871 writesets, seqnos 4807542-4809412
2022-10-04  3:15:05 0 [Note] WSREP: ####### IST applying starts with 4807542
2022-10-04  3:15:05 0 [Note] WSREP: ####### IST current seqno initialized to 4807542
2022-10-04  3:15:05 0 [Note] WSREP: Receiving IST...  0.0% (   0/1871 events) complete.
2022-10-04  3:15:05 0 [Note] WSREP: Service thread queue flushed.
2022-10-04  3:15:05 0 [Note] WSREP: ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:4807541, protocol version: 5
221004  3:15:05 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
 
Server version: 10.6.5-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=1002
thread_count=2
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2337489 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x7f0cd80009b8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f0ce98a3ce0 thread_stack 0x49000
mysys/stacktrace.c:213(my_print_stacktrace)[0x5587224a1e5e]
sql/signal_handler.cc:226(handle_fatal_signal)[0x558721ec3ec7]
sigaction.c:0(__restore_rt)[0x7f0d75011630]
sql/sql_class.cc:4743(thd_get_thread_id)[0x558721c52401]
bits/vector.tcc:94(emplace_back<std::pair<long unsigned int, long unsigned int> >)[0x558721b756f3]
lock/lock0lock.cc:1773(lock_wait(que_thr_t*))[0x5587222215a5]
row/row0mysql.cc:687(row_mysql_handle_errors(dberr_t*, trx_t*, que_thr_t*, trx_savept_t*))[0x5587222a443f]
row/row0sel.cc:5827(row_search_mvcc(unsigned char*, page_cur_mode_t, row_prebuilt_t*, unsigned long, unsigned long))[0x5587222c1799]
handler/ha_innodb.cc:9022(ha_innobase::index_read(unsigned char*, unsigned char const*, unsigned int, ha_rkey_function))[0x5587221dd36f]
sql/handler.cc:3428(handler::ha_rnd_pos(unsigned char*, unsigned char*))[0x558721ec9c52]
sql/handler.h:4100(handler::rnd_pos_by_record(unsigned char*))[0x558721e44073]
sql/sql_class.h:7331(handler::ha_rnd_pos_by_record(unsigned char*))[0x558721fdf281]
sql/log_event_server.cc:8357(Update_rows_log_event::do_exec_row(rpl_group_info*))[0x558721fdfbce]
sql/log_event_server.cc:5732(Rows_log_event::do_apply_event(rpl_group_info*))[0x558721fd4254]
sql/log_event.h:1517(Log_event::apply_event(rpl_group_info*))[0x5587221a518d]
sql/wsrep_high_priority_service.cc:128(apply_events(THD*, Relay_log_info*, wsrep::const_buffer const&, wsrep::mutable_buffer&))[0x55872218bed0]
sql/wsrep_high_priority_service.cc:579(Wsrep_applier_service::apply_write_set(wsrep::ws_meta const&, wsrep::const_buffer const&, wsrep::mutable_buffer&))[0x55872218bfc3]
src/server_state.cpp:328(apply_write_set(wsrep::server_state&, wsrep::high_priority_service&, wsrep::ws_handle const&, wsrep::ws_meta const&, wsrep::const_buffer const&))[0x55872251dd1f]
src/server_state.cpp:1148(wsrep::server_state::on_apply(wsrep::high_priority_service&, wsrep::ws_handle const&, wsrep::ws_meta const&, wsrep::const_buffer const&))[0x55872251ea25]
src/wsrep_provider_v26.cpp:504((anonymous namespace)::apply_cb(void*, wsrep_ws_handle const*, unsigned int, wsrep_buf const*, wsrep_trx_meta const*, bool*))[0x55872252fc58]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e47900]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e51550]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e66de7]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e673d7]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e6a5ed]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e59635]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e5a0ea]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e5a47c]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e852c0]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e85ab4]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e5897c]
??:0(wsrep_deinit_event_service_v1)[0x7f0d70e39978]
src/wsrep_provider_v26.cpp:797(wsrep::wsrep_provider_v26::run_applier(wsrep::high_priority_service*))[0x55872253038e]
sql/wsrep_thd.cc:59(wsrep_replication_process(THD*, void*))[0x5587221a6d58]
sql/wsrep_mysqld.cc:3477(start_wsrep_THD(void*))[0x5587221979ba]
perfschema/pfs.cc:2204(pfs_spawn_thread)[0x55872212cc82]
pthread_create.c:0(start_thread)[0x7f0d75009ea5]
??:0(__clone)[0x7f0d7312b9fd]
 
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7f0d6ad1999b): UPDATE objects, files
            SET
                objects.deleted = 1, files.deleted = 1,
                objects.modified = 1664852830, objects.indexed=0
            WHERE object_id IN (12347129, 12347132, 12347135, 12347138, 12347141, 12347144, 12347147, 12347150, 12347153, 12347156) AND files.object_id = objects.id
 
Connection ID (thread ID): 2
Status: NOT_KILLED
 
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off
 
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /db/mysql
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             127956               127956               processes 
Max open files            96000                96000                files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       127956               127956               signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
Core pattern: core

 
2022-10-04 12:27:41 0 [Note] InnoDB: 2 transaction(s) which must be rolled back or cleaned up in total 2826 row operations to undo
2022-10-04 12:28:08 0 [Note] InnoDB: 2 transaction(s) which must be rolled back or cleaned up in total 2826 row operations to undo
2022-10-04 12:28:31 0 [Note] InnoDB: 2 transaction(s) which must be rolled back or cleaned up in total 2826 row operations to undo
 
2022-10-04 12:56:56 0 [Note] InnoDB: 2 transaction(s) which must be rolled back or cleaned up in total 474 row operations to undo
2022-10-04 12:57:18 0 [Note] InnoDB: 2 transaction(s) which must be rolled back or cleaned up in total 474 row operations to undo
2022-10-04 12:57:33 0 [Note] InnoDB: 1 transaction(s) which must be rolled back or cleaned up in total 30 row operations to undo
 
2022-10-04 13:00:04 0 [Note] WSREP: Receiving IST...  6.8% ( 1984/29051 events) complete.
2022-10-04 13:00:14 0 [Note] WSREP: Receiving IST...  8.0% ( 2320/29051 events) complete.
2022-10-04 13:00:24 0 [Note] WSREP: Receiving IST... 13.4% ( 3904/29051 events) complete.
2022-10-04 13:00:35 0 [Note] WSREP: Receiving IST... 16.2% ( 4704/29051 events) complete.

Oct  4 03:15:06 xxx-h3 kernel: mariadbd[28254]: segfault at 3dd0 ip 0000558721c52401 sp 00007f0ce989e9e0 error 4 in mariadbd[558721515000+163e000]
Oct  4 03:15:28 xxx-h3 kernel: mariadbd[29083]: segfault at 3dd0 ip 0000560202921401 sp 00007feb4c68e9e0 error 4 in mariadbd[5602021e4000+163e000]
Oct  4 03:15:49 xxx-h3 kernel: mariadbd[29983]: segfault at 3dd0 ip 000055ad44f03401 sp 00007f487c07b9e0 error 4 in mariadbd[55ad447c6000+163e000]
Oct  4 03:16:10 xxx-h3 kernel: mariadbd[30545]: segfault at 3dd0 ip 000055590a7d2401 sp 00007f2fb089f9e0 error 4 in mariadbd[55590a095000+163e000]
Oct  4 03:16:46 xxx-h3 kernel: mariadbd[31758]: segfault at 3dd0 ip 000055a85042c401 sp 00007f8b807e79e0 error 4 in mariadbd[55a84fcef000+163e000]
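The five kernel segfault lines above come from five different process restarts, so the raw instruction-pointer values differ due to ASLR. Subtracting each mapping base from its `ip` shows every crash lands at the same offset inside the mariadbd binary, i.e. the same code path each time. A small sketch of that arithmetic, using the (ip, base) pairs copied from the log:

```python
# (ip, mapping base) pairs taken from the kernel segfault lines above.
crashes = [
    (0x558721c52401, 0x558721515000),
    (0x560202921401, 0x5602021e4000),
    (0x55ad44f03401, 0x55ad447c6000),
    (0x55590a7d2401, 0x55590a095000),
    (0x55a85042c401, 0x55a84fcef000),
]
# Subtract the per-process ASLR base to get the offset inside the binary.
offsets = {ip - base for ip, base in crashes}
print({hex(o) for o in offsets})  # → {'0x73d401'}: one offset, same bug
```

A single shared offset is consistent with the backtrace always failing in the same place (`thd_get_thread_id` called from the lock-wait path) on each restart.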



 Comments   
Comment by Jan Lindström [ 2023-07-10 ]

khaiping.loh I recommend upgrading to a more recent version of MariaDB 10.6 and the Galera library, because recent versions contain important fixes for Galera lock wait handling. If the issue reproduces with the latest version, please provide a fully resolved stack trace, the node configuration, and the error log.

Comment by Khai Ping [ 2023-07-10 ]

thank you @jan

Comment by Jan Lindström [ 2023-07-10 ]

khaiping.loh You need MariaDB version where MDEV-29293 is fixed.

Comment by Khai Ping [ 2023-07-10 ]

@jan how is this ticket related to MDEV-29293?

Comment by Jan Lindström [ 2023-07-10 ]

khaiping.loh There was significant refactoring of lock0lock.cc in MDEV-20612, MDEV-24738, and other changes, and Galera abort handling was then refactored in MDEV-29293.

Generated at Thu Feb 08 10:10:48 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.