Details
Type: Bug
Status: Closed (View Workflow)
Priority: Critical
Resolution: Fixed
Affects Version/s: 10.3(EOL), 10.5, 10.6, 10.7(EOL), 10.8(EOL), 11.1(EOL)
Description
--source plugin/spider/spider/include/init_spider.inc

set global query_cache_type= on;
set spider_same_server_link = on;

eval create server s foreign data wrapper mysql options (host "127.0.0.1", database "test", user "root", port $MASTER_MYPORT);
CREATE TABLE t (a INT);
CREATE TABLE t_SPIDER (a INT) ENGINE=SPIDER COMMENT="wrapper 'mysql', srv 's', table 't'";
SELECT * FROM t_SPIDER;

--source include/restart_mysqld.inc

DROP TABLE t_SPIDER;
DROP TABLE t;

--source plugin/spider/spider/include/deinit_spider.inc
10.3 91d5fffa
2022-06-03 1:50:08 0 [Note] Event Scheduler: Purging the queue. 0 events
safe_mutex: Trying to lock uninitialized mutex at /data/src/10.3/sql/sql_cache.cc, line 723
220603 1:50:08 [ERROR] mysqld got signal 6 ;

#3 <signal handler called>
#4 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#5 0x00007faa86c1e537 in __GI_abort () at abort.c:79
#6 0x000055f9a2c5e92c in safe_mutex_lock (mp=0x55f9a36ee2c0 <query_cache+160>, my_flags=0, file=0x55f9a2ce7ea8 "/data/src/10.3/sql/sql_cache.cc", line=723) at /data/src/10.3/mysys/thr_mutex.c:248
#7 0x000055f9a2040152 in inline_mysql_mutex_lock (that=0x55f9a36ee2c0 <query_cache+160>, src_file=0x55f9a2ce7ea8 "/data/src/10.3/sql/sql_cache.cc", src_line=723) at /data/src/10.3/include/mysql/psi/mysql_thread.h:717
#8 0x000055f9a20411f8 in Query_cache::lock (this=0x55f9a36ee220 <query_cache>, thd=0x7faa0c001fe0) at /data/src/10.3/sql/sql_cache.cc:723
#9 0x000055f9a20480d9 in Query_cache::invalidate_table (this=0x55f9a36ee220 <query_cache>, thd=0x7faa0c001fe0, key=0x7faa829062b0 "mysql", key_length=23) at /data/src/10.3/sql/sql_cache.cc:3312
#10 0x000055f9a2045cf5 in Query_cache::invalidate_by_MyISAM_filename (this=0x55f9a36ee220 <query_cache>, filename=0x7faa0c00c4b8 "./mysql/spider_table_sts") at /data/src/10.3/sql/sql_cache.cc:2432
#11 0x000055f9a2042629 in query_cache_invalidate_by_MyISAM_filename (filename=0x7faa0c00c4b8 "./mysql/spider_table_sts") at /data/src/10.3/sql/sql_cache.cc:1249
#12 0x000055f9a2b6da25 in mi_update_status (param=0x7faa0c0090b0) at /data/src/10.3/storage/myisam/mi_locking.c:338
#13 0x000055f9a2b6db74 in mi_update_status_with_lock (info=0x7faa0c0090b0) at /data/src/10.3/storage/myisam/mi_locking.c:374
#14 0x000055f9a2b6cc96 in mi_lock_database (info=0x7faa0c0090b0, lock_type=2) at /data/src/10.3/storage/myisam/mi_locking.c:70
#15 0x000055f9a2b46dac in ha_myisam::external_lock (this=0x7faa0c030f68, thd=0x7faa0c001fe0, lock_type=2) at /data/src/10.3/storage/myisam/ha_myisam.cc:2106
#16 0x000055f9a23f5859 in handler::ha_external_lock (this=0x7faa0c030f68, thd=0x7faa0c001fe0, lock_type=2) at /data/src/10.3/sql/handler.cc:6418
#17 0x000055f9a2520301 in unlock_external (thd=0x7faa0c001fe0, table=0x55f9a5af56b8, count=1) at /data/src/10.3/sql/lock.cc:708
#18 0x000055f9a251f708 in mysql_unlock_tables (thd=0x7faa0c001fe0, sql_lock=0x55f9a5af5688, free_lock=false) at /data/src/10.3/sql/lock.cc:429
#19 0x000055f9a251f65f in mysql_unlock_tables (thd=0x7faa0c001fe0, sql_lock=0x55f9a5af5688) at /data/src/10.3/sql/lock.cc:413
#20 0x000055f9a20270b3 in close_thread_tables (thd=0x7faa0c001fe0) at /data/src/10.3/sql/sql_base.cc:863
#21 0x00007faa80a71ebe in spider_sys_close_table (thd=0x7faa0c001fe0, open_tables_backup=0x7faa82906910) at /data/src/10.3/storage/spider/spd_sys_table.cc:407
#22 0x00007faa80a71c51 in spider_close_sys_table (thd=0x7faa0c001fe0, table=0x7faa0c02fea0, open_tables_backup=0x7faa82906910, need_lock=false) at /data/src/10.3/storage/spider/spd_sys_table.cc:352
#23 0x00007faa80a7ce98 in spider_sys_insert_or_update_table_sts (thd=0x7faa0c001fe0, name=0x7faa70191728 "./test/t_SPIDER", name_length=15, data_file_length=0x7faa700c4cf0, max_data_file_length=0x7faa700c4cf8, index_file_length=0x7faa700c4d00, records=0x7faa700c4d08, mean_rec_length=0x7faa700c4d10, check_time=0x7faa700c4d18, create_time=0x7faa700c4d20, update_time=0x7faa700c4d28, need_lock=false) at /data/src/10.3/storage/spider/spd_sys_table.cc:2950
#24 0x00007faa80af5142 in spider_free_share (share=0x7faa700c3b40) at /data/src/10.3/storage/spider/spd_table.cc:5779
#25 0x00007faa80b2b055 in ha_spider::close (this=0x7faa700c1e48) at /data/src/10.3/storage/spider/ha_spider.cc:815
#26 0x000055f9a23eacea in handler::ha_close (this=0x7faa700c1e48) at /data/src/10.3/sql/handler.cc:2844
#27 0x000055f9a21e76ab in closefrm (table=0x7faa700c11e0) at /data/src/10.3/sql/table.cc:3790
#28 0x000055f9a230a044 in intern_close_table (table=0x7faa700c11e0) at /data/src/10.3/sql/table_cache.cc:222
#29 0x000055f9a230a4b2 in tc_purge (mark_flushed=true) at /data/src/10.3/sql/table_cache.cc:335
#30 0x000055f9a2025f26 in close_cached_tables (thd=0x0, tables=0x0, wait_for_refresh=false, timeout=31536000) at /data/src/10.3/sql/sql_base.cc:377
#31 0x000055f9a230bbe0 in tdc_start_shutdown () at /data/src/10.3/sql/table_cache.cc:660
#32 0x000055f9a1f8d188 in clean_up (print_message=true) at /data/src/10.3/sql/mysqld.cc:2241
#33 0x000055f9a1f8cdb0 in unireg_end () at /data/src/10.3/sql/mysqld.cc:2116
#34 0x000055f9a1f8ccae in kill_server (sig_ptr=0x0) at /data/src/10.3/sql/mysqld.cc:2043
#35 0x000055f9a1f8ccec in kill_server_thread (arg=0x7faa80ceade0) at /data/src/10.3/sql/mysqld.cc:2066
#36 0x000055f9a2be854a in pfs_spawn_thread (arg=0x55f9a5c19850) at /data/src/10.3/storage/perfschema/pfs.cc:1869
#37 0x00007faa86dc6ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
#38 0x00007faa86cf6def in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
|
Reproducible on 10.3 and on 10.5+. For some reason it could not be reproduced on 10.4.
Non-debug builds sometimes hang.
The problem is not merely artificial on 10.3, because the Debian installation of 10.3 ships a config file with the query cache enabled.
Attachments
Issue Links
- blocks
  - MDEV-29421 Thread (10.6+) and server hangs (10.4/10.5) in 'Opening tables' (on optimized builds) and SIGABRT in safe_mutex_lock (on debug) on I_S read when using Spider (Closed)
  - MDEV-32807 Delete spider sts and crd system tables (Open)
- includes
  - MDEV-29421 Thread (10.6+) and server hangs (10.4/10.5) in 'Opening tables' (on optimized builds) and SIGABRT in safe_mutex_lock (on debug) on I_S read when using Spider (Closed)
- is duplicated by
  - MDEV-29708 safe_mutex: Trying to lock uninitialized mutex in sql_cache.cc on SHUTDOWN, stack smashing, SIGABRT in safe_mutex_lock (Closed)
- relates to
  - MDEV-32807 Delete spider sts and crd system tables (Open)
  - MDEV-29024 Trying to lock mutex at spd_table.cc line 5741 when the mutex was already locked at spd_table.cc line 5741 (Closed)
Activity
I confirm that the bug is reproducible, but not with the --rr option (why?).
--echo #
--echo # MDEV-28739 Trying to lock uninitialized mutex or hang upon shutdown after using Spider with query_cache
--echo #

--disable_query_log
--disable_result_log
--source ../../t/test_init.inc
--enable_result_log
--enable_query_log

--connection child2_1
CREATE DATABASE auto_test_remote;
USE auto_test_remote;

CREATE TABLE tbl_a (id INT);

--connection master_1
CREATE DATABASE auto_test_local;
USE auto_test_local;

set global query_cache_type= on;
set spider_same_server_link = on;

eval CREATE TABLE tbl_a (
  id INT
) $MASTER_1_ENGINE $MASTER_1_CHARSET COMMENT='table "tbl_a", srv "s_2_1"';

SELECT * FROM tbl_a;

--source include/restart_mysqld.inc

--connection master_1
DROP DATABASE IF EXISTS auto_test_local;

--connection child2_1
DROP DATABASE IF EXISTS auto_test_remote;

--disable_query_log
--disable_result_log
--source ../t/test_deinit.inc
--enable_query_log
--enable_result_log
The workaround would be to disable statistics persistence. However, even with the workaround, the test fails due to MDEV-27912.
set global spider_store_last_crd=0;
set global spider_store_last_sts=0;
Spider updates system tables too late in the shutdown sequence. I think that updating the tables in handlerton::panic or handlerton::pre_shutdown would fix the problem.
A tentative fix. I will refactor this somewhat. https://github.com/MariaDB/server/commit/e26b3932aa536bc71a9dea91058f729f2657a08e
spider_panic seems to be called too late for updating the stats tables.
The patch above does not work. It prevents the crash but fails to update the stats tables.
I confirm the testcase at https://jira.mariadb.org/browse/MDEV-28739?focusedCommentId=227057&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-227057 crashes in 11.1 4e5b771e980edfdad5c5414aa62c81d409d585a4. Taking over:
safe_mutex: Trying to lock uninitialized mutex at /home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc, line 725
230515 14:45:47 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 11.1.0-MariaDB-debug-log source revision: 3ef111610b7f8a6a323975cfdf4a4257feb9dcd9
key_buffer_size=1048576
read_buffer_size=131072
max_used_connections=5
max_threads=153
thread_count=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 63925 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x559ce341ca78
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fffd598bcf8 thread_stack 0x49000
mysys/stacktrace.c:215(my_print_stacktrace)[0x559ce1021efa]
sql/signal_handler.cc:238(handle_fatal_signal)[0x559ce07b60cb]
??:0(__restore_rt)[0x7f2440e65140]
??:0(gsignal)[0x7f244067cce1]
??:0(abort)[0x7f2440666537]
mysys/thr_mutex.c:247(safe_mutex_lock)[0x559ce10266d9]
psi/mysql_thread.h:750(inline_mysql_mutex_lock)[0x559ce0342b1c]
sql/sql_cache.cc:726(Query_cache::lock(THD*))[0x559ce03439f0]
sql/sql_cache.cc:3319(Query_cache::invalidate_table(THD*, unsigned char*, unsigned long))[0x559ce034a943]
sql/sql_cache.cc:2440(Query_cache::invalidate_by_MyISAM_filename(char const*))[0x559ce0348541]
sql/sql_cache.cc:1252(query_cache_invalidate_by_MyISAM_filename)[0x559ce0344e21]
maria/ha_maria.cc:3092(reset_thd_trn(THD*, st_maria_handler*))[0x559ce09cd467]
maria/ha_maria.cc:3630(maria_commit(handlerton*, THD*, bool))[0x559ce09ce981]
sql/handler.cc:2126(commit_one_phase_2(THD*, bool, THD_TRANS*, bool))[0x559ce07bd685]
sql/handler.cc:2079(ha_commit_one_phase(THD*, bool))[0x559ce07bd442]
sql/handler.cc:1873(ha_commit_trans(THD*, bool))[0x559ce07bc562]
sql/sql_class.cc:6100(THD::commit_whole_transaction_and_close_tables())[0x559ce03648fc]
spider/spd_sys_table.cc:597(spider_sys_close_table(THD*, start_new_trans**))[0x7f243825ac23]
spider/spd_sys_table.cc:3236(spider_sys_insert_or_update_table_sts(THD*, char const*, unsigned int, ha_statistics*))[0x7f2438266ba4]
spider/spd_table.cc:5623(spider_free_share(st_spider_share*))[0x7f24382e68f5]
spider/ha_spider.cc:577(ha_spider::close())[0x7f2438320212]
sql/handler.cc:3556(handler::ha_close())[0x559ce07c1aaf]
sql/table.cc:4674(closefrm(TABLE*))[0x559ce055b362]
sql/table_cache.cc:226(intern_close_table(TABLE*))[0x559ce06d913d]
sql/table_cache.cc:317(tc_purge())[0x559ce06d9519]
sql/sql_base.cc:330(purge_tables())[0x559ce03270ca]
sql/table_cache.cc:641(tdc_start_shutdown())[0x559ce06dab7c]
sql/mysqld.cc:1998(clean_up(bool))[0x559ce02491f9]
sql/mysqld.cc:6057(mysqld_main(int, char**))[0x559ce0251bdc]
sql/main.cc:34(main)[0x559ce0245595]
??:0(__libc_start_main)[0x7f2440667d0a]
??:0(_start)[0x559ce02454ba]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x0): (null)
Connection ID (thread ID): 30
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off

The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.

We think the query pointer is invalid, but we will try to print it anyway.
Query:

Writing a core file...
Working directory at /home/ycp/source/mariadb-server/11.1/build/mysql-test/var/mysqld.1.1/data
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            10022912             unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             124974               124974               processes
Max open files            1024                 1024                 files
Max locked memory         4108061184           4108061184           bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       124974               124974               signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us
Core pattern: core

Kernel version: Linux version 6.0.0-0.deb11.2-amd64 (debian-kernel@lists.debian.org) (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PREEMPT_DYNAMIC Debian 6.0.3-1~bpo11+1 (2022-10-29)

----------SERVER LOG END-------------
mysqltest failed but provided no output

- found 'core' (0/5)
Core generated by '/home/ycp/source/mariadb-server/11.1/build/sql/mariadbd'
Output from gdb follows. The first stack trace is from the failing thread.
The following stack traces are from all threads (so the failing one is
duplicated).
--------------------------

warning: Can't open file /[aio] (deleted) during file-backed mapping note processing
[New LWP 3251046]
[New LWP 3251060]
[New LWP 3251053]
[New LWP 3251054]
[New LWP 3251055]
[New LWP 3251048]
[New LWP 3251047]
[New LWP 3251056]
[New LWP 3251064]
[New LWP 3251058]
[New LWP 3251059]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/home/ycp/source/mariadb-server/11.1/build/sql/mariadbd --defaults-group-suffix'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007f2440e61f44 in pthread_kill () from /lib/x86_64-linux-gnu/libpthread.so.0
[Current thread is 1 (Thread 0x7f2440cf5940 (LWP 3251046))]
Thread 11 (Thread 0x7f243234c700 (LWP 3251059)):
#0 0x00007f2440e60df8 in pthread_cond_clockwait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce0f80c68 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9dd0, __lock=@0x7f243234bb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f243234bb28: {__d = {__r = 13829482131520213}}) at /usr/include/c++/10/condition_variable:209
#2 0x0000559ce0f80165 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9dd0, __lock=@0x7f243234bb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f243234bb28: {__d = {__r = 13829482131520213}}) at /usr/include/c++/10/condition_variable:119
#3 0x0000559ce0f7f65a in std::condition_variable::wait_for<long, std::ratio<1l, 1000l> > (this=0x559ce32b9dd0, __lock=@0x7f243234bb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __rtime=@0x559ce32b9a60: {__r = 60000}) at /usr/include/c++/10/condition_variable:172
#4 0x0000559ce0f7cf05 in tpool::thread_pool_generic::wait_for_tasks (this=0x559ce32b9920, lk=@0x7f243234bb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, thread_data=0x559ce32b9dd0) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:480
#5 0x0000559ce0f7d111 in tpool::thread_pool_generic::get_task (this=0x559ce32b9920, thread_var=0x559ce32b9dd0, t=0x7f243234bbd8) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:533
#6 0x0000559ce0f7d397 in tpool::thread_pool_generic::worker_main (this=0x559ce32b9920, thread_var=0x559ce32b9dd0) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:578
#7 0x0000559ce0f82f8a in std::__invoke_impl<void, void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__f=@0x7f2424000d08: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>, __t=@0x7f2424000d00: 0x559ce32b9920) at /usr/include/c++/10/bits/invoke.h:73
#8 0x0000559ce0f82e7a in std::__invoke<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__fn=@0x7f2424000d08: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>) at /usr/include/c++/10/bits/invoke.h:95
#9 0x0000559ce0f82dad in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::_M_invoke<0ul, 1ul, 2ul> (this=0x7f2424000cf8) at /usr/include/c++/10/thread:264
#10 0x0000559ce0f82d4a in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::operator() (this=0x7f2424000cf8) at /usr/include/c++/10/thread:271
#11 0x0000559ce0f82d2e in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::_M_run (this=0x7f2424000cf0) at /usr/include/c++/10/thread:215
#12 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#13 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#14 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 10 (Thread 0x7f2432cdd700 (LWP 3251058)):
#0 0x00007f2440e60df8 in pthread_cond_clockwait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce0f80c68 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9d50, __lock=@0x7f2432cdcb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f2432cdcb28: {__d = {__r = 13829482455995726}}) at /usr/include/c++/10/condition_variable:209
#2 0x0000559ce0f80165 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9d50, __lock=@0x7f2432cdcb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f2432cdcb28: {__d = {__r = 13829482455995726}}) at /usr/include/c++/10/condition_variable:119
#3 0x0000559ce0f7f65a in std::condition_variable::wait_for<long, std::ratio<1l, 1000l> > (this=0x559ce32b9d50, __lock=@0x7f2432cdcb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __rtime=@0x559ce32b9a60: {__r = 60000}) at /usr/include/c++/10/condition_variable:172
#4 0x0000559ce0f7cf05 in tpool::thread_pool_generic::wait_for_tasks (this=0x559ce32b9920, lk=@0x7f2432cdcb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, thread_data=0x559ce32b9d50) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:480
#5 0x0000559ce0f7d111 in tpool::thread_pool_generic::get_task (this=0x559ce32b9920, thread_var=0x559ce32b9d50, t=0x7f2432cdcbd8) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:533
#6 0x0000559ce0f7d397 in tpool::thread_pool_generic::worker_main (this=0x559ce32b9920, thread_var=0x559ce32b9d50) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:578
#7 0x0000559ce0f82f8a in std::__invoke_impl<void, void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__f=@0x7f2424000b78: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>, __t=@0x7f2424000b70: 0x559ce32b9920) at /usr/include/c++/10/bits/invoke.h:73
#8 0x0000559ce0f82e7a in std::__invoke<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__fn=@0x7f2424000b78: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>) at /usr/include/c++/10/bits/invoke.h:95
#9 0x0000559ce0f82dad in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::_M_invoke<0ul, 1ul, 2ul> (this=0x7f2424000b68) at /usr/include/c++/10/thread:264
#10 0x0000559ce0f82d4a in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::operator() (this=0x7f2424000b68) at /usr/include/c++/10/thread:271
#11 0x0000559ce0f82d2e in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::_M_run (this=0x7f2424000b60) at /usr/include/c++/10/thread:215
#12 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#13 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#14 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 9 (Thread 0x7f2440050700 (LWP 3251064)):
#0 0x00007f244067dba2 in sigtimedwait () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x0000559ce0245746 in my_sigwait (set=0x7f244004fc10, sig=0x7f244004fbd4, code=0x7f244004fbd8) at /home/ycp/source/mariadb-server/11.1/src/include/my_pthread.h:195
#2 0x0000559ce024be9c in signal_hand (arg=0x0) at /home/ycp/source/mariadb-server/11.1/src/sql/mysqld.cc:3263
#3 0x0000559ce0abdfc4 in pfs_spawn_thread (arg=0x559ce3602618) at /home/ycp/source/mariadb-server/11.1/src/storage/perfschema/pfs.cc:2201
#4 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 8 (Thread 0x7f243366e700 (LWP 3251056)):
#0 0x00007f2440e60df8 in pthread_cond_clockwait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce0f80c68 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9cd0, __lock=@0x7f243366db90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f243366db28: {__d = {__r = 13829482115437018}}) at /usr/include/c++/10/condition_variable:209
#2 0x0000559ce0f80165 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9cd0, __lock=@0x7f243366db90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f243366db28: {__d = {__r = 13829482115437018}}) at /usr/include/c++/10/condition_variable:119
#3 0x0000559ce0f7f65a in std::condition_variable::wait_for<long, std::ratio<1l, 1000l> > (this=0x559ce32b9cd0, __lock=@0x7f243366db90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __rtime=@0x559ce32b9a60: {__r = 60000}) at /usr/include/c++/10/condition_variable:172
#4 0x0000559ce0f7cf05 in tpool::thread_pool_generic::wait_for_tasks (this=0x559ce32b9920, lk=@0x7f243366db90: {_M_device = 0x559ce32b9a38, _M_owns = true}, thread_data=0x559ce32b9cd0) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:480
#5 0x0000559ce0f7d111 in tpool::thread_pool_generic::get_task (this=0x559ce32b9920, thread_var=0x559ce32b9cd0, t=0x7f243366dbd8) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:533
#6 0x0000559ce0f7d397 in tpool::thread_pool_generic::worker_main (this=0x559ce32b9920, thread_var=0x559ce32b9cd0) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:578
#7 0x0000559ce0f82f8a in std::__invoke_impl<void, void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__f=@0x7f243c002278: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>, __t=@0x7f243c002270: 0x559ce32b9920) at /usr/include/c++/10/bits/invoke.h:73
#8 0x0000559ce0f82e7a in std::__invoke<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__fn=@0x7f243c002278: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>) at /usr/include/c++/10/bits/invoke.h:95
#9 0x0000559ce0f82dad in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::_M_invoke<0ul, 1ul, 2ul> (this=0x7f243c002268) at /usr/include/c++/10/thread:264
#10 0x0000559ce0f82d4a in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::operator() (this=0x7f243c002268) at /usr/include/c++/10/thread:271
#11 0x0000559ce0f82d2e in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::_M_run (this=0x7f243c002260) at /usr/include/c++/10/thread:215
#12 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#13 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#14 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 7 (Thread 0x7f24410db700 (LWP 3251047)):
#0 0x00007f2440e60ad8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce10274e5 in safe_cond_timedwait (cond=0x559ce2716a00 <COND_timer>, mp=0x559ce2716940 <LOCK_timer>, abstime=0x7f24410dace0, file=0x559ce15c53c0 "/home/ycp/source/mariadb-server/11.1/src/include/mysql/psi/mysql_thread.h", line=1088) at /home/ycp/source/mariadb-server/11.1/src/mysys/thr_mutex.c:548
#2 0x0000559ce1028496 in inline_mysql_cond_timedwait (that=0x559ce2716a00 <COND_timer>, mutex=0x559ce2716940 <LOCK_timer>, abstime=0x7f24410dace0, src_file=0x559ce15c5410 "/home/ycp/source/mariadb-server/11.1/src/mysys/thr_timer.c", src_line=321) at /home/ycp/source/mariadb-server/11.1/src/include/mysql/psi/mysql_thread.h:1088
#3 0x0000559ce1029116 in timer_handler (arg=0x0) at /home/ycp/source/mariadb-server/11.1/src/mysys/thr_timer.c:321
#4 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 6 (Thread 0x7f243b67b700 (LWP 3251048)):
#0 0x00007f2440e60ad8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce10274e5 in safe_cond_timedwait (cond=0x559ce2683cc0 <COND_checkpoint>, mp=0x559ce2683c00 <LOCK_checkpoint>, abstime=0x7f243b67ac30, file=0x559ce13d7020 "/home/ycp/source/mariadb-server/11.1/src/include/mysql/psi/mysql_thread.h", line=1088) at /home/ycp/source/mariadb-server/11.1/src/mysys/thr_mutex.c:548
#2 0x0000559ce0a0d713 in inline_mysql_cond_timedwait (that=0x559ce2683cc0 <COND_checkpoint>, mutex=0x559ce2683c00 <LOCK_checkpoint>, abstime=0x7f243b67ac30, src_file=0x559ce13d7070 "/home/ycp/source/mariadb-server/11.1/src/storage/maria/ma_servicethread.c", src_line=115) at /home/ycp/source/mariadb-server/11.1/src/include/mysql/psi/mysql_thread.h:1088
#3 0x0000559ce0a0dbf9 in my_service_thread_sleep (control=0x559ce1c31900 <checkpoint_control>, sleep_time=29000000000) at /home/ycp/source/mariadb-server/11.1/src/storage/maria/ma_servicethread.c:115
#4 0x0000559ce0a01994 in ma_checkpoint_background (arg=0x1e) at /home/ycp/source/mariadb-server/11.1/src/storage/maria/ma_checkpoint.c:725
#5 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#6 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 5 (Thread 0x7f2433fff700 (LWP 3251055)):
#0 0x00007f2440e60df8 in pthread_cond_clockwait () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce0f80c68 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9c50, __lock=@0x7f2433ffeb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f2433ffeb28: {__d = {__r = 13829482131538908}}) at /usr/include/c++/10/condition_variable:209
#2 0x0000559ce0f80165 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9c50, __lock=@0x7f2433ffeb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f2433ffeb28: {__d = {__r = 13829482131538908}}) at /usr/include/c++/10/condition_variable:119
#3 0x0000559ce0f7f65a in std::condition_variable::wait_for<long, std::ratio<1l, 1000l> > (this=0x559ce32b9c50, __lock=@0x7f2433ffeb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __rtime=@0x559ce32b9a60: {__r = 60000}) at /usr/include/c++/10/condition_variable:172
#4 0x0000559ce0f7cf05 in tpool::thread_pool_generic::wait_for_tasks (this=0x559ce32b9920, lk=@0x7f2433ffeb90: {_M_device = 0x559ce32b9a38, _M_owns = true}, thread_data=0x559ce32b9c50) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:480
#5 0x0000559ce0f7d111 in tpool::thread_pool_generic::get_task (this=0x559ce32b9920, thread_var=0x559ce32b9c50, t=0x7f2433ffebd8) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:533
#6 0x0000559ce0f7d397 in tpool::thread_pool_generic::worker_main (this=0x559ce32b9920, thread_var=0x559ce32b9c50) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:578
#7 0x0000559ce0f82f8a in std::__invoke_impl<void, void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__f=@0x559ce341c9d8: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>, __t=@0x559ce341c9d0: 0x559ce32b9920) at /usr/include/c++/10/bits/invoke.h:73
#8 0x0000559ce0f82e7a in std::__invoke<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__fn=@0x559ce341c9d8: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>) at /usr/include/c++/10/bits/invoke.h:95
#9 0x0000559ce0f82dad in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::_M_invoke<0ul, 1ul, 2ul> (this=0x559ce341c9c8) at /usr/include/c++/10/thread:264
#10 0x0000559ce0f82d4a in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::operator() (this=0x559ce341c9c8) at /usr/include/c++/10/thread:271
#11 0x0000559ce0f82d2e in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::_M_run (this=0x559ce341c9c0) at /usr/include/c++/10/thread:215
#12 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
#13 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#14 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Thread 4 (Thread 0x7f2439358700 (LWP 3251054)):
#0 0x00007f2440e60ad8 in pthread_cond_timedwait@@GLIBC_2.3.2 () from /lib/x86_64-linux-gnu/libpthread.so.0
#1 0x0000559ce10274e5 in safe_cond_timedwait (cond=0x559ce1c4fbd0 <buf_pool+17424>, mp=0x559ce1c4fac0 <buf_pool+17152>, abstime=0x7f2439357c20, file=0x559ce1554f68 "/home/ycp/source/mariadb-server/11.1/src/storage/innobase/buf/buf0flu.cc", line=2332) at /home/ycp/source/mariadb-server/11.1/src/mysys/thr_mutex.c:548
#2 0x0000559ce0e83469 in buf_flush_page_cleaner () at /home/ycp/source/mariadb-server/11.1/src/storage/innobase/buf/buf0flu.cc:2332
#3 0x0000559ce0e877df in std::__invoke_impl<void, void (*)()> (__f=@0x559ce33956a8: 0x559ce0e830d1 <buf_flush_page_cleaner()>) at /usr/include/c++/10/bits/invoke.h:60
#4 0x0000559ce0e87789 in std::__invoke<void (*)()> (__fn=@0x559ce33956a8: 0x559ce0e830d1 <buf_flush_page_cleaner()>) at /usr/include/c++/10/bits/invoke.h:95
#5 0x0000559ce0e87736 in std::thread::_Invoker<std::tuple<void (*)()> >::_M_invoke<0ul> (this=0x559ce33956a8) at /usr/include/c++/10/thread:264
|
#6 0x0000559ce0e8770a in std::thread::_Invoker<std::tuple<void (*)()> >::operator() (this=0x559ce33956a8) at /usr/include/c++/10/thread:271
|
#7 0x0000559ce0e876ee in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)()> > >::_M_run (this=0x559ce33956a0) at /usr/include/c++/10/thread:215
|
#8 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
|
#9 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
|
#10 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6
|
|
Thread 3 (Thread 0x7f243a8e9700 (LWP 3251053)):
|
#0 0x00007f244073a2e9 in syscall () from /lib/x86_64-linux-gnu/libc.so.6
|
#1 0x0000559ce0f83a36 in my_getevents (ctx=0x7f24400e7000, min_nr=1, nr=256, ev=0x7f243a8e6c00) at /home/ycp/source/mariadb-server/11.1/src/tpool/aio_linux.cc:63
|
#2 0x0000559ce0f83bfe in tpool::aio_linux::getevent_thread_routine (aio=0x559ce32b98f0) at /home/ycp/source/mariadb-server/11.1/src/tpool/aio_linux.cc:105
|
#3 0x0000559ce0f844d8 in std::__invoke_impl<void, void (*)(tpool::aio_linux*), tpool::aio_linux*> (__f=@0x559ce31de9f0: 0x559ce0f83bbb <tpool::aio_linux::getevent_thread_routine(tpool::aio_linux*)>) at /usr/include/c++/10/bits/invoke.h:60
|
#4 0x0000559ce0f8443f in std::__invoke<void (*)(tpool::aio_linux*), tpool::aio_linux*> (__fn=@0x559ce31de9f0: 0x559ce0f83bbb <tpool::aio_linux::getevent_thread_routine(tpool::aio_linux*)>) at /usr/include/c++/10/bits/invoke.h:95
|
#5 0x0000559ce0f843af in std::thread::_Invoker<std::tuple<void (*)(tpool::aio_linux*), tpool::aio_linux*> >::_M_invoke<0ul, 1ul> (this=0x559ce31de9e8) at /usr/include/c++/10/thread:264
|
#6 0x0000559ce0f84368 in std::thread::_Invoker<std::tuple<void (*)(tpool::aio_linux*), tpool::aio_linux*> >::operator() (this=0x559ce31de9e8) at /usr/include/c++/10/thread:271
|
#7 0x0000559ce0f8434c in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (*)(tpool::aio_linux*), tpool::aio_linux*> > >::_M_run (this=0x559ce31de9e0) at /usr/include/c++/10/thread:215
|
#8 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
|
#9 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
|
#10 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6
|
|
Thread 2 (Thread 0x7f24319bb700 (LWP 3251060)):
|
#0 0x00007f2440e60df8 in pthread_cond_clockwait () from /lib/x86_64-linux-gnu/libpthread.so.0
|
#1 0x0000559ce0f80c68 in std::condition_variable::__wait_until_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9e50, __lock=@0x7f24319bab90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f24319bab28: {__d = {__r = 13829482131526435}}) at /usr/include/c++/10/condition_variable:209
|
#2 0x0000559ce0f80165 in std::condition_variable::wait_until<std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (this=0x559ce32b9e50, __lock=@0x7f24319bab90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __atime=@0x7f24319bab28: {__d = {__r = 13829482131526435}}) at /usr/include/c++/10/condition_variable:119
|
#3 0x0000559ce0f7f65a in std::condition_variable::wait_for<long, std::ratio<1l, 1000l> > (this=0x559ce32b9e50, __lock=@0x7f24319bab90: {_M_device = 0x559ce32b9a38, _M_owns = true}, __rtime=@0x559ce32b9a60: {__r = 60000}) at /usr/include/c++/10/condition_variable:172
|
#4 0x0000559ce0f7cf05 in tpool::thread_pool_generic::wait_for_tasks (this=0x559ce32b9920, lk=@0x7f24319bab90: {_M_device = 0x559ce32b9a38, _M_owns = true}, thread_data=0x559ce32b9e50) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:480
|
#5 0x0000559ce0f7d111 in tpool::thread_pool_generic::get_task (this=0x559ce32b9920, thread_var=0x559ce32b9e50, t=0x7f24319babd8) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:533
|
#6 0x0000559ce0f7d397 in tpool::thread_pool_generic::worker_main (this=0x559ce32b9920, thread_var=0x559ce32b9e50) at /home/ycp/source/mariadb-server/11.1/src/tpool/tpool_generic.cc:578
|
#7 0x0000559ce0f82f8a in std::__invoke_impl<void, void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__f=@0x7f2424000e98: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>, __t=@0x7f2424000e90: 0x559ce32b9920) at /usr/include/c++/10/bits/invoke.h:73
|
#8 0x0000559ce0f82e7a in std::__invoke<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> (__fn=@0x7f2424000e98: (void (tpool::thread_pool_generic::*)(tpool::thread_pool_generic * const, tpool::worker_data *)) 0x559ce0f7d336 <tpool::thread_pool_generic::worker_main(tpool::worker_data*)>) at /usr/include/c++/10/bits/invoke.h:95
|
#9 0x0000559ce0f82dad in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::_M_invoke<0ul, 1ul, 2ul> (this=0x7f2424000e88) at /usr/include/c++/10/thread:264
|
#10 0x0000559ce0f82d4a in std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> >::operator() (this=0x7f2424000e88) at /usr/include/c++/10/thread:271
|
#11 0x0000559ce0f82d2e in std::thread::_State_impl<std::thread::_Invoker<std::tuple<void (tpool::thread_pool_generic::*)(tpool::worker_data*), tpool::thread_pool_generic*, tpool::worker_data*> > >::_M_run (this=0x7f2424000e80) at /usr/include/c++/10/thread:215
|
#12 0x00007f2440901ed0 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6
|
#13 0x00007f2440e59ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
|
#14 0x00007f2440740a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6
|
|
Thread 1 (Thread 0x7f2440cf5940 (LWP 3251046)):
|
#0 0x00007f2440e61f44 in pthread_kill () from /lib/x86_64-linux-gnu/libpthread.so.0
|
#1 0x0000559ce1021fec in my_write_core (sig=6) at /home/ycp/source/mariadb-server/11.1/src/mysys/stacktrace.c:424
|
#2 0x0000559ce07b6411 in handle_fatal_signal (sig=6) at /home/ycp/source/mariadb-server/11.1/src/sql/signal_handler.cc:357
|
#3 <signal handler called>
|
#4 0x00007f244067cce1 in raise () from /lib/x86_64-linux-gnu/libc.so.6
|
#5 0x00007f2440666537 in abort () from /lib/x86_64-linux-gnu/libc.so.6
|
#6 0x0000559ce10266d9 in safe_mutex_lock (mp=0x559ce1e45620 <query_cache+160>, my_flags=0, file=0x559ce114f318 "/home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc", line=725) at /home/ycp/source/mariadb-server/11.1/src/mysys/thr_mutex.c:245
|
#7 0x0000559ce0342b1c in inline_mysql_mutex_lock (that=0x559ce1e45620 <query_cache+160>, src_file=0x559ce114f318 "/home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc", src_line=725) at /home/ycp/source/mariadb-server/11.1/src/include/mysql/psi/mysql_thread.h:750
|
#8 0x0000559ce03439f0 in Query_cache::lock (this=0x559ce1e45580 <query_cache>, thd=0x559ce341ca78) at /home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc:725
|
#9 0x0000559ce034a943 in Query_cache::invalidate_table (this=0x559ce1e45580 <query_cache>, thd=0x559ce341ca78, key=0x7fffd598ae90 "mysql", key_length=23) at /home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc:3317
|
#10 0x0000559ce0348541 in Query_cache::invalidate_by_MyISAM_filename (this=0x559ce1e45580 <query_cache>, filename=0x559ce367ce00 "./mysql/spider_table_sts.MAD") at /home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc:2439
|
#11 0x0000559ce0344e21 in query_cache_invalidate_by_MyISAM_filename (filename=0x559ce367ce00 "./mysql/spider_table_sts.MAD") at /home/ycp/source/mariadb-server/11.1/src/sql/sql_cache.cc:1251
|
#12 0x0000559ce09cd467 in reset_thd_trn (thd=0x559ce341ca78, first_table=0x559ce36bf478) at /home/ycp/source/mariadb-server/11.1/src/storage/maria/ha_maria.cc:3105
|
#13 0x0000559ce09ce981 in maria_commit (hton=0x559ce3202ca8, thd=0x559ce341ca78, all=false) at /home/ycp/source/mariadb-server/11.1/src/storage/maria/ha_maria.cc:3629
|
#14 0x0000559ce07bd685 in commit_one_phase_2 (thd=0x559ce341ca78, all=false, trans=0x559ce31399d8, is_real_trans=true) at /home/ycp/source/mariadb-server/11.1/src/sql/handler.cc:2126
|
#15 0x0000559ce07bd442 in ha_commit_one_phase (thd=0x559ce341ca78, all=false) at /home/ycp/source/mariadb-server/11.1/src/sql/handler.cc:2079
|
#16 0x0000559ce07bc562 in ha_commit_trans (thd=0x559ce341ca78, all=false) at /home/ycp/source/mariadb-server/11.1/src/sql/handler.cc:1873
|
#17 0x0000559ce03648fc in THD::commit_whole_transaction_and_close_tables (this=0x559ce341ca78) at /home/ycp/source/mariadb-server/11.1/src/sql/sql_class.cc:6100
|
#18 0x00007f243825ac23 in spider_sys_close_table (thd=0x559ce341ca78, open_tables_backup=0x7fffd598bcd0) at /home/ycp/source/mariadb-server/11.1/src/storage/spider/spd_sys_table.cc:596
|
#19 0x00007f2438266ba4 in spider_sys_insert_or_update_table_sts (thd=0x559ce341ca78, name=0x7f240403dc60 "./auto_test_local/tbl_a", name_length=23, stat=0x7f24041ff728) at /home/ycp/source/mariadb-server/11.1/src/storage/spider/spd_sys_table.cc:3235
|
#20 0x00007f24382e68f5 in spider_free_share (share=0x7f24041feee8) at /home/ycp/source/mariadb-server/11.1/src/storage/spider/spd_table.cc:5615
|
#21 0x00007f2438320212 in ha_spider::close (this=0x7f24041d3380) at /home/ycp/source/mariadb-server/11.1/src/storage/spider/ha_spider.cc:576
|
#22 0x0000559ce07c1aaf in handler::ha_close (this=0x7f24041d3380) at /home/ycp/source/mariadb-server/11.1/src/sql/handler.cc:3556
|
#23 0x0000559ce055b362 in closefrm (table=0x7f24041d2a78) at /home/ycp/source/mariadb-server/11.1/src/sql/table.cc:4674
|
#24 0x0000559ce06d913d in intern_close_table (table=0x7f24041d2a78) at /home/ycp/source/mariadb-server/11.1/src/sql/table_cache.cc:225
|
#25 0x0000559ce06d9519 in tc_purge () at /home/ycp/source/mariadb-server/11.1/src/sql/table_cache.cc:317
|
#26 0x0000559ce03270ca in purge_tables () at /home/ycp/source/mariadb-server/11.1/src/sql/sql_base.cc:328
|
#27 0x0000559ce06dab7c in tdc_start_shutdown () at /home/ycp/source/mariadb-server/11.1/src/sql/table_cache.cc:639
|
#28 0x0000559ce02491f9 in clean_up (print_message=true) at /home/ycp/source/mariadb-server/11.1/src/sql/mysqld.cc:1996
|
#29 0x0000559ce0251bdc in mysqld_main (argc=156, argv=0x559ce3133580) at /home/ycp/source/mariadb-server/11.1/src/sql/mysqld.cc:6056
|
#30 0x0000559ce0245595 in main (argc=8, argv=0x7fffd598c158) at /home/ycp/source/mariadb-server/11.1/src/sql/main.cc:34
|
The crash happens when shutting down the server. For example, the
following is a simplified case:
--disable_query_log
|
--disable_result_log
|
--source ../../t/test_init.inc
|
--enable_result_log
|
--enable_query_log
|
|
set global query_cache_type= on;
|
evalp CREATE SERVER srv FOREIGN DATA WRAPPER mysql
|
OPTIONS (SOCKET "$MASTER_1_MYSOCK", DATABASE 'test',user 'root');
|
create table t2 (c int);
|
create table t1 (c int) ENGINE=Spider
|
COMMENT='WRAPPER "mysql", srv "srv",TABLE "t2"';
|
SELECT * FROM t1;
|
shutdown;
|
Even with the shutdown; removed, we get the same "safe_mutex: Trying
to lock uninitialized mutex" failure, as well as a crash log in
mysqld.1.1.err.
The call chain (innermost first):
safe_mutex_lock > inline_mysql_mutex_lock > Query_cache::invalidate_table > reset_thd_trn > commit_one_phase_2 > ha_commit_one_phase > THD::commit_whole_transaction_and_close_tables > spider_sys_close_table > spider_free_share > handler::ha_close > closefrm > intern_close_table > purge_tables > tdc_start_shutdown > clean_up > mysqld_main
The cause is that spider updates its sts system tables in its
close() method, which during shutdown runs too late: after the call
to query_cache_destroy(), which has already destroyed the
structure_guard_mutex.
10.10 |
// Inside clean_up():
|
query_cache_destroy();
|
// [... 4 lines elided]
|
tdc_start_shutdown();
|
However, to begin with, it is not clear whether the sts and crd
tables need to be updated at all when freeing a spider_share.
Removing the calls to spider_sys_insert_or_update_table_sts() and
spider_sys_insert_or_update_table_crd() from spider_free_share()
fixes the test, and does not introduce any regressions in existing
tests either; see the following commit:
6931635fa0f upstream/bb-10.10-mdev-28739 MDEV-28739 [demo] Commenting out spider sts and crd table updates fixes the problem
In fact, spider_free_share() is the only place where the spider sts
and crd tables are updated. The sts table, for example, is used to
populate the spider handler's stat field. If the sts table has no
info about a remote table, spider issues a "show table status" to
obtain that information.
Apart from failure paths of spider_open_share() etc.,
spider_free_share() is only called in ha_spider::close(). So the sts
and crd tables are only updated at ha_spider::close(). The freshness
of the data is thus questionable. It remains to be seen whether
spider checks for the freshness before querying these tables. In any
case, this ticket is now about the usefulness of the spider sts and
crd system tables, so I will group it with other relevant tickets
about these tables under the label spider-sts-crd
(<https://jira.mariadb.org/issues/?jql=labels%20%3D%20spider-sts-crd>).
Draft commit, will send to review tomorrow if nothing new occurs:
cb8df270bb3 MDEV-28739 MDEV-29421 Remove updating spider sts/crd tables from spider_free_share()
We are removing the use of the spider sts/crd system tables
altogether, since the only reference to the update function is
removed. See also the commit message.
Hi holyfoot, ptal thanks (based on 11.0)
c29972b314690bc94b890a2aaa6f2898c7880432
|
MDEV-28739 MDEV-29421 Remove spider persistent table stats
|
|
We remove the call to update spider persistent table stats (sts/crd)
|
in spider_free_share(). This prevents spider from opening and closing
|
further tables during close(), which fixes the following issues:
|
|
MDEV-28739: ha_spider::close() is called during tdc_start_shutdown(),
|
which is called after query_cache_destroy(). Closing the sts/crd Aria
|
tables will trigger a call to Query_cache::invalidate_table(), which
|
will attempt to use the query cache mutex structure_guard_mutex
|
destroyed previously.
|
|
MDEV-29421: during ha_spider::close(), spider_free_share() could
|
trigger another spider_free_share() through updating sts/crd table,
|
because open_table() calls tc_add_table(), which could trigger another
|
ha_spider::close()...
|
|
Since spider sts/crd system tables are only updated here, there's no
|
use for these tables any more, and we remove all uses of these tables
|
too.
|
|
The removal should not cause any performance issue, as in memory
|
spider table stats are only updated based on a time
|
interval (spider_sts_interval and spider_crd_interval), which defaults
|
to 10 seconds. It should not affect accuracy either, due to the
|
infrequency of server restart. And inaccurate stats are not a problem
|
for optimizer anyway.
|
|
To be on the safe side, we defer the removal of the spider sts/crd
|
tables themselves to future.
|
Here's a 10.4 version:
87bf3f002e6 MDEV-28739 MDEV-29421 Remove spider persistent table stats
pushed bdfd93d30c1bba2a8932477f16f6280ee665d818 to 10.4
Just noticed the comment in the description saying this was not reproducible on 10.4 at the time of reporting. For the record, I could reproduce it on 10.4, which is why I made a 10.4 fix.
Conflict resolution (old commits; make sure to add an add_suppression call in the test mdev_27575):
- 10.4->10.5: 1a97c706f072a55e27f9580e7bf6dc8727f4316d
- 10.5->10.6: ad7620a25cf7f8f517c9f4373d9c1d00853a6141
- 10.6->10.11: 7ba4f323c6990d1825a2754bf8577f16f5d4cf40
- 10.11->11.0: e3ce81b1e90c73b1e21c8751c78572893e55b456
- ES23.08: 57ab808c0cf34baa34a60df08368b1f4ae9e640e
Note that the use of the same server link makes the test case in the description degenerate, at least in the sense that it switches the query cache type for both the parent and the child at once. Possibly this can be dealt with by changing query_cache_type at the session level in the connection where the Spider table is opened, but that does not seem very reliable.
I would recommend, after fixing the issue, adding a regression test with proper separation between the parent and the child nodes and with the corresponding combinations of enabled/disabled query cache: enabled only on the parent, enabled only on the child (or, depending on the fix, maybe more than one child), and enabled on both.