I am replicating some tables daily between two machines by transferring .ibd files, one schema at a time, and I randomly get crashes while doing this.
On source machine:
- FLUSH TABLES ... FOR EXPORT;
- Copy the .ibd files
- UNLOCK TABLES;
- LOCK TABLES ... LOW_PRIORITY WRITE;
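For reference, the export side above can be sketched as a small shell helper that builds the SQL (schema and table names here are hypothetical). One detail worth stressing: the statements must run in a single client session, because UNLOCK TABLES (or the session closing) releases the export lock and removes the .cfg metadata files that FLUSH TABLES ... FOR EXPORT writes next to each .ibd:

```shell
#!/bin/sh
# Sketch only: builds the export-side SQL for a set of tables.
# Schema/table names are placeholders. The FLUSH ... FOR EXPORT lock
# must be held by one session while the files are copied; UNLOCK TABLES
# releases it and deletes the .cfg files the server wrote for each table.
build_export_sql() {
    schema=$1; shift
    list=""
    for t in "$@"; do
        list="$list\`$schema\`.\`$t\`, "
    done
    list=${list%, }                       # drop trailing ", "
    printf 'FLUSH TABLES %s FOR EXPORT;\n' "$list"
    printf -- '-- copy the .ibd and .cfg files for each table now --\n'
    printf 'UNLOCK TABLES;\n'
}

build_export_sql dwf billing invoices
```

The placeholder comment marks where the file copy belongs while the lock is still held.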
On the target, for each table:
- ALTER TABLE <schema>.<tbl> DISCARD TABLESPACE;
- Copy the .ibd file into place, then set proper permissions
- ALTER TABLE <schema>.<tbl> IMPORT TABLESPACE; ( <== this is where it always breaks, and on varying tables)
- UNLOCK TABLES;
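The per-table import sequence on the target can likewise be sketched as a generator of the statements to run (names are placeholders; the copy and ownership fix happen between the two ALTERs, and per the documented transportable-tablespace procedure the .cfg file copied alongside the .ibd lets InnoDB validate the schema during IMPORT):

```shell
#!/bin/sh
# Sketch only: prints the per-table import sequence for the target.
# Schema/table names are placeholders; the copy/chown step is a comment
# because the datadir path and mysqld user differ per installation.
import_table_sql() {
    schema=$1; tbl=$2
    printf 'ALTER TABLE `%s`.`%s` DISCARD TABLESPACE;\n' "$schema" "$tbl"
    printf -- '-- copy %s.ibd (and %s.cfg) into the schema dir, chown to the mysqld user --\n' "$tbl" "$tbl"
    printf 'ALTER TABLE `%s`.`%s` IMPORT TABLESPACE;\n' "$schema" "$tbl"
}

for t in billing invoices; do
    import_table_sql dwf "$t"
done
```

Running the loop prints the three-step sequence once per table, which makes it easy to eyeball the exact statements before executing them.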
Example log below. I tried to follow the documented procedure, but I might be overlooking something. I do things like dropping indexes on large tables at the source when reloading, for speed, but the crash also occurs on tables where I don't do this, so I don't see how it could be related to a specific table modification on the source. Both machines run the same version and the same OS.
My workaround for now is simply to retry after the database service restarts itself, and then it seems to work. That keeps me afloat, but it's a really ugly "solution", as I have plenty of services connected to this provider machine. I want a "hot swap" of all the data at once, at a specific trigger once a day when it is fully cooked on the source; that's why I'm using this approach instead of a Galera cluster setup.
Thank you in advance for looking into this. Let me know what additional data I could provide to ease the review.
2020-11-30 09:08:51 0x7fc8644e0700 InnoDB: Assertion failure in file /home/buildbot/buildbot/padding_for_CPACK_RPM_BUILD_SOURCE_DIRS_PREFIX/mariadb-10.5.5/storage/innobase/dict/dict0dict.cc line 1918
InnoDB: Failing assertion: table->get_ref_count() == 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: about forcing recovery.
201130 9:08:51 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.5.5-MariaDB
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 357211 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x7fc064000c58
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
stack_bottom = 0x7fc8644dfbd8 thread_stack 0x49000
??:0(Wsrep_server_service::log_dummy_write_set(wsrep::client_state&, wsrep::ws_meta const&))[0x55a3c9b0ac23]
??:0(Wsrep_server_service::log_dummy_write_set(wsrep::client_state&, wsrep::ws_meta const&))[0x55a3c9b18b1c]
??:0(wsrep_notify_status(wsrep::server_state::state, wsrep::view const*))[0x55a3ca0f29fa]
??:0(mysql_discard_or_import_tablespace(THD*, TABLE_LIST*, bool))[0x55a3c9cc4f77]
??:0(mysql_parse(THD*, char*, unsigned int, Parser_state*, bool, bool))[0x55a3c9c2bc62]
??:0(dispatch_command(enum_server_command, THD*, char*, unsigned int, bool, bool))[0x55a3c9c360fe]
??:0(MyCTX_nopad::finish(unsigned char*, unsigned int*))[0x55a3ca043d5a]
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7fc0640109d0): ALTER TABLE `dwf`.`billing` IMPORT TABLESPACE
Connection ID (thread ID): 4
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /b001/dwfdb-data/mysql
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size unlimited unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 192044 192044 processes
Max open files 16384 16384 files
Max locked memory 65536 65536 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 192044 192044 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e