Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Duplicate
- Affects Version/s: 10.1.13
- Fix Version/s: None
- Environment: Debian 8
Description
We have a Galera multi-master cluster with 3 nodes. We are experiencing random crashes of the mysqld process on some nodes when logrotate is executed.
Daily logrotate runs at 6:25, and the database sometimes crashes at exactly that time. After the update from 10.0 to 10.1.13 everything was fine for a few days. Then the database crashed on Node1 two times in a row, day after day. At that point I disabled logrotate for MySQL on Node1 for a few days, and there were no crashes. I then separated this node from the others, disabled replication and executed logrotate manually a few times (roughly as sketched below); nothing happened. I restored the node to the cluster and re-enabled logrotate. The next day a crash happened, but on Node2. The next day, nothing. The day after that, another crash, this time on both Node1 and Node2.
On a similar testing environment these errors do not occur.
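For reference, forcing the rotation by hand can be done roughly like this (a sketch of the manual test; the exact invocation is not recorded in this report):

    # Force an immediate rotation of the MariaDB logs outside the daily cron run;
    # this runs the same postrotate script (and flush-logs call) shown below.
    logrotate -f /etc/logrotate.d/mysql-server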
Logs are configured to go to /var/log/mysql/mariadb-error.log and /var/log/mysql/mariadb-slow.log. The general query log is not enabled and cannot be enabled.
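For clarity, the server-side settings that would produce those paths look roughly like this (an assumed sketch; the actual my.cnf is not included in this report):

    [mysqld]
    # Assumed log settings matching the paths mentioned above
    log_error           = /var/log/mysql/mariadb-error.log
    slow_query_log      = 1
    slow_query_log_file = /var/log/mysql/mariadb-slow.log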
/etc/logrotate.d/mysql-server content:
/var/log/mysql/mariadb-error.log /var/log/mysql/mariadb-slow.log {
        daily
        rotate 7
        missingok
        create 640 mysql adm
        compress
        sharedscripts
        postrotate
                test -x /usr/bin/mysqladmin || exit 0

                if [ -f `my_print_defaults --mysqld | grep -oP "pid-file=\K[^$]+"` ]; then
                        # If this fails, check debian.conf!
                        mysqladmin --defaults-file=/etc/mysql/debian.cnf flush-logs
                fi
        endscript
}
(standard except for the log locations, but those files exist and the paths are correct)
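The postrotate block above boils down to roughly the following (a readability sketch with the pid-file lookup expanded into a variable); the mysqladmin flush-logs call is what shows up as the query "flush logs" in the crash log below:

    # Equivalent of the postrotate step: if the server's pid file exists,
    # ask the server to flush (rotate) all of its logs, including the binary log.
    PIDFILE=$(my_print_defaults --mysqld | grep -oP "pid-file=\K[^$]+")
    if [ -f "$PIDFILE" ]; then
        mysqladmin --defaults-file=/etc/mysql/debian.cnf flush-logs
    fi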
I don't know if it's related to Galera or not.
Error log from the crash (retrieved after the node was restored):
160623 6:25:03 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

To report this bug, see https://mariadb.com/kb/en/reporting-bugs

We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

Server version: 10.1.13-MariaDB-1~jessie
key_buffer_size=134217728
read_buffer_size=2097152
max_used_connections=7
max_threads=102
thread_count=23
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 759828 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0x7fd4c3c77008
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fd571ee91f8 thread_stack 0x48400
/usr/sbin/mysqld(my_print_stacktrace+0x2e)[0x7fd576ce6e3e]
/usr/sbin/mysqld(handle_fatal_signal+0x34d)[0x7fd57682870d]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf8d0)[0x7fd575e568d0]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG21do_checkpoint_requestEm+0x98)[0x7fd5768ddbd8]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG20checkpoint_and_purgeEm+0x11)[0x7fd5768ddc01]
/usr/sbin/mysqld(_ZN13MYSQL_BIN_LOG16rotate_and_purgeEb+0x7e)[0x7fd5768e009e]
/usr/sbin/mysqld(_Z20reload_acl_and_cacheP3THDyP10TABLE_LISTPi+0x131)[0x7fd57678ce51]
/usr/sbin/mysqld(_Z21mysql_execute_commandP3THD+0x1405)[0x7fd57669eab5]
/usr/sbin/mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x26e)[0x7fd5766a6dbe]
/usr/sbin/mysqld(+0x4205b9)[0x7fd5766a75b9]
/usr/sbin/mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1a24)[0x7fd5766a9614]
/usr/sbin/mysqld(_Z10do_commandP3THD+0x16e)[0x7fd5766aa40e]
/usr/sbin/mysqld(_Z24do_handle_one_connectionP3THD+0x182)[0x7fd576773752]
/usr/sbin/mysqld(handle_one_connection+0x40)[0x7fd576773910]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x80a4)[0x7fd575e4f0a4]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7fd573ffa87d]

Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x7fd55f1f3020): flush logs
Connection ID (thread ID): 1507663
Status: NOT_KILLED

Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on
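The backtrace shows the crash inside the binary log rotation path (MYSQL_BIN_LOG::rotate_and_purge -> checkpoint_and_purge -> do_checkpoint_request) while handling the FLUSH LOGS issued from the postrotate script. A possible mitigation, in the spirit of the linked MDEV-11610 and not verified here, would be to flush only the logs that logrotate actually rotates, so the binary-log rotation code is not touched:

    # Hypothetical narrower postrotate command: flush only the error and slow logs
    # instead of a full FLUSH LOGS, avoiding binary-log rotation.
    mysql --defaults-file=/etc/mysql/debian.cnf -e "FLUSH LOCAL ERROR LOGS; FLUSH LOCAL SLOW LOGS;"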
Issue Links
- is duplicated by
  - MDEV-11550 Crash when running flush logs (Closed)
- relates to
  - MDEV-9510 Segmentation fault in binlog thread causes crash (Closed)
  - MDEV-11610 Logrotate to only FLUSH LOCAL ERROR LOGS, ENGINE LOGS, GENERAL LOGS (Closed)