Details
Type: Bug
Status: Closed
Priority: Critical
Resolution: Not a Bug
Affects Version: 10.1.22
Fix Version: None
Description
The setup is master-slave:
1 master -> 3 slaves, all nodes on release 10.1.21.
We set up a new slave on 10.1.22 via mysqldump --gtid --master-data --single-transaction.
As soon as we run START SLAVE on the new 10.1.22 slave, the IO threads of all the other 10.1.21 slaves stop replicating from the master.
Running STOP SLAVE on the new slave makes all the other slaves recover.
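For reference, the provisioning step on the new slave can be sketched roughly as follows (host name and credentials are placeholders, not taken from this report):

```sql
-- On the new 10.1.22 slave, after loading the mysqldump taken with
-- --gtid --master-data --single-transaction (the dump sets gtid_slave_pos):
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',   -- placeholder host
  MASTER_USER = 'repl',                 -- placeholder replication user
  MASTER_PASSWORD = '...',              -- placeholder, elided
  MASTER_USE_GTID = slave_pos;

START SLAVE;
-- At this point the IO threads on the other 10.1.21 slaves stop;
-- STOP SLAVE on this node lets them recover.
```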
The relevant replication settings are:

plugin_load = "semisync_master.so;semisync_slave.so;sql_errlog.so"
rpl_semi_sync_master = ON
rpl_semi_sync_slave = ON
loose_rpl_semi_sync_master_enabled = ON
loose_rpl_semi_sync_slave_enabled = ON
slave_parallel_mode = optimistic
slave_parallel_threads = 4
binlog_format = ROW
binlog_checksum = 1
replicate_annotate_row_events = 1
log_slow_slave_statements = 1
log_slow_verbosity = query_plan,explain
log_warnings = 2
optimizer_switch = 'orderby_uses_equalities=on'
innodb_defragment = 1
innodb_purge_threads = 8
innodb_print_all_deadlocks = 1
innodb_flush_neighbors = 1
innodb_stats_on_metadata = 0
I noted that the master keeps 1 month of binlogs (22 GB); this could be worth investigating.
We have tried a few things on this new slave to understand what could be wrong here:

Disabled GTID via master_use_gtid=no and provided the old-style binlog coordinates.
Issue still visible.

Disabled all the semisync plugins.
Issue still visible.

Disabled the following parameters:
binlog_checksum = 1
replicate_annotate_row_events = 1
slave_parallel_mode = optimistic
slave_parallel_threads = 4
Issue still visible.
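The individual tests above would look roughly like this (the binlog coordinates are illustrative placeholders; rpl_semi_sync_*_enabled are the runtime switches for the semisync plugins):

```sql
-- Test 1: fall back from GTID to old-style coordinates on the new slave:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_USE_GTID = no,
  MASTER_LOG_FILE = 'mysql-bin.000123',  -- placeholder file
  MASTER_LOG_POS = 4;                    -- placeholder position
START SLAVE;

-- Test 2: disable semisync at runtime (on the master and on the slaves respectively):
SET GLOBAL rpl_semi_sync_master_enabled = OFF;
SET GLOBAL rpl_semi_sync_slave_enabled = OFF;
```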
More info may be provided via their support contract; I will link it to this JIRA issue when I get some feedback.
Any suggestion on how to move forward is welcome.