Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Fix Version: 10.5
Description
This originates from CLX-630.
Let's take the S3 storage engine and set it up the way it is used when both master and slave point at the same S3 bucket (the specific option we're interested in is s3_replicate_alter_as_create_select=1).
Then, if the master runs DROP TABLE tbl before it has discovered the existence of table tbl, it will successfully drop the table, but the binlog will get DROP TABLE tbl instead of DROP TABLE IF EXISTS tbl. By the time the slave executes the statement, the table tbl is already gone from the bucket, which will cause the slave to stop with an error.
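For orientation, such a shared-bucket setup would look roughly like the configuration sketch below; the plugin-loading line, bucket name, and credentials are placeholders, and only the two replication-related options are the ones that matter here:

[mariadb]
plugin-load-add=ha_s3
s3=ON
# placeholders: master and slave must point at the same bucket with the same credentials
s3_bucket=shared-bucket
s3_access_key=...
s3_secret_key=...
# on the master (the option this report is about):
s3_replicate_alter_as_create_select=1
# on the slave:
s3_slave_ignore_updates=1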
MTR testcase to demonstrate it:
mysql-test/suite/s3/rpl_b.cnf
!include ../rpl/my.cnf
!include ./my.cnf
!include ./slave.cnf
mysql-test/suite/s3/rpl_b.test
--source include/have_s3.inc
--source include/master-slave.inc

connection slave;
show variables like 's3_slave%';

create table t1 (a int, b int) engine=aria;
insert into t1 values (1,1),(2,2),(3,3);
alter table t1 engine=s3;

connection master;
show variables like 's3_replicate%';
drop table t1;
show binlog events;

--sync_slave_with_master

--source include/rpl_end.inc
This will show something like:
master-bin.000001 371 Query 1 479 use `test`; DROP TABLE `t1` /* generated by server */
which is incorrect; it should be DROP TABLE IF EXISTS `t1`.
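For comparison, the expected event would look something like this (offsets are illustrative):
master-bin.000001 371 Query 1 485 use `test`; DROP TABLE IF EXISTS `t1` /* generated by server */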
In this testcase the slave will not fail, because it has already "seen" the table t1. If we had three nodes (one to create the table, then a master->slave pair to run the testcase), we would get a replication failure.
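To illustrate that three-node scenario, a rough sketch (hypothetical, not an actual testcase):

-- node 1: any client writing to the same bucket, outside the replication pair
create table t1 (a int, b int) engine=aria;
insert into t1 values (1,1),(2,2),(3,3);
alter table t1 engine=s3;

-- master: has never opened t1, so it only discovers it while executing the DROP;
-- the statement succeeds but is binlogged without IF EXISTS
drop table t1;

-- slave: by the time the event arrives, the data is already gone from the
-- shared bucket, so applying DROP TABLE `t1` fails and replication stops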