[MDEV-24351] S3, same-backend replication: Dropping a table on master causes error on slave Created: 2020-12-04 Updated: 2020-12-08 Resolved: 2020-12-08 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Replication |
| Affects Version/s: | 10.5 |
| Fix Version/s: | 10.5.9 |
| Type: | Bug | Priority: | Major |
| Reporter: | Sergei Petrunia | Assignee: | Sergei Petrunia |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None | ||
| Issue Links: |
|
||||
| Description |
|
This originates from CLX-630. Take the S3 storage engine and set it up the way it is used when both master and slave point at the same S3 bucket (the specific option of interest is s3_replicate_alter_as_create_select=1). If the master then runs DROP TABLE tbl before it has discovered the existence of table tbl, the drop succeeds, but the binlog records DROP TABLE tbl instead of DROP TABLE IF EXISTS tbl. By the time the slave executes the statement, table tbl is already gone from the shared bucket, so the slave stops with an error. MTR testcase to demonstrate it: mysql-test/suite/s3/rpl_b.cnf
mysql-test/suite/s3/rpl_b.test
This will show something like:
which is incorrect; it should be DROP TABLE IF EXISTS `t1`. In this two-node testcase the slave does not actually fail, because it had "seen" the table t1 before. With three nodes (one to create the table, then master->slave to run the testcase), we would get a replication failure. |
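A minimal sketch of the failing sequence (this is not the content of rpl_b.test; the table name and the third-node setup are assumptions for illustration):

```sql
-- Node A (shares the S3 bucket): creates the table, so it exists in S3
-- but the master below has not yet discovered it locally.
CREATE TABLE t1 (a INT) ENGINE=S3;

-- Master (same bucket, t1 not yet discovered):
SET GLOBAL s3_replicate_alter_as_create_select=1;
DROP TABLE t1;   -- succeeds via S3 discovery during the drop

-- Binlog gets (the bug):        DROP TABLE `t1`
-- Binlog should get:            DROP TABLE IF EXISTS `t1`

-- Slave (same bucket): t1 is already gone from S3, so applying a plain
-- DROP TABLE `t1` fails (ER_BAD_TABLE_ERROR, "Unknown table"),
-- and replication stops.
```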
| Comments |
| Comment by Sergei Petrunia [ 2020-12-04 ] |
|
http://lists.askmonty.org/pipermail/commits/2020-December/014376.html |
| Comment by Sergei Petrunia [ 2020-12-08 ] |
|
Not reproducible in 10.4-ES |