When converting a table (here test.t1) from S3 to another engine, the following is logged to the binary log:
DROP TABLE IF EXISTS test.t1;
CREATE OR REPLACE TABLE test.t1 (full create of the result table) ENGINE=new_engine
INSERT of the rows into test.t1 in row-based binary log format
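A minimal sketch of the scenario that produces this logging; the table definition and target engine are illustrative:

```sql
-- Hypothetical example: table and engine names are not from the original report.
CREATE TABLE test.t1 (a INT) ENGINE=InnoDB;
ALTER TABLE test.t1 ENGINE=S3;        -- move the table to S3 (read-only)
ALTER TABLE test.t1 ENGINE=InnoDB;    -- convert back from S3; this ALTER emits
                                      -- the DROP/CREATE/INSERT events listed above
```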
The bug is that the above statements are logged to the binary log one by one, as each is executed, rather than together once the ALTER TABLE has completed.
This means that a fast slave configured to use the same S3 storage as the master could read the DROP and CREATE from the binary log before the master has finished the ALTER TABLE. In this case the slave will ignore the DROP (as it applies to an S3 table), but it will fail on the CREATE because the table still exists in S3 and the CREATE cannot complete. (The REPLACE part is ignored by the slave because the table is in S3.)
The fix is to ensure that all of the above statements are written to the binary log only after the table has been deleted from S3.