[MDEV-33057] ALTER TABLE from S3 to a different engine on leader breaks replicas with binlog_alter_two_phase; Created: 2023-12-18  Updated: 2023-12-18

Status: Open
Project: MariaDB Server
Component/s: Replication, Storage Engine - S3
Affects Version/s: 10.11.6
Fix Version/s: None

Type: Bug Priority: Major
Reporter: VAROQUI Stephane Assignee: Unassigned
Resolution: Unresolved Votes: 0
Labels: None

Issue Links:
Problem/Incident
is caused by MDEV-11675 Lag Free Alter On Slave Closed

 Description   

The setup is a replication cluster mixing InnoDB tables (the active dataset) and S3 tables (the read-only dataset), stored on a single shared S3 server.

To modify the content of an S3 table, we alter it back to InnoDB on the leader of the replication cluster.
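For illustration, the lifecycle looks roughly like this (a minimal sketch using the table name that appears in the processlist further down; the exact table definition is not part of this report):

-- On the leader: archive the table to shared S3 storage (it becomes read-only)
ALTER TABLE observations.climato_controle_terrestre ENGINE=S3;

-- Later on the leader: convert it back to InnoDB so it can be modified again
ALTER TABLE observations.climato_controle_terrestre ENGINE=InnoDB;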

The new InnoDB table gets deleted on all replicas, probably via table discovery, which breaks the ability to fail over the cluster without a replication filter on the modified table.

The solution would be to inject a CREATE OR REPLACE DDL statement plus row events for all fetched rows, so that the InnoDB table is materialized on all replicas. Looking at the binlogs, we can see an attempt at doing this using binlog-alter-two-phase.
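A sketch of what the replicas would effectively need to apply (the column list here is hypothetical, invented for illustration; the real events would be row-based, not literal INSERT statements):

CREATE OR REPLACE TABLE observations.climato_controle_terrestre (
  id BIGINT NOT NULL PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB;

-- ...followed by row events (the equivalent of INSERTs) for every row fetched
-- from the S3 table, fully materializing the table on each replica, e.g.:
INSERT INTO observations.climato_controle_terrestre (id, payload) VALUES (1, '...');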

The reason it fails: with optimistic parallel replication and semi-synchronous replication, the replication SQL thread receives the ALTER TABLE but deadlocks against a DROP TABLE (SHOW PROCESSLIST output; columns are Id, User, Host, db, Command, Time, State, Info, Progress):

|      6 | system user |                 | NULL         | Slave_IO     | 869406 | Waiting for master to send event              | NULL                                                             |    0.000 |
|      8 | system user |                 | NULL         | Slave_worker | 869406 | Waiting for work from SQL thread              | NULL                                                             |    0.000 |
|     10 | system user |                 | observations | Slave_worker |   2562 | Waiting for table metadata lock               | DROP TABLE IF EXISTS `observations`.`climato_controle_terrestre` |    0.000 |
|      9 | system user |                 | NULL         | Slave_worker | 869406 | Waiting for work from SQL thread              | NULL                                                             |    0.000 |
|     11 | system user |                 | observations | Slave_worker |   2565 | NULL                                          | alter table climato_controle_terrestre engine=innodb             |    0.000 |
|      7 | system user |                 | NULL         | Slave_SQL    | 155427 | Waiting for room in worker thread event queue | NULL   

Killing the ALTER and the DROP TABLE worker threads and restarting the slave recovers the InnoDB table on the replica.
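Concretely, the recovery on a stuck replica amounts to something like the following (thread ids taken from the processlist above; a sketch, not a validated procedure):

-- Kill the worker blocked on the table metadata lock (the DROP TABLE)
KILL 10;
-- Kill the worker running the ALTER TABLE
KILL 11;
-- Restart replication
STOP SLAVE;
START SLAVE;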

For the record, we used the following S3 settings (a configuration-file sketch follows the listing):
MariaDB [observations]> show variables like 's3%';

Variable_name Value
s3_access_key
s3_block_size 4194304
s3_bucket archive-db-obs
s3_debug OFF
s3_host_name 10.0.0.182
s3_pagecache_age_threshold 300
s3_pagecache_buffer_size 134217728
s3_pagecache_division_limit 100
s3_pagecache_file_hash_size 512
s3_port 9000
s3_protocol_version Auto
s3_region  
s3_replicate_alter_as_create_select ON
s3_secret_key
s3_slave_ignore_updates ON
s3_use_http ON
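
For reference, the equivalent startup configuration would look roughly like this (a sketch reconstructed from the values above; access and secret keys omitted):

[mariadb]
s3_host_name = 10.0.0.182
s3_port = 9000
s3_use_http = ON
s3_bucket = archive-db-obs
s3_slave_ignore_updates = ON
s3_replicate_alter_as_create_select = ON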

We can confirm that disabling the feature with set global binlog_alter_two_phase=OFF; as suggested by Kristian Nielsen is a valid workaround: the table gets restored on all replicas without replication issues.
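
For completeness, the workaround as applied (the config-file line is an assumption about how to persist it across restarts):

SET GLOBAL binlog_alter_two_phase = OFF;
SHOW GLOBAL VARIABLES LIKE 'binlog_alter_two_phase';

-- and in the server configuration file, to survive restarts:
-- binlog_alter_two_phase = OFF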



 Comments   
Comment by Kristian Nielsen [ 2023-12-18 ]

I do not follow from the description why the table gets deleted on the replicas. Maybe I'm not sufficiently familiar with how S3 tables work - are they physically stored in some S3 cloud storage, with a storage engine layer that merely translates between SQL and corresponding network requests to the S3 storage?

If the ALTER TABLE t_s3 ENGINE=InnoDB succeeds on the leader, then why doesn't it work the same on the replicas? Is the intention that the table should be altered to InnoDB on all the replicas, or only on the leader?

I also did not understand the proposed solution "to inject cre", can you elaborate what "cre" means?

Comment by Kristian Nielsen [ 2023-12-18 ]

From Zulip discussions, found that --binlog-alter-two-phase appears to be enabled. This explains how alter table and DROP TABLE end up replicating in parallel, and probably indicates a bug in the --binlog-alter-two-phase feature.
