Details
Type: Task
Status: Closed
Priority: Major
Resolution: Fixed
Description
S3 tables work with replication if the slave uses its own S3 storage (and thus duplicates the data stored in S3). If master and slave share the same S3 storage, then one must
configure the master to not replicate the S3 tables (the S3 tables will be automatically discovered on the slave). See https://mariadb.com/kb/en/library/replication-filters/ for how to add a filter.
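For illustration, such a filter could be set on the slave as below; the table-name pattern is purely hypothetical, since replication filters can only match database and table names, not storage engines:

  -- On the slave; replication filters can only be changed while the slave is stopped
  STOP SLAVE;
  SET GLOBAL replicate_wild_ignore_table = 'mydb.s3\_%';
  START SLAVE;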
The intention of this task is to make this more automatic.
How to solve replication of commands involving S3 on the slave
There are two different cases:
1) Master and slave use different S3 storage for S3 tables
In this case the slave should be able to execute exactly the same
commands as the master. All commands with the S3 engine should work on
both master and slave.
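A minimal sketch of case 1 (all names are made up): each server executes the same statements but writes to its own bucket, so the S3 data is duplicated:

  -- Runs identically on master and slave; each writes to its own S3 storage
  CREATE TABLE mydb.old_data (id INT, payload BLOB) ENGINE=InnoDB;
  INSERT INTO mydb.old_data VALUES (1, 'x');
  ALTER TABLE mydb.old_data ENGINE=S3;   -- archive the table to S3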
2) Master and slave share the same S3 storage for S3 tables
- The slave thread should treat all S3 tables as if they were
blackhole tables, except that we would not create any .frm files for
DDLs. This means for the slave thread (see the sketch after this list):
- All CREATE TABLEs should be ignored (and no .frm file created)
- All updates should be ignored
- DROP TABLE should only remove any local .frm definition, not touch the S3 data
- ALTER TABLE of a local table to S3 should work like DROP TABLE
- RENAME of S3 tables should work like DROP TABLE (as the table is already
renamed in S3 and will be automatically discovered on the slave)
- Selects should treat the table as empty (maybe?)
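A sketch of these semantics as seen by the slave SQL thread (statement and table names are illustrative):

  -- Statements arriving via replication when master and slave share S3 storage:
  ALTER TABLE old_data ENGINE=S3;       -- ignored; no local .frm is created
  DROP TABLE old_data;                  -- removes only a local .frm, if any; S3 data is untouched
  RENAME TABLE old_data TO old_data2;   -- treated like DROP TABLE old_data; the renamed
                                        -- table is later discovered from S3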
- For normal users on the slave, an S3 table will work as any other table,
except that if the table doesn't exist on the slave, it will automatically be
discovered from S3 when doing a SELECT on the table.
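For example (hypothetical table name), a table archived on the master becomes usable on the slave without any DDL having been replicated:

  -- On the slave; the table was never created locally, but the shared
  -- S3 storage makes it discoverable on first use:
  SELECT COUNT(*) FROM mydb.old_data;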
- On the master, S3 tables work as any other read-only tables, except in
the case of ALTER TABLE s3_table ENGINE=innodb.
The slave would not be able to execute this query as it may not be
able to access the data from s3_table (the table may not exist
when the ALTER is run on the slave).
The solution for this is that when we on the master convert an S3
table to a local on-disk table, we should replicate this as a
CREATE ... SELECT in row format plus a drop of the original S3 table.
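Conceptually, that replication could look like the following sketch (not the literal binlog events; the table definition and names are made up):

  -- Executed on the master:
  ALTER TABLE s3_table ENGINE=InnoDB;
  -- What the slave effectively applies instead:
  CREATE TABLE s3_table (id INT, payload BLOB) ENGINE=InnoDB;  -- new local table
  -- ...followed by the table data as row-format events...
  -- The drop of the original S3 table is then harmless on the slave, since
  -- with shared storage it only removes a local .frm, if any.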
- The above could be achieved by having a new SLAVE option
's3_ignore_slave_updates' (1 by default) that would by default treat
S3 as blackhole for replication.
- For the MASTER we need another option, s3_replicate_alter_as_create_select
(1 by default), to change the replication behavior of ALTER TABLE from an S3 table to a
normal table on the master.
- Both of the above variables should be accessed through general handler functions
(like handler->ignore_slave_updates()) to allow the server to handle the
particulars.
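As a usage sketch, assuming both options end up as settable system variables (the names are the ones proposed above; the final names, defaults, and whether they are dynamic may differ):

  -- On the slave: treat S3 tables as blackhole for the slave thread
  SET GLOBAL s3_ignore_slave_updates = 1;
  -- On the master: replicate ALTER TABLE s3_table ENGINE=InnoDB as CREATE ... SELECT
  SET GLOBAL s3_replicate_alter_as_create_select = 1;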
Issue Links
- is part of: MDEV-17841 S3 Storage engine (Closed)
- relates to: MDEV-22327 Feature request: replicate_do_engine/replicate_ignore_engine (Open)