Details
Type: Task
Priority: Major
Status: Open
Resolution: Unresolved
Description
If you want to dump a subset of schemas/tables to create a 'light' slave, there is no way to dump the tables consistently in one shot, and, more problematic still, no way to obtain a single GTID valid for all of them to start replication from.
If you dump one schema at a time, apart from the dumps not being consistent with each other, you would have to start replication from the earliest GTID (the one with the lowest seqno) so as not to lose any transactions; but then the schemas dumped after the first would receive duplicate transactions.
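To make the duplicate-transaction problem concrete, here is a hedged sketch of dumping one schema at a time (database names are hypothetical; `--single-transaction` and `--master-data` are standard mysqldump options for a consistent snapshot plus binlog/GTID position):

```shell
# Each dump is internally consistent, but taken at a different
# binlog position, so each records a different GTID.
mysqldump --single-transaction --master-data=2 db1 > db1.sql  # records T1
# ... writes continue on the master in the meantime ...
mysqldump --single-transaction --master-data=2 db2 > db2.sql  # records T2 > T1
# Starting replication from T1 replays the [T1, T2) interval,
# which db2.sql already contains -> duplicate transactions in db2.
# Starting from T2 instead silently loses [T1, T2) changes for db1.
```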
It would be good to have a syntax like:
{{mysqldump ... --tables <db1>.<tableA> <db2>.<tableB>}}
Currently, one workaround is to apply (and then remove) replication filters for the intervals between dumps:
T1: dump SchemaA tables
T2: dump SchemaB tables
T3: dump SchemaC tables
(pseudo-code; the actual commands are CHANGE MASTER / START SLAVE ... UNTIL)
START SLAVE FROM T1 TO T2, filtering out SchemaB and SchemaC (and all other schemas not dumped at all)
START SLAVE FROM T2 TO T3, filtering out SchemaC (and all other schemas and tables not dumped/needed)
START SLAVE FROM T3, filtering out all other schemas and tables not dumped/needed
Quite a bit more complicated than a single consistent dump carrying one GTID.
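The first filtered window above could look roughly like the following MariaDB-style sketch. All GTID values are hypothetical placeholders, and it assumes the replication filter variables can be set with SET GLOBAL while the slave is stopped (true on MariaDB); MySQL would use CHANGE REPLICATION FILTER instead:

```shell
# Window [T1, T2): replay only SchemaA, stop at T2.
mysql -e "SET GLOBAL replicate_wild_ignore_table = 'SchemaB.%,SchemaC.%'"
mysql -e "CHANGE MASTER TO master_use_gtid = slave_pos"
mysql -e "SET GLOBAL gtid_slave_pos = '0-1-100'"          # T1 (placeholder)
mysql -e "START SLAVE UNTIL master_gtid_pos = '0-1-200'"  # stop at T2
# Wait for the slave to stop at T2, relax the filter to ignore only
# SchemaC, repeat for [T2, T3), and finally START SLAVE with no UNTIL
# from T3 with only the permanent filters in place.
```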