[MDEV-5566] Change master to using relay log Created: 2014-01-25 Updated: 2021-12-28 Resolved: 2021-12-28 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Replication, Storage Engine - Spider |
| Fix Version/s: | N/A |
| Type: | Task | Priority: | Major |
| Reporter: | VAROQUI Stephane | Assignee: | Unassigned |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None | ||
| Issue Links: |
|
||||||||||||
| Description |
|
When promoting a slave to master, we need to ensure that the promoted slave has the most up-to-date binlog events of all slaves in the cluster. Using SQL commands only, we must ensure that all slaves end up starting from the same state. |
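As a rough illustration of "most up-to-date", here is a minimal Python sketch that elects a promotion candidate by comparing MariaDB GTID positions (format `domain-server_id-sequence`). The slave names and positions are hypothetical sample data; a real implementation would read `gtid_current_pos` from each node and must handle diverged histories.

```python
# Sketch: elect the most up-to-date slave from MariaDB GTID positions.
# Assumes a single linear history; node names are illustrative only.

def parse_gtid_pos(pos):
    """Parse a gtid position string like '0-1-42,1-2-7' into {domain: seq}."""
    state = {}
    for gtid in pos.split(","):
        domain, _server_id, seq = (int(p) for p in gtid.split("-"))
        state[domain] = seq
    return state

def most_up_to_date(slaves):
    """Pick the slave with the highest total sequence count across domains.
    slaves: {name: gtid_pos_string}."""
    def total(pos):
        return sum(parse_gtid_pos(pos).values())
    return max(slaves, key=lambda name: total(slaves[name]))

slaves = {"slaveA": "0-1-100", "slaveB": "0-1-97", "slaveC": "0-1-103"}
print(most_up_to_date(slaves))  # slaveC: highest sequence number
```

This is only the election step; the actual task also covers repointing the remaining slaves, sketched further below in the comments.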
| Comments |
| Comment by Sergei Golubchik [ 2015-02-18 ] |
|
could you elaborate on this, please? |
| Comment by VAROQUI Stephane [ 2015-02-18 ] |
|
We need to implement a way to make each cluster node topology-aware.
Nodes in a cluster can be instrumented in various ways, from plugins to external tools, so a system table is probably the best approach here, together with a cluster manager plugin API that can provide the status of the nodes.
When the topology in the system table changes, we need to reconfigure replication automatically. This requires finding the oldest GTID in the cluster, fetching all following GTIDs from that node, instrumenting the new master defined in the new topology, and waiting until that master already has the oldest GTID.
I propose to start the task with a SQL command that implements replication failover based on the system table (extending the existing "server" table with a cluster name or HA group, to mimic the Fabric concept, plus additional per-node status properties). Cluster manager plugins or external tools are in charge of populating that table before the command is used. Some possible implementations:
- MHA, or the MariaDB replication tools from Guillaume, have to do all of this manually today: discover the replication topology, find the most up-to-date slave, and wait until each slave catches up from the promoted master.
- MaxScale can populate such a table based on its monitoring plugin, and later trigger failover by invoking the command.
A first cluster manager plugin can be demonstrated on 3 nodes using the Spider storage engine.
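The proposed command's steps can be sketched as follows. This is a minimal Python sketch, not the actual implementation: the topology-table schema (`name`, `group`, `status`, `gtid_pos`) and node names are hypothetical, a single replication domain is assumed, and the `CHANGE MASTER TO` statements are emitted as strings rather than executed.

```python
# Sketch of the proposed failover command, driven by a topology table.
# Schema and node names are assumptions for illustration only.

def plan_failover(topology, ha_group):
    """topology: list of dicts with 'name', 'group', 'status', 'gtid_pos'.
    Returns (new_master_name, {slave_name: sql}) for the given HA group."""
    nodes = [n for n in topology
             if n["group"] == ha_group and n["status"] == "alive"]
    # Elect the surviving node with the most advanced GTID as new master.
    new_master = max(nodes, key=lambda n: int(n["gtid_pos"].split("-")[-1]))
    plan = {}
    for n in nodes:
        if n is new_master:
            continue
        # Each remaining slave repoints to the elected master using GTID.
        plan[n["name"]] = (
            f"CHANGE MASTER TO MASTER_HOST='{new_master['name']}', "
            "MASTER_USE_GTID=slave_pos; START SLAVE;"
        )
    return new_master["name"], plan

topology = [
    {"name": "db1", "group": "g1", "status": "failed", "gtid_pos": "0-1-500"},
    {"name": "db2", "group": "g1", "status": "alive",  "gtid_pos": "0-1-498"},
    {"name": "db3", "group": "g1", "status": "alive",  "gtid_pos": "0-1-495"},
]
master, plan = plan_failover(topology, "g1")
print(master)        # db2: most advanced GTID among alive nodes
print(sorted(plan))  # ['db3']
```

A real version would also wait for each slave to confirm it has caught up to the elected master's position before completing.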
We can later improve this with a rollback-from-GTID feature that would roll back the transactions following a given GTID by reversing the binlog row events based on the before image. This would enable reintroducing the old master, and copying all rolled-back events to a bin-log.lost file. |
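The reversal idea can be illustrated with a short Python sketch. The event shapes here are hypothetical dicts, not the real binary-log format: the point is only that each row-event type has a compensating event (INSERT becomes DELETE, DELETE becomes a re-INSERT of the before image, UPDATE swaps its before/after images), applied in reverse order.

```python
# Sketch of rollback-from-GTID: compensate row-based binlog events using
# their before/after images. Event dicts are illustrative, not real events.

def reverse_event(event):
    """Produce the compensating event for one row-based binlog event."""
    kind = event["type"]
    if kind == "WRITE_ROWS":    # INSERT -> DELETE of the same row
        return {"type": "DELETE_ROWS", "row": event["row"]}
    if kind == "DELETE_ROWS":   # DELETE -> re-INSERT the before image
        return {"type": "WRITE_ROWS", "row": event["row"]}
    if kind == "UPDATE_ROWS":   # UPDATE -> swap before and after images
        return {"type": "UPDATE_ROWS",
                "before": event["after"], "after": event["before"]}
    raise ValueError(f"cannot reverse event type {kind}")

def rollback(events):
    """Reverse the events in reverse order; the originals would then be
    archived to a 'lost events' log for the reintroduced old master."""
    return [reverse_event(e) for e in reversed(events)]

log = [
    {"type": "WRITE_ROWS", "row": {"id": 1}},
    {"type": "UPDATE_ROWS",
     "before": {"id": 1, "v": 0}, "after": {"id": 1, "v": 9}},
]
undo = rollback(log)
print(undo[0]["after"])  # {'id': 1, 'v': 0} -- the update is reversed first
print(undo[1]["type"])   # DELETE_ROWS -- the insert is undone last
```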