[MDEV-7924] "START SLAVE UNTIL TIMESTAMP = <timestamp>" wanted Created: 2015-04-06  Updated: 2023-12-01

Status: Open
Project: MariaDB Server
Component/s: Replication
Fix Version/s: None

Type: Task Priority: Minor
Reporter: Igor Pashev Assignee: Unassigned
Resolution: Unresolved Votes: 5
Labels: beginner-friendly, start_slave_until


 Description   

Please, could support for "START SLAVE UNTIL TIMESTAMP = <timestamp>" be implemented? It would be useful for backup purposes when a few slaves (of different masters) have to be in sync. Currently this is only possible by searching the binlogs for the given timestamp and setting "UNTIL MASTER_LOG_FILE = 'log_name', MASTER_LOG_POS = log_pos".

"START SLAVE UNTIL TIMESTAMP = <timestamp>" would greatly simplify this and also allow stopping in the future when a binlog entry with a greater timestamp arrives.



 Comments   
Comment by Daniel Black [ 2015-04-07 ]

Using global transaction IDs (https://mariadb.com/kb/en/mariadb/global-transaction-id/) in MariaDB 10+, I'd be looking at recording the GTID as part of the backup. Then recovery is set global gtid_slave_pos=...; CHANGE MASTER TO master_use_gtid = slave_pos; start slave; regardless of server.
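Spelled out as statements, that recovery sequence is (the GTID value is a placeholder for whatever was recorded at backup time):

```sql
SET GLOBAL gtid_slave_pos = '0-1-100';        -- GTID recorded with the backup (placeholder)
CHANGE MASTER TO master_use_gtid = slave_pos; -- resume from that GTID, not file/pos
START SLAVE;
```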

Timestamp is quite ambiguous when multiple transactions can happen in a millisecond.

I totally agree that juggling log file/pos between servers is a pain, which I suspect was one of the great motivators for the GTID implementation.

Comment by Igor Pashev [ 2015-04-07 ]

> Timestamp is quite ambiguous when multiple transactions can happen in a millisecond.

It should not matter when we use it as a cutoff: if (timestamp > stop_timestamp) stop;
(Also, "STOP SLAVE AFTER <timestamp>" could be useful.)

I'm asking for no new features, just for a convenient way to do what is already possible.

The point of this feature is the ability to make dumps of two or more different servers for a given point in time.
We could stop the masters at the same time and then make the dumps, but stopping the masters is not an option. Thus we use slaves
and stop replication at the same point in time, but that time has to be taken from the binlogs because of possible slave lag.
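The cutoff rule described above can be sketched as follows. This is not MariaDB internals, just an illustration of why same-second events are unambiguous for a cutoff; the function and event names are invented, and timestamps are plain Unix seconds:

```python
def apply_until_timestamp(events, stop_timestamp):
    """Apply events in order; stop before the first event whose
    timestamp is greater than stop_timestamp."""
    applied = []
    for ts, stmt in events:
        if ts > stop_timestamp:
            break  # cutoff: everything at or before stop_timestamp is applied
        applied.append(stmt)
    return applied

events = [
    (100, "INSERT a"),
    (100, "INSERT b"),  # same-second events are no problem for a cutoff
    (101, "INSERT c"),
    (105, "INSERT d"),
]
print(apply_until_timestamp(events, 101))
```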

Comment by Andre Hilgers [ 2020-02-12 ]

Point-In-Time Recovery needs time.

  • restore the full backup
  • apply binary logs up to the point in time
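A common way to do the second step with stock tools is mysqlbinlog's --stop-datetime option; the file names and datetime below are placeholders, and this is only a sketch of the manual procedure the feature would replace:

```sh
# Replay binlogs up to (but not including) events at or after the given time
mysqlbinlog --stop-datetime="2020-02-12 00:00:00" \
    mysql-bin.000042 mysql-bin.000043 | mysql
```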

The idea is to set up a replication server to be prepared for requests from development teams.
Easy delivery of test data is possible when we break replication at a point in time and offer the secondary server to development.

For example:
Every 4 weeks we have to prepare 20 test servers with test data from different database servers.
The data of each system has to be synchronized to one point in time.
We have 24 hours to copy data to the test servers with backup/recovery methods.
In my opinion it would be easier to set up the 20 test servers as replication clients over the 4-week period and stop replication (at a point in time) when new test data is needed.

Comment by Oli Sennhauser [ 2023-11-30 ]

Wouldn't the label beginner-friendly be appropriate as well?

Generated at Thu Feb 08 07:23:18 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.