Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 10.0.14
Description
I have reports from a user of parallel replication where we get the following in the error
log: "Commit failed due to failure of an earlier commit on which this one
depends". But no prior failure is seen.
Restarting the slave resumes at the affected position, again suggesting that
no prior transaction actually failed.
Thanks to good help from the user, I managed to track it down with custom
debug printfs in the production environment. Below is a test case that
reproduces the problem. The test case is rather complex, reflecting the
complicated nature of the underlying problem.
What happens in the problem case:
We have three transactions T1, T2, T3 running in parallel on the master and on
the slave.
T1 ran before T2 on the master (but they were able to group commit in parallel
there). But on the slave, T2 somehow ends up blocking T1 on an InnoDB
row lock (there are a few corner cases where this is possible; one such
pattern is sketched below).
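To illustrate one such pattern (a sketch only, using the same table and statements
as the test case further down, and reflecting my reading of why they can conflict):
two statements that touch different rows by primary key can still contend on locks
in a secondary index.

# Illustration only: same schema and statements as in the test case below.
CREATE TABLE t1 (a INT PRIMARY KEY, b INT, KEY b_idx(b)) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5,NULL), (6,6);

# T1 (first on the master): changes only the row with a=6, but must delete
# the entry (6,6) from b_idx and insert (NULL,6) near the start of the index.
UPDATE t1 SET b=NULL WHERE a=6;

# T2 (second on the master): touches different rows, but scans b_idx for
# b <= 3 and can take record/gap locks there that T1's index maintenance
# then has to wait for, if T2 happens to run first on the slave.
DELETE FROM t1 WHERE b <= 3;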
T2 blocking T1 on the slave is a deadlock (T1 waits for T2's row lock, while T2
must wait for T1 to commit first to preserve commit order), so T2 is killed and
marked for retry, so that T1 can proceed.
This kill of T2 happens after T2 has already started its commit and is waiting
for T1 in the function queue_for_group_commit(). T2 returns failure from
commit, and this causes the function ha_commit_trans() to signal a failure to
T3.
Later, the kill error is converted into a retry of T2, which then succeeds. But
at this point, T3 has already been incorrectly signalled that T2 failed. So we
end up with a mysterious message in the error log saying that T3 failed due to
the failure of an earlier commit.
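To summarise the chain of events, here is a minimal, simplified sketch of the error
propagation (illustrative code only, not the actual server implementation): each
transaction queued for group commit passes its own commit result on to the next one
in the queue, so the deadlock kill of T2 while it is queued reaches T3 as a commit
failure even though T2 will only be rolled back and retried.

#include <cstdio>

// Simplified, hypothetical sketch (not the real MariaDB code) of how a
// commit error travels down the parallel-replication commit queue.
struct Trx {
  const char *name;
  int wakeup_error;   // error signalled to us by the prior commit
  bool killed;        // a deadlock kill arrived while we were queued

  // Queue for group commit behind the prior transaction.  A kill that
  // arrives while we wait is reported as a commit failure, even though
  // the transaction will only be rolled back and retried.
  int queue_for_group_commit() { return killed ? 1 : wakeup_error; }

  // Commit; whatever we return is also signalled to the next transaction.
  int commit(Trx *next) {
    int err = queue_for_group_commit();
    if (next)
      next->wakeup_error = err;  // the bug: a kill-for-retry propagates to
                                 // the successor as if we had failed for good
    return err;
  }
};

int main() {
  Trx t1{"T1", 0, false}, t2{"T2", 0, false}, t3{"T3", 0, false};
  t2.killed = true;    // T2 is deadlock-killed while waiting for T1
  t1.commit(&t2);      // T1 commits fine
  t2.commit(&t3);      // T2 signals failure to T3 ...
  // ... even though T2 is subsequently rolled back, retried, and succeeds.
  if (t3.commit(nullptr) != 0)
    std::printf("%s: commit failed due to failure of an earlier commit\n",
                t3.name);
  return 0;
}

Any fix will presumably have to make the kill-for-retry case not count as a real
commit failure when waking up the following transactions in the queue.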
I will try to come up with a patch that fixes this bug.
mysql-test-run test case:
--source include/have_innodb.inc
--source include/have_debug.inc
--source include/have_debug_sync.inc
--let $rpl_topology=1->2
--source include/rpl_init.inc

--connection server_1

ALTER TABLE mysql.gtid_slave_pos ENGINE=InnoDB;
CREATE TABLE t1 (a INT PRIMARY KEY, b INT, KEY b_idx(b)) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1,NULL), (2,2), (3,NULL), (4,4), (5, NULL), (6, 6);
CREATE TABLE t2 (a INT PRIMARY KEY, b INT) ENGINE=InnoDB;

# Use a stored function to inject a debug_sync into the appropriate THD.
# The function does nothing on the master, and on the slave it injects the
# desired debug_sync action(s).
SET sql_log_bin=0;
--delimiter ||
CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
  RETURNS INT DETERMINISTIC
BEGIN
  RETURN x;
END
||
--delimiter ;
SET sql_log_bin=1;

SET @old_format= @@SESSION.binlog_format;
SET binlog_format='statement';
--save_master_pos

--connection server_2
--sync_with_master
--source include/stop_slave.inc
SET @old_parallel= @@GLOBAL.slave_parallel_threads;
SET GLOBAL slave_parallel_threads=10;
CHANGE MASTER TO master_use_gtid=no;
SET sql_log_bin=0;
--delimiter ||
CREATE FUNCTION foo(x INT, d1 VARCHAR(500), d2 VARCHAR(500))
  RETURNS INT DETERMINISTIC
BEGIN
  IF d1 != '' THEN
    SET debug_sync = d1;
  END IF;
  IF d2 != '' THEN
    SET debug_sync = d2;
  END IF;
  RETURN x;
END
||
--delimiter ;
SET sql_log_bin=1;

--connection server_1

# Create a group commit with three transactions T1, T2, T3.
# T2 will block T1 on the slave where we will make it run first, so it will be
# deadlock killed.
# The bug was that in this case, T3 was signalled to fail due to T2 failing,
# even though the retry of T2 was later successful.

--connect (con1,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
SET @old_format= @@SESSION.binlog_format;
SET binlog_format='statement';
BEGIN;
INSERT INTO t2 VALUES (10, foo(1, 'before_execute_sql_command WAIT_FOR t1_start', ''));
UPDATE t1 SET b=NULL WHERE a=6;
SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued1 WAIT_FOR master_cont1';
send COMMIT;

--connection server_1
SET debug_sync='now WAIT_FOR master_queued1';

--connect (con2,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
SET @old_format= @@SESSION.binlog_format;
SET binlog_format='statement';
BEGIN;
INSERT INTO t2 VALUES (20, foo(2, 'group_commit_waiting_for_prior SIGNAL t2_waiting', ''));
DELETE FROM t1 WHERE b <= 3;
SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued2';
send COMMIT;

--connection server_1
SET debug_sync='now WAIT_FOR master_queued2';

--connect (con3,127.0.0.1,root,,test,$SERVER_MYPORT_1,)
SET @old_format= @@SESSION.binlog_format;
SET binlog_format='statement';
BEGIN;
INSERT INTO t2 VALUES (30, foo(3, 'before_execute_sql_command WAIT_FOR t3_start', 'group_commit_waiting_for_prior SIGNAL t3_waiting'));
INSERT INTO t1 VALUES (7,7);
SET debug_sync='commit_after_release_LOCK_prepare_ordered SIGNAL master_queued3';
send COMMIT;

--connection server_1
SET debug_sync='now WAIT_FOR master_queued3';
SET debug_sync='now SIGNAL master_cont1';

--connection con1
REAP;
SET binlog_format=@old_format;
--connection con2
REAP;
SET binlog_format=@old_format;
--connection con3
REAP;
SET debug_sync='RESET';
SET binlog_format=@old_format;

--connection server_1
--save_master_pos
SHOW BINLOG EVENTS;

--connection server_2
let $old_retry= query_get_value(SHOW STATUS LIKE 'Slave_retried_transactions', Value, 1);
SET @old_dbug=@@GLOBAL.debug_dbug;
SET GLOBAL debug_dbug="+d,thd_need_ordering_with_force";
--source include/start_slave.inc
# First, wait for T2 to complete up to where it is waiting for T1 to group
# commit for both of them. This will set locks that will block T1, causing
# a deadlock kill and retry of T2. T1 and T3 are still blocked at the start
# of their respective SQL statements.
SET debug_sync='now WAIT_FOR t2_waiting';
# Now let T3 move on until the point where it is itself ready to commit.
SET debug_sync='now SIGNAL t3_start';
SET debug_sync='now WAIT_FOR t3_waiting';
# Now T2 and T3 are set up, so we can let T1 proceed.
SET debug_sync='now SIGNAL t1_start';
# Now we can wait for the slave to catch up.
# We should see T2 being deadlock killed and retried.
# The bug was that T2's deadlock kill would cause T3 to fail due to failure
# of an earlier commit. This is wrong, as T2 did not fail, it was only
# retried.
--sync_with_master
SET GLOBAL debug_dbug=@old_dbug;
SET debug_sync='RESET';
let $new_retry= query_get_value(SHOW STATUS LIKE 'Slave_retried_transactions', Value, 1);
--disable_query_log
eval SELECT $new_retry - $old_retry AS retries;
--enable_query_log

--connection server_1
SET binlog_format=@old_format;

# Clean up
--connection server_2
--source include/stop_slave.inc
SET GLOBAL slave_parallel_threads=@old_parallel;
SET debug_sync='RESET';
--source include/start_slave.inc

--connection server_1
DROP TABLE t1, t2;
DROP FUNCTION foo;
--source include/rpl_end.inc

The test case requires the following patch, which adds a DBUG injection point
(enabled from the test via SET GLOBAL debug_dbug="+d,thd_need_ordering_with_force"):
=== modified file 'sql/sql_class.cc'
--- sql/sql_class.cc	2014-10-23 13:04:03 +0000
+++ sql/sql_class.cc	2014-11-10 15:03:04 +0000
@@ -4360,6 +4360,7 @@ thd_need_ordering_with(const MYSQL_THD t
     return 1;
   if (!rgi->commit_id || rgi->commit_id != other_rgi->commit_id)
     return 1;
+  DBUG_EXECUTE_IF("thd_need_ordering_with_force", return 1;);
   /*
     These two threads are doing parallel replication within the same
     replication domain. Their commit order is already fixed, so we do not need