The parallel applier guarantees that transactions are committed in a fixed, predefined order (the same as on the master). If trx1 must be committed before trx2, but the parallel applier executes them concurrently and trx2 happens to block trx1, then the applier aborts trx2, allows trx1 to finish, and then re-executes trx2.
This does not work if trx1 is an XA transaction. It becomes persistent on XA PREPARE, so it is XA PREPARE, not XA COMMIT, that must happen before trx2. But XA PREPARE does not release all locks, so XA PREPARE is no guarantee that a conflicting trx2 will be able to continue.
How can this be fixed?
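For reference, a minimal sketch of the XA lifecycle in question (hypothetical xid 'x1' and table t1): the transaction becomes persistent at XA PREPARE, which is also when it is binlogged, yet its locks are held until XA COMMIT.

    XA START 'x1';
    INSERT INTO t1 VALUES (1);   -- acquires row/gap locks
    XA END 'x1';
    XA PREPARE 'x1';             -- trx becomes persistent (and is binlogged) here
    -- ... arbitrarily later, possibly from another connection after a restart:
    XA COMMIT 'x1';              -- locks are only fully released here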
Issue Links:
is caused by MDEV-742 (LP:803649) - XA recovery failed on client disconnection
Kristian Nielsen added a comment:
First, I don't understand why XA PREPARE cannot be rolled back? The whole
point of XA PREPARE is to leave the transaction in a state where it can both
be rolled back and retried. Why not do an XA ROLLBACK in case of conflict
and then re-try?
Is it because the global transaction id is persisted also after rollback,
and a new XA START with the same id will fail? Though that doesn't seem
possible, as it would require being able to look up all ids forever in the
server. Even if this is the case, it can be solved by allocating a new
replication-specific transaction id name in the slave applier (ISTR LOAD
FILE is handled similarly to avoid file name conflict). This could also help
identify slave XA transactions left in the PREPARED state and roll them back
at server startup.
As far as I can see, there should be no problem with rolling back and
re-trying an XA PREPAREd transaction; in fact, the XA system seems to
guarantee that this is possible.
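A minimal sketch of the rollback-and-retry path suggested here (hypothetical xid 'x1'; assuming, as argued above, that the xid is reusable after XA ROLLBACK):

    XA ROLLBACK 'x1';   -- abort the conflicting PREPAREd attempt
    -- let the earlier conflicting transaction commit first, then replay the
    -- event group from the relay log:
    XA START 'x1';
    INSERT INTO t1 VALUES (1);
    XA END 'x1';
    XA PREPARE 'x1';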
Second, why binlog XA PREPARE at all? XA is a mechanism to ensure consistent
commit between multiple transactional systems, one of them being the original
MariaDB master server. Replication slave servers are not involved in this in
any way. Normal transactions are not replicated until and unless they commit
on the master; why should XA transactions work differently? Maybe this is
the real bug here?
Replicating XA PREPARE leaves a prepared transaction active on the slave,
with all the complexity that this incurs - and it leaves dangling locks on
the slave, potentially for a long time, if the user's XA COMMIT is delayed
for some reason. It would be much preferable to keep the binlog cache on the
master across XA PREPARE, and binlog it only at XA COMMIT time - just like
other transactions. It doesn't even have to be binlogged as an XA
transaction; a normal transaction is fine.
This does require persisting the binlog cache to preserve an XA PREPAREd
transaction across a master restart; that can be done in a system InnoDB
table. This is some work, but relatively straightforward, and surely simpler
than trying to implement sparse relay logs on the slave.
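A hedged sketch of what such a system table might look like (the table and column names here are invented for illustration, not an actual MariaDB schema):

    CREATE TABLE mysql.xa_prepared_binlog_cache (
      xid        VARBINARY(140) NOT NULL PRIMARY KEY, -- serialized X/Open XID
      cache_data LONGBLOB NOT NULL                    -- saved binlog trx cache
    ) ENGINE=InnoDB;
    -- On XA PREPARE: store the row as part of the transaction being prepared.
    -- On XA COMMIT: write cache_data to the binlog as a normal transaction,
    -- then delete the row. On crash recovery: reload rows into binlog caches.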
Third, the XA COMMIT (and XA ROLLBACK) event groups must be marked non-trans
in the binlog (they are currently marked as "trans"). Unlike XA PREPARE,
these cannot be rolled back, and they also cannot be safely applied in
parallel with earlier transactions (in particular their own XA PREPARE event
group). This seems clearly a bug (with a trivial fix). When XA COMMIT and XA
ROLLBACK are not marked transactional, parallel replication will wait for
all prior commits to complete before executing them.
The bug description mentions that "it can be purged from relay log after
that". I don't see why this is the case? The inuse_relaylog mechanism exists
to ensure that relay logs are kept as long as needed by parallel replication
retry. I also don't see how relay logs would become sparse. It doesn't seem
justifiable to introduce all this complexity with sparse relay logs and
re-fetching from the master in the SQL thread just for the sake of a
little-used feature like XA - nor does it seem necessary, as per the above
suggestions?
Hope this helps,
Kristian.
Sergei Golubchik added a comment:
Technically, a transaction after XA PREPARE can be rolled back, and should be. This MDEV is about doing exactly that.
But currently an "XA transaction" in the relay log is a sequence of events from XA START to XA PREPARE. This is what the master writes to the binlog: the binlog trx_cache in THD is flushed to the binlog on XA PREPARE. So, while a transaction in the SQL worker thread can be rolled back after XA PREPARE, from the relay log's point of view the transaction was done; the relay log forgets about it and it cannot be re-applied. This is what this MDEV wants to fix — to preserve XA transactions over XA PREPARE up to XA COMMIT or XA ROLLBACK. Somehow.
"Why binlog XA PREPARE at all" — this was MDEV-742, a way to make the binlog 2PC-capable, so that the binlog would be able to prepare a transaction (make it persistent), and later commit it or roll it back.
Kristian Nielsen added a comment:
I still think the inuse_relaylog mechanism should ensure that the relay log does not go away too early.
When the slave worker executes XA PREPARE, this should participate in binlog group commit (it writes to the slave's binlog, right?), which includes doing a wait_for_prior_commit().
Until wait_for_prior_commit() completes, the transaction can be safely rolled back: the XA PREPARE is not yet persisted and the relay log is not yet deleted.
After wait_for_prior_commit(), there are no earlier commits to conflict with, so optimistic parallel replication will not need to roll back and retry the XA PREPARE.
I think if this doesn't work, there is a (simple) bug that should be fixed. Or is there something I'm missing?
Is there a test case that shows the problem?
To the second point: the XA PREPARE is written to the binlog for the sake of 2PC persistency, ok. But this doesn't explain why it is replicated to the slaves. There seems to be no benefit to having the transaction in the XA PREPAREd state on the slave (and a number of disadvantages).
Save the binlog cache in memory after XA PREPARE on the master. Then at XA COMMIT, write it to the binlog for the slave to replicate (as a normal START TRANSACTION/COMMIT). In case of a crash, load it into the binlog cache again during crash recovery.
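A concrete illustration of this scheme (hypothetical xid 'x1' and table t1):

    -- The user executes on the master:
    XA START 'x1'; INSERT INTO t1 VALUES (1); XA END 'x1'; XA PREPARE 'x1';
    -- ... later:
    XA COMMIT 'x1';
    -- but the only thing binlogged (at XA COMMIT time) and replicated would be:
    BEGIN;
    INSERT INTO t1 VALUES (1);
    COMMIT;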
The XA PREPARE binlog event group can be there, just don't send it to the slave, or send it but ignore it on the slave.
It seems needlessly complicated to have replicated transactions in XA PREPAREd state on the slave. For example, what happens if the slave is switched to a different master while an XA PREPAREd transaction is in the middle of being replicated?
Hope this helps,
Kristian.
"trx2 is an XA transaction that managed to do XA PREPARE before trx1 is blocked"
This shouldn't be possible. The XA PREPARE is similar to a commit/XID event: it completes the event group. So it must not complete until all prior transactions have committed (i.e. it must do wait_for_prior_commit() before completing).
If it does not currently do that, then maybe that is the real bug here?
If XA PREPARE writes to the binlog (as I would think), there is an optimized code path that does the wait_for_prior_commit() implicitly as part of binlog group commit.
A less optimal way is to just run wait_for_prior_commit() at the start of XA PREPARE.
I don't see a reason why the normal wait_for_prior_commit mechanism, which ensures correct parallel replication order and rollback/retry from relay log files, should not also work for XA PREPARE.
Andrei Elkin added a comment:
knielsen, let me reply to some of your questions (I am not yet regular at kbd).
> This shouldn't be possible. The XA PREPARE is ...
> ... then maybe that is the real bug here?
Indeed: MDEV-28709, MDEV-26682. The latter aimed to circumvent asymmetric locking behaviour in InnoDB: a GAP lock and an Insert Intention lock conflict when the II lock is granted first and the GAP lock is requested last. Combine that with the fact that master and slave can execute the lock requests of two trx:s in different orders.
The idea of getting rid of GAP locks, which are useless and harmful for replication, must be the right way to go, but this ticket is rather cautious about the implementation of that objective.
So if for any reason a prepared XA transaction blocks a later (in binlog order) trx, we'd remove it temporarily out of the way.
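A hedged sketch of the scenario this describes (hypothetical table t1; the exact InnoDB conflict rules are version-dependent):

    -- Assume t1(a INT PRIMARY KEY, b INT) contains rows a=10 and a=20.
    -- Worker applying trx2 (later in binlog order, scheduled first):
    XA START 'x2';
    INSERT INTO t1 VALUES (15, 0); -- granted an Insert Intention lock in gap (10,20)
    XA END 'x2';
    XA PREPARE 'x2';               -- the lock survives the PREPARE
    -- Worker applying trx1 (earlier in binlog order, arrives second):
    UPDATE t1 SET b = 1 WHERE a BETWEEN 10 AND 20;
    -- requests GAP/next-key locks last, so per the asymmetry described above
    -- it can block behind the already-PREPAREd 'x2'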
Also, on the reason for MDEV-742's replicating of the XA transaction in the prepared state: that is to address failover, so that the slave is promotable to master at any time without losing a prepared trx that the user already sees as prepared.
(I'll respond to the other questions a bit later)