[MDEV-6795] More efficient transaction retry in parallel replication Created: 2014-09-26 Updated: 2015-03-19 |
|
| Status: | Open |
| Project: | MariaDB Server |
| Component/s: | None |
| Fix Version/s: | None |
| Type: | Task | Priority: | Minor |
| Reporter: | Kristian Nielsen | Assignee: | Kristian Nielsen |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | parallelslave, replication | ||
| Description |
|
Currently, when parallel replication needs to retry a transaction due to a deadlock or other temporary error, the events of the transaction are re-read from the relay log.

This is necessary in the general case, where a transaction may be huge and cannot be assumed to fit in memory. But in the common case the transaction is small, and its events are still available in memory, in the worker thread's queue of events to apply.

This list is only freed in batches for efficiency, so in most cases the events will still be around when a retry is needed. It would be more efficient to reuse those in-memory events for the retry instead of re-reading them from the relay log.

Transaction retry efficiency becomes somewhat more important with optimistic parallel replication, where retries happen much more frequently.

Say, the worker thread, when freeing queued events, will keep around the last transaction's worth of events in case a retry is needed; only when the needed events are no longer cached (or the transaction is too big to cache) would the relay log be re-read.

The main problem with this approach is testing. The code that reads events from the relay log must be kept for the general case, but it would then be exercised only rarely, so targeted test cases are needed to cover both the in-memory and the relay-log retry paths. |
| Comments |
| Comment by Kristian Nielsen [ 2014-10-02 ] |
|
One thing to look out for with this is what to do with the Log_event objects stored in the work queue. Currently, they are deleted immediately after being first applied, in delete_or_keep_event_post_apply(). That deletion will have to be postponed, in case an event is re-used for retry. But then the question is whether all the code in the various do_apply_event() implementations in log_event.cc leaves the event object in the same state as it was originally. It seems quite possible that in some cases an object is left in a different state, so that a retry that reuses the event object could give subtly different results. Again, testing will be a challenge. |