Details
Type: New Feature
Status: Closed
Priority: Critical
Resolution: Fixed
Description
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects
are frequent, this can be a performance regression, as in earlier versions
slaves could immediately seek to the supplied file offset.
To fix this problem, the binlog files should be indexed, allowing any GTID
in a binlog file to be located quickly. As an added bonus, this would make it
possible to detect when old-style replication tries to connect at an
incorrect file offset (e.g. in the middle of an event), avoiding sending
potentially corrupted events.
The index could be an extra file master-bin.000001.idx written in parallel
with the binlog file. There is no need to flush or sync this file at every
binlog write, as it can easily be recovered after a crash, or the code can
fall back to scanning the corresponding binlog file.
The index would be page-based, allowing a connecting slave to do binary search
to find the desired location in the binlog to start replication. The file
would contain an ordered sequence of GTID binlog states with their
corresponding start offset into the associated binlog file.
The connecting slave would then binary-search for its start position in the index, and in this way jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously.
A B-tree-like structure is more efficient for disk-based searching than binary search.
Since we write out the index records in order, we can build the B-tree
append-only into the index file. At the end, the root node will be the last
page in the file.
There is no need to include every position in the index. We can write, say,
one in every 10 transactions into the index; a connecting slave will then
look up the closest matching position in the index and at most need to skip
over 10 transactions in the binlog. In general, we can keep track of the
amount of binlog written and index written, and write only a fraction of
transactions into the index to ensure that the ratio of index size to binlog
size does not exceed some appropriate number (e.g. 2%).
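As an illustration of this ratio-based throttling, here is a minimal sketch (not the actual server code; the struct and function names are hypothetical) of a per-transaction decision that keeps the index below a target fraction of the binlog size:
{code:cpp}
#include <cstdint>

// Hypothetical sketch: decide whether to add an index record for the
// transaction just written, keeping index_bytes / binlog_bytes below a
// target ratio (e.g. 2%). Not the actual MariaDB implementation.
struct SparseIndexThrottle {
  uint64_t binlog_bytes_written = 0;    // bytes written to the binlog file
  uint64_t index_bytes_written  = 0;    // bytes written to the .idx file
  double   max_ratio            = 0.02; // index may be at most 2% of binlog

  // Called after writing one transaction of `event_bytes` to the binlog;
  // `record_bytes` is the estimated size of the index record we would write.
  bool should_write_index_record(uint64_t event_bytes, uint64_t record_bytes) {
    binlog_bytes_written += event_bytes;
    // Write the record only if doing so keeps the index within the budget.
    if (static_cast<double>(index_bytes_written + record_bytes) <=
        max_ratio * static_cast<double>(binlog_bytes_written)) {
      index_bytes_written += record_bytes;
      return true;
    }
    return false;  // skip; a reader will scan forward from an earlier record
  }
};

int main() {
  SparseIndexThrottle t;
  // Example: 1000 transactions of ~4 KiB each, index records of ~80 bytes.
  int written = 0;
  for (int i = 0; i < 1000; i++)
    written += t.should_write_index_record(4096, 80) ? 1 : 0;
  return written > 0 ? 0 : 1;
}
{code}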
To further reduce the index size, it could be "compressed" by omitting from each entry those (domain_id, server_id) combinations that did not change since the previous entry. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within any one file.
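To make this concrete, here is a minimal sketch of such delta-compression over an in-memory GTID state (illustrative only; the real on-disk record format is not defined here):
{code:cpp}
#include <cstdint>
#include <map>
#include <utility>

// Illustrative sketch of delta-compressing GTID binlog states for index
// records: only (domain_id, server_id) pairs whose seq_no changed since the
// previous record are kept. Not the server's actual record format.
using DomServ   = std::pair<uint32_t, uint32_t>;  // (domain_id, server_id)
using GtidState = std::map<DomServ, uint64_t>;    // -> seq_no

// Compute the delta record: entries in `cur` that are new or changed
// relative to `prev`.
GtidState delta_compress(const GtidState &prev, const GtidState &cur) {
  GtidState delta;
  for (const auto &e : cur) {
    auto it = prev.find(e.first);
    if (it == prev.end() || it->second != e.second)
      delta.insert(e);
  }
  return delta;
}

// Reconstruct the full state by applying a delta on top of the previous
// full state (used when reading the index back).
GtidState apply_delta(GtidState base, const GtidState &delta) {
  for (const auto &e : delta)
    base[e.first] = e.second;
  return base;
}

int main() {
  GtidState s0 = {{{0, 1}, 100}, {{0, 2}, 50}, {{1, 3}, 7}};
  GtidState s1 = s0;
  s1[{0, 1}] = 101;                       // only domain 0, server 1 advanced
  GtidState d  = delta_compress(s0, s1);  // contains just {(0,1) -> 101}
  GtidState r  = apply_delta(s0, d);
  return (r == s1 && d.size() == 1) ? 0 : 1;
}
{code}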
A work-in-progress high-level design description:
This implements an on-disk index for each binlog file to speed up access to
the binlog at a specific offset or GTID position. This is primarily used when
a slave connects to the master, but also by a user calling BINLOG_GTID_POS().

A connecting slave first scans the binlog files to find the last one with an
initial GTID_LIST event that lies before the starting GTID position. Then a
sequential scan of the binlog file is done until the requested GTID position
is found.

The binlog index conceptually extends this using index records corresponding
to different offsets within one binlog file. Each record functions as if it
were the initial GTID_LIST event of a new binlog file, allowing the
sequential scan to start from the corresponding position. With sufficiently
many index records, the scan will be fast.

The code has a performance-critical "sync" path, which is called while
holding LOCK_log whenever a new GTID is added to a binlog file, and a less
critical "async" path, which runs in the binlog background thread and does
most of the processing. The "sync" and "async" paths each run
single-threaded, but can execute in parallel with each other.

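As a rough illustration of this split (not the actual server code; apart from LOCK_log, all names here are hypothetical), the "sync" path can be limited to appending a small pending record to a mutex-protected queue, while the background thread drains the queue and performs the index writes:
{code:cpp}
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <mutex>
#include <string>
#include <thread>

// Illustrative sketch of the sync/async split; all names except LOCK_log are
// hypothetical, and this is not the actual server code.
struct PendingRecord {
  std::string gtid;      // textual GTID, e.g. "0-1-100"
  uint64_t    offset;    // binlog offset the GTID was written at
};

struct BinlogIndexWriter {
  std::mutex mtx;                       // protects the queue ("hot" index access)
  std::condition_variable cv;
  std::deque<PendingRecord> queue;
  bool shutdown = false;

  // "Sync" path: called while holding LOCK_log; must be cheap, so it only
  // queues the record and wakes the background thread.
  void record_gtid(const std::string &gtid, uint64_t offset) {
    { std::lock_guard<std::mutex> lk(mtx); queue.push_back({gtid, offset}); }
    cv.notify_one();
  }

  // "Async" path: runs in the binlog background thread, single-threaded,
  // and does the actual index building and (unsynced) file writes.
  void background_loop() {
    std::unique_lock<std::mutex> lk(mtx);
    while (!shutdown || !queue.empty()) {
      if (queue.empty()) { cv.wait(lk); continue; }
      PendingRecord rec = queue.front();
      queue.pop_front();
      lk.unlock();
      process(rec);          // append to the B+-tree / index file
      lk.lock();
    }
  }

  void process(const PendingRecord &) { /* build index pages here */ }

  void stop() {
    { std::lock_guard<std::mutex> lk(mtx); shutdown = true; }
    cv.notify_one();
  }
};

int main() {
  BinlogIndexWriter w;
  std::thread bg(&BinlogIndexWriter::background_loop, &w);
  w.record_gtid("0-1-100", 4096);   // would be called under LOCK_log
  w.stop();
  bg.join();
  return 0;
}
{code}
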
The index file is written incrementally together with the binlog file.
However, no fsync() of the index file is needed while writing. A partially
written index left behind by a crashing server is re-written during binlog
recovery. A reader is allowed to use the index while it is being written
(for the "hot" binlog file); such access is protected by a mutex.

In case of a lost or corrupt index, a fallback to a full sequential scan is
done (so performance is affected, but not correctness).

The index file is structured like a B+-tree. The index is append-only, so it
also resembles a log-structured merge-tree, but with no merging of levels
needed as it covers a single fixed-size binlog file. This makes the building
of the tree relatively simple.

Keys in the tree consist of a GTID state (corresponding to a GTID_LIST
event) and the associated binlog file offset. All keys (except the first key
in each level of the tree) are delta-compressed to save space, holding only
the (domain_id, server_id) pairs that differ from the previous record.

The file is page-based. The first page contains the leftmost leaf node, and
the root node is at the end of the file. An incompletely written index file
can be detected by the last page in the file not being a root node page.
Nodes in the B+-tree usually fit in one page, but a node can be split across
multiple pages if GTID states are very large.

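Since the page format is still a ToDo below, the following sketch only illustrates the recovery-time completeness check under an assumed minimal page layout (a flags byte at offset 0 with a root-page bit); the actual format will differ:
{code:cpp}
#include <cstdint>
#include <cstdio>
#include <vector>

// Illustrative only: assumes a page format where byte 0 of each page holds
// flag bits, with PAGE_FLAG_ROOT set on the root node page. The actual
// index page format is defined by the server, not here.
static const size_t  PAGE_SIZE      = 4096;
static const uint8_t PAGE_FLAG_ROOT = 0x01;

// Return true if the index file looks complete: non-empty, a whole number
// of pages, and the last page is a root node page. Otherwise the caller
// should ignore/rebuild the index and fall back to scanning the binlog.
bool index_file_is_complete(const char *path) {
  FILE *f = std::fopen(path, "rb");
  if (!f)
    return false;
  std::fseek(f, 0, SEEK_END);
  long size = std::ftell(f);
  if (size <= 0 || size % (long)PAGE_SIZE != 0) { std::fclose(f); return false; }
  std::vector<unsigned char> page(PAGE_SIZE);
  std::fseek(f, size - (long)PAGE_SIZE, SEEK_SET);
  bool ok = std::fread(page.data(), 1, PAGE_SIZE, f) == PAGE_SIZE &&
            (page[0] & PAGE_FLAG_ROOT);
  std::fclose(f);
  return ok;
}

int main(int argc, char **argv) {
  if (argc < 2)
    return 2;
  return index_file_is_complete(argv[1]) ? 0 : 1;
}
{code}
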
ToDo: Document the page / index file format.

Here is an example index file in schematic form:

S0 D1 D2 D3 D4 D5 D6 D7 D8 D9 D10 D11
A(S0 D1 D2) B(D3 D4 D5) C(D6 D7 D8) E(D9 D10) F(D11)
D(A <S3> B <D4+D5+D6> C) G(E <D10+D11> F)
H(D <S9> G)

S0 is the full initial GTID state at the start of the file.
D1-D11 are the differential GTID states in the binlog file; e.g. they could
  be the individual GTIDs in the binlog file if a record is written for
  each GTID.
S3 is the full GTID state corresponding to D3, i.e. S3=S0+D1+D2+D3.
A(), B(), ..., H() are the nodes in the binlog index. H is the root.
A(S0 D1 D2) is a leaf node containing records S0, D1, and D2.
G(E <D10+D11> F) is an interior node with key <D10+D11> and child pointers
  to E and F.

To find e.g. S4, we start from the root H. S4<S9, so we follow the left
child pointer to D. S4>S3 but S4<S6 (the full state represented by the key
<D4+D5+D6>), so we follow the middle child pointer to leaf node B.

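The lookup just described amounts to: at each interior node, follow the child to the right of the last key that is <= the wanted position; in the leaf, take the last record <= the wanted position. A minimal in-memory sketch, using plain integers as stand-ins for GTID states (S0=0, D1 advances the state to 1, and so on), might look like this (illustrative only, not the server code):
{code:cpp}
#include <cstddef>
#include <vector>

// Simplified, in-memory illustration of the index lookup. Keys are plain
// integers standing in for GTID states; the real index compares full GTID
// binlog states and returns a binlog file offset.
struct Node {
  bool isLeaf;
  std::vector<long> keys;        // separator keys (interior) or record keys (leaf)
  std::vector<Node*> children;   // children.size() == keys.size() + 1 for interior
};

// Descend from the root, at each interior node following the child to the
// right of the last separator key <= target; in the leaf, return the last
// record key <= target (the place to start the sequential binlog scan).
long lookup(const Node *node, long target) {
  while (!node->isLeaf) {
    size_t i = 0;
    while (i < node->keys.size() && node->keys[i] <= target)
      i++;
    node = node->children[i];
  }
  long best = node->keys.front();
  for (long k : node->keys)
    if (k <= target)
      best = k;
  return best;
}

int main() {
  // The example tree above: leaves A..F, interiors D and G, root H.
  Node A{true, {0, 1, 2}, {}}, B{true, {3, 4, 5}, {}}, C{true, {6, 7, 8}, {}};
  Node E{true, {9, 10}, {}}, F{true, {11}, {}};
  Node D{false, {3, 6}, {&A, &B, &C}}, G{false, {11}, {&E, &F}};
  Node H{false, {9}, {&D, &G}};
  return lookup(&H, 4) == 4 ? 0 : 1;   // S4 is found in leaf B
}
{code}
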
Here are the operations that occur while writing the example index file:

S0    A(A) R(A,S0)
D1    R(A,D1)
D2    R(A,D2)
D3    W(A) I(D) P(D,A) A(B) R(B,D3) R(D,S3)
D4    R(B,D4)
D5    R(B,D5)
D6    W(B) P(D,B) A(C) R(C,D6) R(D,D4+D5+D6)
D7    R(C,D7)
D8    R(C,D8)
D9    W(C) P(D,C) A(E) R(E,D9) W(D) I(H) P(H,D) R(H,S9)
D10   R(E,D10)
D11   W(E) I(G) P(G,E) A(F) R(F,S10) R(G,D10+D11)
<EOF> W(F) P(G,F) W(G) P(H,G) W(H)

A(x)   -> allocate leaf node x.
R(x,k) -> insert an index record containing key k in node x.
W(x)   -> write node x to the index file.
I(y)   -> allocate interior node y.
P(y,x) -> insert a child pointer to x in node y.

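To show how such a tree can be built strictly append-only (each node written exactly once, root last), here is a simplified in-memory sketch over integer keys with a fixed fan-out of 3. It mirrors the A/R/W/I/P operations above, but assigns node names in allocation order (n1, n2, ...) and does not reproduce the example's early closing of the last leaves, so the exact trace differs; it is illustrative only, not the server implementation.
{code:cpp}
#include <cstdio>
#include <string>
#include <vector>

// Simplified append-only B+-tree builder over integer keys (stand-ins for
// GTID states), fan-out 3. Illustrative only.
struct BNode {
  std::string name;
  bool leaf;
  std::vector<long> keys;   // records (leaf) or separator keys (interior)
  int nchildren;            // child pointers added so far (interior)
};

struct Builder {
  static const int FANOUT = 3;    // max records per leaf / children per interior
  std::vector<BNode> open;        // open[0] = current leaf, open[1] = its parent, ...
  std::vector<std::string> file;  // node names, in on-disk (append) order
  int counter = 0;

  BNode make(bool leaf) { return BNode{"n" + std::to_string(++counter), leaf, {}, 0}; }

  // Write the open node at `level` to the file and hook it into its parent;
  // if the parent becomes full, close the parent too and promote the key.
  void close(size_t level, long next_key) {
    file.push_back(open[level].name);              // W(x): append node to the file
    if (level + 1 == open.size())
      open.push_back(make(false));                 // I(y): first interior at this level
    open[level + 1].nchildren++;                   // P(parent, x): hook child into parent
    if (open[level + 1].nchildren == FANOUT) {
      close(level + 1, next_key);                  // parent is full: key moves one level up
      open[level + 1] = make(false);               // I(y): fresh interior for later children
    } else {
      open[level + 1].keys.push_back(next_key);    // R(parent, k): separator before next child
    }
  }

  void add(long key) {                             // one index record per GTID
    if (open.empty())
      open.push_back(make(true));                  // A(x): the very first leaf
    if (open[0].keys.size() == (size_t)FANOUT) {
      close(0, key);                               // current leaf is full
      open[0] = make(true);                        // A(x): start a new leaf
    }
    open[0].keys.push_back(key);                   // R(x, k): add the index record
  }

  void finish() {                                  // <EOF>: flush all open nodes, root last
    for (size_t level = 0; level < open.size(); level++) {
      file.push_back(open[level].name);            // W(x)
      if (level + 1 < open.size())
        open[level + 1].nchildren++;               // P(parent, x); separators already added
    }
  }
};

int main() {
  Builder b;
  for (long k = 0; k <= 11; k++)                   // S0, D1, ..., D11
    b.add(k);
  b.finish();
  for (const std::string &n : b.file)
    std::printf("%s ", n.c_str());
  std::printf("\n");   // prints: n1 n3 n4 n2 n7 n6 n5  (n5 is the root, written last)
  return 0;
}
{code}
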
Attachments
Issue Links
- blocks: MDEV-25764 SELECT binlog_gtid_pos takes a very long time when binary log is encrypted (Closed)
- causes: MDEV-36424 binlog_encryption.encrypted_master_switch_to_unencrypted_gtid Fails in BB 11.4+ (Closed)
- includes: MDEV-25392 IO thread reporting yes despite failing to fetch GTID (Open)
- relates to: MDEV-33426 Assertion `status_var.local_memory_used == 0 || !debug_assert_on_not_freed_memory' failed in THD::~THD from handle_slave_sql on slave upon INSERT to TEMPORARY Aria table, Memory not freed: -50616 (In Testing)
Activity
Field | Original Value | New Value |
---|---|---|
Issue Type | Bug [ 1 ] | Task [ 3 ] |
Workflow | defaullt [ 28815 ] | MariaDB v2 [ 44152 ] |
Labels | gtid | gsoc15 gtid |
Description |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The better way would be to rewrite the binlog to use a proper page-based log file. This would allow to greatly improve fsync performance of the binlog, as just a single page write would be sufficient (current implementation needs for the OS kernel to sync first the new data, and after that the new metadata (new length of the file)). |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). |
Description |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Priority | Major [ 3 ] | Critical [ 2 ] |
Fix Version/s | 10.2 [ 14601 ] |
Workflow | MariaDB v2 [ 44152 ] | MariaDB v3 [ 66330 ] |
Assignee | Kristian Nielsen [ knielsen ] |
Fix Version/s | 10.0 [ 16000 ] | |
Fix Version/s | 10.2 [ 14601 ] |
Status | Open [ 1 ] | In Progress [ 3 ] |
Labels | gsoc15 gtid | gtid |
Fix Version/s | 10.0 [ 16000 ] |
Link | This issue includes MDEV-25392 [ MDEV-25392 ] |
Assignee | Kristian Nielsen [ knielsen ] | Brandon Nesterenko [ JIRAUSER48702 ] |
Assignee | Brandon Nesterenko [ JIRAUSER48702 ] | Sachin Setiya [ sachin.setiya.007 ] |
Assignee | Sachin Setiya [ sachin.setiya.007 ] | Ralf Gebhardt [ ralf.gebhardt@mariadb.com ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31210 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31216 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31220 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31227 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31234 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31243 ] |
Status | In Progress [ 3 ] | Stalled [ 10000 ] |
Assignee | Ralf Gebhardt [ ralf.gebhardt@mariadb.com ] | Sachin Setiya [ sachin.setiya.007 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31306 ] |
Link | This issue blocks |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31312 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31324 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31327 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31335 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31367 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31392 ] |
Labels | gtid | ServiceNow gtid |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31414 ] |
Labels | ServiceNow gtid | 76qDvLB8Gju6Hs7nk3VY3EX42G795W5z gtid |
Link | This issue is part of |
Link | This issue is part of |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31440 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31469 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31505 ] |
Fix Version/s | 10.7 [ 24805 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31529 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31558 ] |
Labels | 76qDvLB8Gju6Hs7nk3VY3EX42G795W5z gtid | gtid |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31594 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31606 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31611 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31638 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31704 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 31714 ] |
Priority | Critical [ 2 ] | Major [ 3 ] |
Assignee | Sachin Setiya [ sachin.setiya.007 ] | Andrei Elkin [ elkin ] |
Remote Link | This issue links to "Page (Confluence)" [ 31731 ] |
Status | Stalled [ 10000 ] | In Progress [ 3 ] |
Remote Link | This issue links to "Page (Confluence)" [ 31753 ] |
Remote Link | This issue links to "Page (Confluence)" [ 31802 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32010 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32103 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32205 ] |
Fix Version/s | 10.7 [ 24805 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32215 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32231 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32237 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32247 ] |
Remote Link | This issue links to "Page (Confluence)" [ 32306 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32325 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32415 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32509 ] |
Assignee | Andrei Elkin [ elkin ] | Brandon Nesterenko [ JIRAUSER48702 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32628 ] |
Workflow | MariaDB v3 [ 66330 ] | MariaDB v4 [ 131811 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32637 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32660 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32675 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32689 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32722 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32743 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32903 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 32925 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33019 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33117 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33207 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33238 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33267 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33299 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33418 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33602 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33628 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33654 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33723 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33731 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33731 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33735 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33802 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33818 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33902 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 33916 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34002 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34018 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34042 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34051 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34080 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34103 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34116 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34224 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34233 ] |
Description |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old\-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master\-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page\-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Description |
Current GTID code needs to scan one binlog file from the beginning when a slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old\-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master\-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page\-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34248 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34262 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34312 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34436 ] |
Status | In Progress [ 3 ] | Stalled [ 10000 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34444 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34461 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34484 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34503 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34514 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34528 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34538 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34600 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34611 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34622 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34640 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34655 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34675 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34705 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34713 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34809 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34825 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34843 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34913 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34929 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34953 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34970 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 34998 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35114 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35231 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35262 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35282 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35303 ] |
Priority | Major [ 3 ] | Critical [ 2 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35315 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35406 ] |
Assignee | Brandon Nesterenko [ JIRAUSER48702 ] | Aleksey Midenkov [ midenok ] |
Component/s | Replication [ 10100 ] |
Fix Version/s | 10.11 [ 27614 ] |
Status | Stalled [ 10000 ] | In Progress [ 3 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35429 ] |
Fix Version/s | 11.3 [ 28565 ] | |
Fix Version/s | 10.11 [ 27614 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35446 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35459 ] |
Assignee | Aleksey Midenkov [ midenok ] | Andrei Elkin [ elkin ] |
Status | In Progress [ 3 ] | In Review [ 10002 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35472 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35497 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35607 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35627 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35805 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35816 ] |
Description |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. -The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously.- A B-tree like structure is more efficient for disk-based searching than binary search. Since we write out the index record in order, we can actually build a B-tree append-only to the index file. At the end, the root node will be the last page in the file. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Description |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. -The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously.- A B-tree like structure is more efficient for disk-based searching than binary search. Since we write out the index record in order, we can actually build a B-tree append-only to the index file. At the end, the root node will be the last page in the file. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. -The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously.- A B-tree like structure is more efficient for disk-based searching than binary search. Since we write out the index record in order, we can actually build a B-tree append-only to the index file. At the end, the root node will be the last page in the file. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35906 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35916 ] |
Description |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. -The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously.- A B-tree like structure is more efficient for disk-based searching than binary search. Since we write out the index record in order, we can actually build a B-tree append-only to the index file. At the end, the root node will be the last page in the file. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. |
Current GTID code needs to scan one binlog file from the beginning when a
slave connects, to find the place to start replicating. If slave reconnects are frequent, this can be a performance regression, as in earlier versions slaves could immediate seek to the supplied file offset. To fix this problem, indexing should be done on the binlog files, allowing to quickly locate any GTID in the binlog file. As an added bonus, this would allow to detect if old-style replication tries to connect at an incorrect file offset (eg. in the middle of an event), avoiding sending potentially corrupted events. The index could be an extra file master-bin.000001.idx written in parallel with the binlog file. There is no need to flush or sync the file at every binlog write, as it can be recovered easily in case of crash or code can fall back to scanning the corresponding binlog file. The index would be page-based, allowing a connecting slave to do binary search to find the desired location in the binlog to start replication. The file would contain an ordered sequence of GTID binlog states with their corresponding start offset into the associated binlog file. -The connecting slave would then binary\-search for its start position in the index, and this way be able to jump directly to the right start position in the binlog file, without needing to scan that binlog from the start. This would greatly improve performance when many slaves connect simultaneously.- A B-tree like structure is more efficient for disk-based searching than binary search. Since we write out the index record in order, we can actually build a B-tree append-only to the index file. At the end, the root node will be the last page in the file. There is no need to include every position in the index. We can write say one in every 10 transactions into the index; a connecting slave will then lookup the closest matching position in the index and at most need to skip over 10 transactions in the binlog. In general, we can keep track of the size of binlog written and index written, and write only a fraction of transactions into the index to ensure that the ratio of index size to binlog size does not exceed some appropriate number (eg. 2% or something). To further reduce the index size, it could be "compressed" by omitting from entries those (domain_id, server_id) combinations that do not change. Typically, there can be many distinct such values in a binlog file, but only a few of them are likely to change within one given file. A work-in-progress high-level design description: {noformat} This implements an on-disk index for each binlog file to speed up access to the binlog at a specific offset or GTID position. This is primarily used when a slave connects to the master, but also by user calling BINLOG_GTID_POS(). A connecting slave first scans the binlog files to find the last one with an initial GTID_LIST event that lies before the starting GTID position. Then a sequential scan of the binlog file is done until the requested GTID position is found. The binlog index conceptually extends this using index records corresponding to different offset within one binlog file. Each record functions as if it was the initial GTID_LIST event of a new binlog file, allowing the sequential scan to start from the corresponding position. By having sufficiently many index records, the scan will be fast. The code has a performance-critical "sync" path which is called while holding LOCK_log whenever a new GTID is added to a binlog file. And a less critical "async" path which runs in the binlog background thread and does most of the processing. 
The "sync" and "async" paths each run single threaded, but can execute in parallel with each other. The index file is written incrementally together with the binlog file. However there is no fsync()'s of the index file needed while writing. A partially written index left by a crashing server will be re-written during binlog recovery. A reader is allowed to use the index as it is begin written (for the "hot" binlog file); such access is protected by mutex. In case of lost or corrupt index, fallback to full sequential scan is done (so performance will be affected but not correct functionality). The index file is structured like a B+-tree. The index is append-only, so also resembles a log-structured merge-tree, but with no merging of levels needed as it covers a single fixed-size binlog file. This makes the building of the tree relatively simple. Keys in the tree consist of a GTID state (corresponding to a GTID_LIST event) and the associated binlog file offset. All keys (except the first key in each level of the tree) are delta-compressed to save space, holding only the (domain_id, server_id) pairs that differ from the previous record. The file is page-based. The first page contains the leftmost leaf node, and the root node is at the end of the file. An incompletely written index file can be detected by the last page in the file not being a root node page. Nodes in the B+-tree usually fit in one page, but a node can be split across multiple pages if GTID states are very large. ToDo: Document the page /indexfile format. Here is an example index file in schematic form: S0 D1 D2 D3 D4 D5 D6 D7 D8 D9 D10 D11 A(S0 D1 D2) B(D3 D4 D5) C(D6 D7 D8) E(D9 D10) F(D11) D(A <S3> B <D4+D5+D6> C) G(E <D10+D11> F) H(D <S9> G) S0 is the full initial GTID state at the start of the file. D1-D11 are the differential GTID states in the binlog file; eg. they could be the individual GTIDs in the binlog file if a record is writte for each GTID. S3 is the full GTID state corresponding to D3, ie. S3=S0+D1+D2+D3. A(), B(), ..., H() are the nodes in the binlog index. H is the root. A(S0 D1 D2) is a leaf node containing records S0, D1, and D2. G(E <D10+D11> F) is an interior node with key <D10+D11> and child pointers to E and F. To find eg. S4, we start from the root H. S4<S9, so we follow the left child pointer to D. S4>S3, so we follow the child pointer to leaf node C. Here are the operations that occur while writing the example index file: S0 A(A) R(A,S0) D1 R(A,D1) D2 R(A,D2) D3 W(A) I(D) P(D,A) A(B) R(B,D3) R(D,S3) D4 R(A,D4) D5 R(A,D5) D6 W(B) P(D,B) A(C) R(C,D6) R(D,D4+D5+D6) D7 R(C,D7) D8 R(C,D8) D9 W(C) P(D,C) A(E) R(E,D9) W(D) I(H) P(H,D) R(H,S9) D10 R(E,D10) D11 W(E) I(G) P(G,E) A(F) R(F,S10) R(G,D10+D11) <EOF> W(F) P(G,F) W(G) P(H,G) W(H) A(x) -> allocate leaf node x. R(x,k) -> insert an index record containing key k in node x. W(x) -> write node x to the index file. I(y) -> allocate interior node y. P(y,x) -> insert a child pointer to y in x. {noformat} |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35933 ] |
Remote Link | This issue links to "Mailing list discussion of the design (Web Link)" [ 35943 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35948 ] |
Fix Version/s | 11.4 [ 29301 ] | |
Fix Version/s | 11.3 [ 28565 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35972 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36010 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36107 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36138 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36178 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36215 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36231 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36245 ] |
Assignee | Andrei Elkin [ elkin ] | Kristian Nielsen [ knielsen ] |
Status | In Review [ 10002 ] | Stalled [ 10000 ] |
Status | Stalled [ 10000 ] | In Progress [ 3 ] |
Status | In Progress [ 3 ] | Stalled [ 10000 ] |
Status | Stalled [ 10000 ] | In Testing [ 10301 ] |
Assignee | Kristian Nielsen [ knielsen ] | Roel Van de Paar [ roel ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36261 ] |
Issue Type | Task [ 3 ] | New Feature [ 2 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36310 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36321 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36331 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36339 ] |
Link | This issue causes TODO-4495 [ TODO-4495 ] |
Link | This issue causes TODO-4495 [ TODO-4495 ] |
Link | This issue is part of TODO-4495 [ TODO-4495 ] |
Labels | gtid | Preview_11.4 gtid |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36348 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36352 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36362 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36378 ] |
Assignee | Roel Van de Paar [ roel ] | Kristian Nielsen [ knielsen ] |
Status | In Testing [ 10301 ] | Stalled [ 10000 ] |
Fix Version/s | 11.4.1 [ 29523 ] | |
Fix Version/s | 11.4 [ 29301 ] | |
Resolution | Fixed [ 1 ] | |
Status | Stalled [ 10000 ] | Closed [ 6 ] |
Comment | [ Not sure why this was merged: no OK to push was provided. As there are still errors to be evaluated and this will likely not make it into 11.4 due to a backlog of issues and tasks. ] |
Assignee | Kristian Nielsen [ knielsen ] | Roel Van de Paar [ roel ] |
Resolution | Fixed [ 1 ] | |
Status | Closed [ 6 ] | Stalled [ 10000 ] |
Status | Stalled [ 10000 ] | In Progress [ 3 ] |
Status | In Progress [ 3 ] | In Testing [ 10301 ] |
Fix Version/s | 11.4.1 [ 29523 ] |
Comment | [ > Kristian Nielsen Please revert https://github.com/MariaDB/server/commit/d039346a7acac7c72f264377a8cd6b0273c548df Why? ] |
Fix Version/s | 11.4.1 [ 29523 ] | |
Assignee | Roel Van de Paar [ roel ] | Kristian Nielsen [ knielsen ] |
Resolution | Fixed [ 1 ] | |
Status | In Testing [ 10301 ] | Closed [ 6 ] |
Assignee | Kristian Nielsen [ knielsen ] | Roel Van de Paar [ roel ] |
Resolution | Fixed [ 1 ] | |
Status | Closed [ 6 ] | Stalled [ 10000 ] |
Status | Stalled [ 10000 ] | In Progress [ 3 ] |
Status | In Progress [ 3 ] | In Testing [ 10301 ] |
Fix Version/s | 11.5 [ 29506 ] | |
Fix Version/s | 11.4.1 [ 29523 ] |
Fix Version/s | 11.4.1 [ 29523 ] | |
Fix Version/s | 11.5 [ 29506 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36605 ] |
Link | This issue relates to MDEV-33426 [ MDEV-33426 ] |
issue.field.resolutiondate | 2024-02-12 21:38:06.0 | 2024-02-12 21:38:05.733 |
Resolution | Fixed [ 1 ] | |
Status | In Testing [ 10301 ] | Closed [ 6 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36620 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 35906 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36703 ] |
Remote Link | This issue links to "Page (MariaDB Confluence)" [ 36703 ] |
Zendesk Related Tickets | 201658 174686 | |
Zendesk active tickets | 201658 |
Link | This issue causes |