bitnami/mariadb-galera:11.0.4-debian-11-r0 in k8s cluster
Description
After upgrading initially from 10.6 -> 10.11 -> 11.0.3 and now to 11.0.4, we see that 2 of the members start up without any issues, but the 3rd member (db-0) fails to start up due to an old sequence number (117376) that I believe is being passed on from the donor.
We are unable to find the old seqno anywhere except in the `ibdata1` file of the donor (by searching for its hex representation), but we are not sure how to get rid of this old seqno.
What we did try is starting db-0 with a fresh volume by removing its K8s PVC, so that it does an SST, but that fails as well.
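The "searching for the hex of it" step can be sketched as follows. This is a convenience sketch, not an official procedure: the on-disk byte width and order of the stored seqno are assumptions, and the `ibdata1` path depends on the container layout.

```shell
# Search a raw InnoDB system tablespace for the hex encoding of a stale seqno.
# 117376 is the seqno from this report; try other widths/orders if no match.
seqno=117376
hex=$(printf '%016x' "$seqno")   # 64-bit big-endian representation (assumption)
echo "searching for 0x$hex"
file=/var/lib/mysql/ibdata1      # path inside the donor container (assumption)
if [ -f "$file" ]; then
  # Dump the file as a continuous hex string and count occurrences.
  xxd -p "$file" | tr -d '\n' | grep -c "$hex"
fi
```

A match only tells you the bytes are present somewhere in the tablespace; it does not identify which page holds them.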
Logs from member db-0:
```
[Note] WSREP: SST received
[Note] WSREP: Server status change joiner -> initializing
[Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
```
Ranjan Ghosh added a comment:
I have the same problem, and I didn't even update from a very old version. Somehow I cannot start the second node anymore, even if I wipe /var/lib/mysql completely on the joining node. The rsync SST finishes, but then I get: "SST script passed bogus GTID" and the node fails. How can I reset this and get out of this situation?
Ihtisham ul Haq added a comment:
What worked was bootstrapping the cluster, and then a manual SST for the affected node (probably the SST on startup would also work, but we didn't try).
We have other databases which have this issue as well (with 11.0.3), and we thought upgrading to 11.0.4 might fix it due to this change. But the issue can be seen even with the change, and we don't want to upgrade/restart those due to the risk of running into the same issue.
Ranjan Ghosh added a comment:
@ihti: I haven't been able to reproduce it, but your tip to bootstrap the whole cluster was also what worked for me. I needed to shut down all nodes and start booting them up one by one from scratch. It's quite surprising, because it's the first case I can think of that could not be solved by deleting /var/lib/mysql on the second node while the first node seems to be running without any problems. Or put differently: you cannot somehow "see" that the first node is in a weird state that doesn't allow the second node to come up. Everything seems normal. The first node is running. It accepts queries etc. And then you try to start the second node and it just won't work. Even if you delete /var/lib/mysql completely, it is impossible to bring up the second node. I tried it multiple times. Only after restarting/bootstrapping the first node do things start to work again.
Googling didn't immediately yield an answer for me, so I hope people will now find this when looking for:
SST script passed bogus GTID
Ihtisham ul Haq added a comment:
Steps to reproduce the issue:
1. Run a mariadb-galera cluster with 3 peers on version 10.11.6. We use the docker image bitnami/mariadb-galera:10.11.6-debian-11-r0:
```
myname="$(hostname -f)"
export MARIADB_GALERA_NODE_NAME="$myname"
export MARIADB_EXTRA_FLAGS="--wsrep-sst-receive-address=$myname"
export MARIADB_GALERA_CLUSTER_ADDRESS="gcomm://$LIST_OF_PEERS"
/opt/bitnami/scripts/mariadb-galera/entrypoint.sh /opt/bitnami/scripts/mariadb-galera/run.sh
```
2. Create some test entries in the database.
3. Stop one peer and upgrade it to 11.0.4 (e.g. docker image bitnami/mariadb-galera:11.0.4-debian-11-r1).
4. Create some more entries in the database.
5. Start the updated peer; it will fail.
Max Lamprecht added a comment (edited):
We have narrowed it down to this commit: https://github.com/MariaDB/server/commit/44dce3b2077e64a1efc607668d0d7b42a7c4ee78
If we set innodb_undo_tablespaces to 0, the start works.
Max Lamprecht added a comment:
We figured out that we are not able to change the innodb_undo_tablespaces setting online to 3 with the mariabackup wsrep_sst method, because mariabackup does not perform a clean shutdown of the InnoDB donor.
We used wsrep_sst_method=mysqldump as a workaround to migrate our Galera clusters to the new default.
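For reference, the workaround boils down to a small config change on the nodes. This is a sketch only: the credentials are illustrative, and mysqldump SST generally needs a privileged SST user reachable on the donor.

```ini
# Workaround sketch: switch the SST method for the migration window.
[mysqld]
wsrep_sst_method=mysqldump
# user:password pair for the SST connection (illustrative values)
wsrep_sst_auth=sst_user:sst_password
```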
Marko Mäkelä added a comment:
Thank you for the detailed bug report, including clear steps to reproduce.
MDEV-29986 indeed changed the default to innodb_undo_tablespaces=3. This was possible thanks to MDEV-19229, which allows the parameter to be changed at server startup, provided that the undo logs are logically empty. Usually this is guaranteed by a prior shutdown with innodb_fast_shutdown=0.
It is unclear to me why the following error would be caused by a change of innodb_undo_tablespaces:
```
[ERROR] WSREP: sst_received failed: SST script passed bogus GTID: 6aa53efc-db72-11ec-880f-a282ce494905:117376. Preceding view GTID: 6aa53efc-db72-11ec-880f-a282ce494905:128934
```
InnoDB does not store any GTID information in its internal data structures, such as undo logs. For normal replication there is a table mysql.gtid_slave_pos, which is in InnoDB format by default. I am not familiar with the GTID and how it interacts with Galera.
For Galera, we store a wsrep_checkpoint in a rollback segment header. The storage format was last changed in MariaDB Server 10.3, related to MDEV-15132 and MDEV-15158. In MDEV-29986, in the function srv_undo_tablespaces_reinit(), we made an effort to preserve this information. Maybe there is some flaw in this logic?
I'm assigning this to the Galera developers for root cause analysis.
Jan Lindström added a comment:
marko I tested with a 2-node cluster so that node1 uses 10.11 and node2 uses 11.3. I can reproduce the issue, i.e. node2 does not join the cluster. I can't reproduce it if wsrep_sst_method=rsync or if innodb-undo-tablespaces=3 is used on 10.11. Therefore, either mariabackup or the server behaves incorrectly on upgrade 10.11 -> 11.3, and it seems to be related to the fact that innodb-undo-tablespaces=3 is the default on 11.3.
From the logs I see that the joiner sends the correct GTID to the donor and this information is sent correctly back to the joiner. However, for some reason, either on mariabackup --prepare or on the next server startup the wrong GTID is read from storage. From extended logs I can see that the donor has the correct GTID stored at the point when mariabackup --backup is done. I'm not familiar with how the creation of undo log tablespaces works on upgrade, but based on my analysis something goes wrong there.
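When chasing which side holds the wrong position, the joiner's last saved state can be read from its grastate.dat (a standard Galera file; the path and the sample contents below are illustrative, using the cluster UUID from this report):

```shell
# Extract uuid and seqno from a Galera grastate.dat (key: value format).
# A seqno of -1 means the node did not shut down cleanly.
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    6aa53efc-db72-11ec-880f-a282ce494905
seqno:   -1
safe_to_bootstrap: 0
EOF
state=$(awk '$1 == "uuid:" { u = $2 } $1 == "seqno:" { s = $2 } END { print u, s }' /tmp/grastate.dat)
echo "$state"
```

Comparing this against the donor's reported position narrows down whether the stale seqno comes from the joiner's state file or from the donor's storage.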
Jan Lindström added a comment:
marko rr is not working on my kernel (or, more precisely, perf is not). I have provided the full datadir; let me see if I can provide the same before and after --prepare.
Marko Mäkelä added a comment:
janlindstrom, which difference should we be looking at? To diagnose this, I would ideally use an rr replay trace of the wsrep_sst_method=mariabackup donor server as well as a copy of the files created by mariadb-backup --backup, and a detailed description of the fault of the restored backup. It could be useful to have an rr replay trace of the snapshot recipient as well.
Marko Mäkelä added a comment:
janlindstrom, as far as I can tell, node2_before_prepare.tgz contains a backup of a data directory that had been created with the non-default setting innodb_undo_tablespaces=0. My invocation of applying the log to it:
```
_RR_TRACE_DIR=rr rr record \
mariadb-backup --prepare --target-dir node2/.sst
```
finishes as follows:
```
2024-03-04 11:42:57 0 [Note] InnoDB: End of log at LSN=76798
2024-03-04 11:42:57 0 [Note] InnoDB: To recover: 158 pages
2024-03-04 11:42:57 0 [Note] InnoDB: Last binlog file './binlog.000001', position 1506
[00] 2024-03-04 11:42:57 Last binlog file ./binlog.000001, position 1506

[00] 2024-03-04 11:42:57 mariabackup: Recovered WSREP position: e8b3b373-d798-11ee-a970-8f678cb12371:8 domain_id: 0
[00] 2024-03-04 11:42:57 completed OK!
```
The WSREP position information was fetched by trx_rseg_read_wsrep_checkpoint(), and the extra blank line is part of the msg() call in xb_write_galera_info(). As far as I understand, no information was lost at this point yet. How should I invoke the server on this data directory, and where and how should I check for the unexpected result?
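For readers comparing positions across several such logs, the uuid:seqno pair in the "Recovered WSREP position" line can be pulled out with a one-liner. This is a convenience sketch, not part of any MariaDB tooling:

```shell
# Extract "uuid:seqno" from a mariabackup "Recovered WSREP position" line.
line='[00] 2024-03-04 11:42:57 mariabackup: Recovered WSREP position: e8b3b373-d798-11ee-a970-8f678cb12371:8 domain_id: 0'
pos=$(printf '%s\n' "$line" |
  sed -n 's/.*Recovered WSREP position: \([0-9a-f-]*:[-0-9]*\).*/\1/p')
uuid=${pos%:*}    # strip the trailing :seqno
seqno=${pos##*:}  # keep only the seqno
echo "$uuid $seqno"
```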
Marko Mäkelä added a comment:
I see that trx_sys_t::reset_page(), which initializes the page of the first rollback segment in the system tablespace, fails to write back the information that trx_rseg_read_wsrep_checkpoint() is looking for.
Marko Mäkelä added a comment:
Sorry, I realized that srv_undo_tablespaces_reinit(), which was added in MDEV-19229, does make an effort to preserve the additional Galera metadata. It is part of the first mini-transaction that will reinitialize the TRX_SYS page and all rollback segment header pages for innodb_undo_tablespaces=0:
```
#ifdef WITH_WSREP
    if (!trx_sys.recovered_wsrep_xid.is_null())
      trx_rseg_update_wsrep_checkpoint(
        first_rseg_hdr, &trx_sys.recovered_wsrep_xid, &mtr);
#endif /* WITH_WSREP */
  }
  dict_hdr->page.fix();
  mtr.commit();
```
A subsequent mini-transaction will initialize the undo log tablespaces.
While reviewing the logic, I noticed that we probably fail to allocate all undo tablespace IDs:
```
  if (fil_assign_new_space_id(&srv_undo_space_id_start))
    mtr.write<4>(*dict_hdr, DICT_HDR + DICT_HDR_MAX_SPACE_ID
                 + dict_hdr->page.frame, srv_undo_space_id_start);
```
This is only advancing the persistent sequence by 1, instead of advancing it by srv_undo_tablespaces.
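The bookkeeping flaw described above (persisting the counter after only the first allocation) can be modeled in isolation. The names below are hypothetical and this is not InnoDB code; it only illustrates the off-by-(N-1) effect:

```shell
# Model: a persisted max-space-id counter that is advanced once,
# although three space IDs are actually handed out.
persisted_max_id=10   # stands in for the value stored in DICT_HDR
in_memory_max_id=10
undo_tablespaces=3

assign_new_space_id() { in_memory_max_id=$((in_memory_max_id + 1)); }

assign_new_space_id                  # first ID is assigned...
persisted_max_id=$in_memory_max_id   # ...and persisted: sequence advances by 1
for i in $(seq 2 "$undo_tablespaces"); do
  assign_new_space_id                # further IDs are never persisted
done

echo "persisted=$persisted_max_id used=$in_memory_max_id"
```

After a restart, the IDs above the persisted value could be handed out a second time, which is why advancing by the full srv_undo_tablespaces matters.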
Marko Mäkelä added a comment:
The bug occurs due to a failure to read the Galera metadata on startup, before the MDEV-19229 logic for changing innodb_undo_tablespaces (MariaDB Server 10.11) is executed.
It is clear that this needs to be fixed in 10.11. It would also be reasonable to fix this in earlier versions. Do we understand the exact impact of this outside MDEV-19229? Could this explain some Galera SST failures or inconsistencies in earlier versions?
Marko Mäkelä added a comment:
I would like to see a regression test that fails before the code change and passes with the change. The current one was passing for me even without the code change.
Jan Lindström added a comment (edited):
marko It seems I do not know how to create one. If I manually start node1 with innodb-undo-tablespaces=0, create the sst user, a table and some data, and finally start node2 with innodb-undo-tablespaces=3, it does fail without the code change. If I use mtr, even if I empty the node2 datadir completely, I do not see the failure.
Marko Mäkelä added a comment:
janlindstrom, to review this, I need the exact steps to reproduce the error. Please update https://github.com/MariaDB/server/pull/3122 with the details.
Jan Lindström added a comment:
marko I added the steps to the pull request. My 3rd test case, using only one node and wsrep-recover, did not fail when the number of undo tablespaces was increased; I do not know why.
Marko Mäkelä added a comment:
I tried to run the steps to reproduce this, but the steps are incomplete. First, I had to create a node2 subdirectory before starting the server using a modified copy of node2.cnf, to avoid the following:
```
2024-04-04  9:53:02 0 [Warning] Can't create test file '/dev/shm/galeratest/node2/HOSTNAME.lower-test' (Errcode: 2 "No such file or directory")
/dev/shm/11.0/sql/mariadbd: Can't change dir to '/dev/shm/galeratest/node2/' (Errcode: 2 "No such file or directory")
```
Then, the code that the patch would cover would not run because of the following error:
```
2024-04-04  9:57:15 0 [Note] WSREP: Joiner monitor thread started to monitor
sh: 1: wsrep_sst_mariabackup: not found
```
A third attempt was not much better:
```
PATH=/dev/shm/11.0/scripts:/dev/shm/11.0/extra/mariabackup:"$PATH" /dev/shm/11.0/sql/mariadbd --defaults-file=/dev/shm/node2.cnf
```
```
2024-04-04 10:01:04 0 [Warning] WSREP: 1.0 (HOSTNAME): State transfer to 0.0 (HOSTNAME) failed: -2 (No such file or directory)
2024-04-04 10:01:04 0 [ERROR] WSREP: ./gcs/src/gcs_group.cpp:gcs_group_handle_join_msg():1172: Will never receive state. Need to abort.
```
Can you provide a self-contained test case?
Jan Lindström added a comment:
marko I updated my instructions, but they depend on your setup, so I can't give a test case that will work on your laptop. Your problem most likely is that the server can't find the necessary binaries, i.e. you did not have 10.11 installed.
Marko Mäkelä added a comment:
I do not (want to) have a VM or Docker setup where I could install ephemeral packages without a risk of breaking my environment. I was thinking whether this could be reproduced simply by something like the following:
```
diff --git a/mysql-test/suite/galera/t/galera_sst_mariabackup.cnf b/mysql-test/suite/galera/t/galera_sst_mariabackup.cnf
index 857a4101406..43638ace9b3 100644
--- a/mysql-test/suite/galera/t/galera_sst_mariabackup.cnf
+++ b/mysql-test/suite/galera/t/galera_sst_mariabackup.cnf
@@ -7,9 +7,13 @@ wsrep_debug=1

 [mysqld.1]
 wsrep_provider_options='base_port=@mysqld.1.#galera_port;gcache.size=1;pc.ignore_sb=true'
+innodb_fast_shutdown=0
+innodb_undo_tablespaces=0

 [mysqld.2]
 wsrep_provider_options='base_port=@mysqld.2.#galera_port;gcache.size=1;pc.ignore_sb=true'
+innodb_fast_shutdown=0
+innodb_undo_tablespaces=3

 [sst]
 transferfmt=@ENV.MTR_GALERA_TFMT
```
With the above modification to the test, I successfully reproduced an error on 10.11 af4df93cf855228f094f2a19f7dd0bdc005035cf with the above test patch:
```
CURRENT_TEST: galera.galera_sst_mariabackup
mysqltest: In included file "./include/galera_wait_ready.inc":
included from /mariadb/10.11/mysql-test/suite/galera/include/start_mysqld.inc at line 16:
included from ./suite/galera/include/galera_st_kill_slave.inc at line 63:
included from /mariadb/10.11/mysql-test/suite/galera/t/galera_sst_mariabackup.test at line 15:
At line 28: "Server did not transition to READY state"
```
If I apply your patch, the test fails with just a cosmetic error, which I suppose could be fixed by adding an extra restart before the first work step. On 10.11 af4df93cf855228f094f2a19f7dd0bdc005035cf with the 2 patches:
```
galera.galera_sst_mariabackup 'debug,innodb' [ fail ]  Found warnings/errors in server log file!
Test ended at 2024-04-04 17:45:21
line
2024-04-04 17:44:40 0 [Warning] InnoDB: Cannot change innodb_undo_tablespaces=3 because previous shutdown was not with innodb_fast_shutdown=0
2024-04-04 17:45:15 0 [Warning] InnoDB: Cannot change innodb_undo_tablespaces=3 because previous shutdown was not with innodb_fast_shutdown=0
^ Found warnings in /dev/shm/10.11/mysql-test/var/log/mysqld.2.err
```
Please update the pull request with this idea and rebase it to the latest 10.11, so that we can see it in action on our CI systems.
Jan Lindström added a comment:
marko You are correct that the test does not pass with those settings. However, it does not reproduce the original problem, i.e. I can't find an error message similar to "[ERROR] WSREP: sst_received failed: SST script passed bogus GTID: 6aa53efc-db72-11ec-880f-a282ce494905:117376. Preceding view GTID: 6aa53efc-db72-11ec-880f-a282ce494905:128934". Using manual testing I could reproduce this error. In the last test, did you verify that InnoDB really did create 3 tablespaces?
Marko Mäkelä added a comment:
I thought that it was obvious that my suggested change to the existing test does cover the scenario. I reran it for you:
```
./mtr galera.galera_sst_mariabackup
grep 'InnoDB: Reinitializing innodb_undo_tablespaces' var/log/mysqld.*err
```
On 10.11 af4df93cf855228f094f2a19f7dd0bdc005035cf with the 2 patches:
```
var/log/mysqld.2.err:2024-04-05 16:19:49 0 [Note] InnoDB: Reinitializing innodb_undo_tablespaces= 3 from 0
var/log/mysqld.2.err:2024-04-05 16:20:24 0 [Note] InnoDB: Reinitializing innodb_undo_tablespaces= 3 from 0
```
If you want more information, inject an abort() somewhere in srv_undo_tablespaces_reinit(), or simply use rr record and set a breakpoint in rr replay.