Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version: 10.6.7
Fix Version: None
Environment: CentOS Stream 8 x86_64
Description
SHOW SLAVE STATUS \G
        Master_Host: 172.31.31.10
        Master_User: orchestrator
        Master_Port: 3306
   Slave_IO_Running: No
  Slave_SQL_Running: Yes
      Last_IO_Errno: 1593
      Last_IO_Error: Fatal error: Failed to run 'after_read_event' hook
2022-04-11 18:00:11 83 [Note] Slave I/O thread: connected to master 'orchestrator@172.31.31.10:3306', replication starts at GTID position '1-1-338'
2022-04-11 18:00:11 83 [ERROR] Missing magic number for semi-sync packet, packet len: 53
2022-04-11 18:00:11 83 [ERROR] Slave I/O: Fatal error: Failed to run 'after_read_event' hook, Internal MariaDB error code: 1593
2022-04-11 18:00:11 83 [Note] Slave I/O thread exiting, read up to log 'mysql-bin-13306.000001', position 86652; GTID position 1-1-338, master 172.31.31.10:3306
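For context on the "Missing magic number" message: with semi-sync replication enabled, the master prepends a two-byte header to each binlog event (a magic byte, 0xEF in the MySQL/MariaDB semi-sync plugin, plus a flag saying whether the slave must ACK), and the slave's after_read_event hook strips it. A minimal Python sketch of that check — the names and exact error handling are mine, not MariaDB's actual identifiers:

```python
# Illustrative sketch only; constants follow the semi-sync plugin's wire
# format (magic byte 0xEF, ACK-needed flag 0x01), identifiers are invented.
SEMI_SYNC_MAGIC = 0xEF
FLAG_NEED_ACK = 0x01

def strip_semi_sync_header(packet: bytes):
    """Return (need_ack, event_bytes), or raise if the magic byte is missing."""
    if len(packet) < 2 or packet[0] != SEMI_SYNC_MAGIC:
        # This is the condition behind "Missing magic number for semi-sync
        # packet"; the I/O thread's after_read_event hook then fails.
        raise ValueError(
            f"Missing magic number for semi-sync packet, packet len: {len(packet)}"
        )
    return bool(packet[1] & FLAG_NEED_ACK), packet[2:]
```

If the master sends an event without this header while the slave still expects one (e.g. after a reconnect mid-handshake), the first payload byte is unlikely to be 0xEF and the check fails exactly as in the log above.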
I have a simple MariaDB replication setup: a master (172.31.31.10) and a slave (172.31.31.12).
I randomly drop ingress packets on the master and the slave with the following commands:

iptables -A INPUT -p tcp --destination-port 3306 -j DROP
iptables -A INPUT -p icmp -j DROP

I also randomly remove the above iptables rules again, to simulate network jitter.
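To make the jitter reproducible, the add/remove cycle above can be driven from a small helper. A sketch (my own wrapper, not part of the report's setup — actually applying the rules requires root, so by default it only prints the commands):

```python
import random
import subprocess
import time

def build_rules(action):
    """Build the two iptables commands from the report.

    action is "-A" to append the DROP rules or "-D" to delete them again.
    """
    return [
        f"iptables {action} INPUT -p tcp --destination-port 3306 -j DROP",
        f"iptables {action} INPUT -p icmp -j DROP",
    ]

def run(cmd, dry_run=True):
    if dry_run:
        print(cmd)  # preview only; real runs need root privileges
    else:
        subprocess.run(cmd.split(), check=True)

def jitter_once(max_outage=10.0, dry_run=True):
    """Drop MySQL/ICMP ingress for a random interval, then restore it."""
    for cmd in build_rules("-A"):
        run(cmd, dry_run)
    if not dry_run:
        time.sleep(random.uniform(0.5, max_outage))
    for cmd in build_rules("-D"):
        run(cmd, dry_run)
```

Calling `jitter_once()` in a loop with random pauses in between approximates the on/off packet loss that triggered the error here.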
Under some circumstances I then hit "Fatal error: Failed to run 'after_read_event' hook". I don't know the root cause; it looks like some kind of race condition while initializing semi-sync replication.
Also note that whenever I hit this error, running START SLAVE IO_THREAD brings replication back to normal.
Issue Links
- relates to MDEV-32551 "Read semi-sync reply magic number error" warnings on master (Closed)