MariaDB Server
MDEV-33610

Circular replication breaks after upgrading

Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 10.11.7, 10.11.8
    • Fix Version/s: 10.11
    • Component/s: Replication
    • Labels: None
    • Environment: RockyLinux 8.9

    Description

      We have circular replication between a 3-node Galera Cluster and a standalone MariaDB node. Replication in both directions was working well in 10.11.6.

      After upgrading to 10.11.7, the following error occurred and one of the cluster nodes was automatically removed from the cluster:

      2024-03-06 13:34:19 792 [Note] Slave I/O thread: Start asynchronous replication to master 'repl_user@sunny:3306' in log 'mysql-bin.000302' at position 752714051
      2024-03-06 13:34:19 793 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000302' at position 752714051, relay log './rezel-relay-bin.000001' position: 4
      2024-03-06 13:34:19 793 [Note] WSREP: ready state reached
      2024-03-06 13:34:19 792 [Note] Slave I/O thread: connected to master 'repl_user@sunny:3306',replication started in log 'mysql-bin.000302' at position 752714051
      2024-03-06 13:34:20 0 [Note] WSREP: Member 2(unicorn) initiates vote on e171100d-322d-11e8-9957-624639ca8561:397983341,cb28eb6c0c3860ee:  Duplicate entry '0-121295558' for key 'PRIMARY', Error_code: 1062;
      2024-03-06 13:34:20 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:397983341:
         cb28eb6c0c3860ee:   1/3
      Waiting for more votes.
      2024-03-06 13:34:20 2 [Note] WSREP: Got vote request for seqno e171100d-322d-11e8-9957-624639ca8561:397983341
      2024-03-06 13:34:20 0 [Note] WSREP: Member 0(phenex) initiates vote on e171100d-322d-11e8-9957-624639ca8561:397983341,cb28eb6c0c3860ee:  Duplicate entry '0-121295558' for key 'PRIMARY', Error_code: 1062;
      2024-03-06 13:34:20 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:397983341:
         cb28eb6c0c3860ee:   2/3
      Winner: cb28eb6c0c3860ee
      2024-03-06 13:34:20 0 [Note] WSREP: Recovering vote result from history: e171100d-322d-11e8-9957-624639ca8561:397983341,cb28eb6c0c3860ee
      2024-03-06 13:34:20 2 [ERROR] WSREP: Vote 0 (success) on e171100d-322d-11e8-9957-624639ca8561:397983341 is inconsistent with group. Leaving cluster.
      

      Rolling back to 10.11.6 resolves the issue.

      Attachments

      Issue Links

      Activity

            eric@geniqtech.com Eric Ang created issue -
            serg Sergei Golubchik made changes -
            Field Original Value New Value
            Fix Version/s 10.11 [ 27614 ]

            serg Sergei Golubchik added a comment - janlindstrom, do you know of any Galera-related changes between 10.11.6 and 10.11.7 that could possibly have caused that? Can you look at the git log mariadb-10.11.6..mariadb-10.11.7 to see if there's anything particularly suspicious?
            serg Sergei Golubchik made changes -
            Assignee Jan Lindström [ JIRAUSER53125 ]
            serg Sergei Golubchik made changes -
            Description (log output wrapped in {noformat})

            janlindstrom Jan Lindström added a comment - eric@geniqtech.com Can you please provide the full error log and node configuration? Do you use GTIDs? Can you run SHOW CREATE TABLE mysql.gtid_slave_pos;?
            janlindstrom Jan Lindström made changes -
            Status Open [ 1 ] Needs Feedback [ 10501 ]
            eric@geniqtech.com Eric Ang added a comment - - edited

            We are not using GTID; here are the configurations:
            Galera Cluster Node1
            [mysqld]
            server-id=999
            log_bin=mysql-bin
            binlog_format=ROW
            expire_logs_days=10
            log-error=mysqld.log
            slow_query_log=1
            slow_query_log_file=phenex_slow.log
            default_storage_engine=InnoDB
            innodb_autoinc_lock_mode=2
            innodb_flush_log_at_trx_commit=0
            innodb_buffer_pool_size=1G
            max_connections=2000
            log_slave_updates=on

            [galera]
            wsrep_on=ON
            wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
            wsrep_provider_options="gcache.size=2G"
            wsrep_cluster_name="GENIQ"
            wsrep_cluster_address="gcomm://10.130.248.131,10.130.28.224,10.130.2.245"
            wsrep_sst_method=mariabackup
            wsrep_sst_auth=xxx:yyy
            wsrep_node_name=phenex
            wsrep_node_address="10.130.2.245"

            Node2 (This node will also replicate from standalone MariaDB)
            [mysqld]
            server-id=999
            log_bin=mysql-bin
            binlog_format=ROW
            expire_logs_days=10
            log-error=mysqld.log
            slow_query_log=1
            slow_query_log_file=rezel_slow.log
            default_storage_engine=InnoDB
            innodb_autoinc_lock_mode=2
            innodb_flush_log_at_trx_commit=0
            innodb_buffer_pool_size=1G
            max_connections=2000
            log_slave_updates=on
            slave-skip-errors = 1062,1047
            slave_exec_mode=IDEMPOTENT

            [galera]
            wsrep_on=ON
            wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
            wsrep_provider_options="gcache.size=2G"
            wsrep_cluster_name="GENIQ"
            wsrep_cluster_address="gcomm://10.130.248.131,10.130.28.224,10.130.2.245"
            wsrep_sst_method=mariabackup
            wsrep_sst_auth=xxx:yyy
            wsrep_node_name=rezel
            wsrep_node_address="10.130.248.131"
            wsrep_restart_slave=ON

            Node3
            [mysqld]
            server-id=999
            log_bin=mysql-bin
            binlog_format=ROW
            expire_logs_days=10
            log-error=mysqld.log
            slow_query_log=1
            slow_query_log_file=unicorn_slow.log
            default_storage_engine=InnoDB
            innodb_autoinc_lock_mode=2
            innodb_flush_log_at_trx_commit=0
            innodb_buffer_pool_size=1G
            max_connections=2000
            log_slave_updates=on

            [galera]
            wsrep_on=ON
            wsrep_provider=/usr/lib64/galera-4/libgalera_smm.so
            wsrep_provider_options="gcache.size=2G"
            wsrep_cluster_name="GENIQ"
            wsrep_cluster_address="gcomm://10.130.248.131,10.130.28.224,10.130.2.245"
            wsrep_sst_method=mariabackup
            wsrep_sst_auth=xxx:yyy
            wsrep_node_name=unicorn
            wsrep_node_address="10.130.28.224"

            Standalone MariaDB:
            [mysqld]
            server-id=8
            log_bin=mysql-bin
            log_slave_updates=1
            expire_logs_days=7
            log-error=mysqld.log
            binlog_format=ROW
            default_storage_engine=InnoDB
            innodb_autoinc_lock_mode=2
            innodb_flush_log_at_trx_commit=0
            innodb_buffer_pool_size=1G
            max_connections=2000
            slow_query_log=1
            slow_query_log_file=sunny_slow.log
            auto_increment_increment=4
            auto_increment_offset=5
            gtid_domain_id=1
            wsrep_gtid_domain_id=3
            slave-skip-errors = 1062,1032

            [galera]

            MySQL Slave Status for Galera cluster node 2:
            Slave_IO_State: Waiting for master to send event
            Master_Host: sunny
            Master_User: repl_user
            Master_Port: 3306
            Connect_Retry: 60
            Master_Log_File: mysql-bin.000304
            Read_Master_Log_Pos: 283356399
            Relay_Log_File: rezel-relay-bin.000004
            Relay_Log_Pos: 30321205
            Relay_Master_Log_File: mysql-bin.000304
            Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            Replicate_Rewrite_DB:
            Replicate_Do_DB:
            Replicate_Ignore_DB:
            Replicate_Do_Table:
            Replicate_Ignore_Table:
            Replicate_Wild_Do_Table:
            Replicate_Wild_Ignore_Table:
            Last_Errno: 0
            Last_Error:
            Skip_Counter: 0
            Exec_Master_Log_Pos: 283356399
            Relay_Log_Space: 30321561
            Until_Condition: None
            Until_Log_File:
            Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File:
            Master_SSL_CA_Path:
            Master_SSL_Cert:
            Master_SSL_Cipher:
            Master_SSL_Key:
            Seconds_Behind_Master: 0
            Master_SSL_Verify_Server_Cert: No
            Last_IO_Errno: 0
            Last_IO_Error:
            Last_SQL_Errno: 0
            Last_SQL_Error:
            Replicate_Ignore_Server_Ids: 999
            Master_Server_Id: 8
            Master_SSL_Crl:
            Master_SSL_Crlpath:
            Using_Gtid: No
            Gtid_IO_Pos:
            Replicate_Do_Domain_Ids:
            Replicate_Ignore_Domain_Ids:
            Parallel_Mode: optimistic
            SQL_Delay: 0
            SQL_Remaining_Delay: NULL
            Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
            Slave_DDL_Groups: 0
            Slave_Non_Transactional_Groups: 0
            Slave_Transactional_Groups: 213498

            MySQL Slave Status for Standalone MariaDB:
            Slave_IO_State: Waiting for master to send event
            Master_Host: phenex
            Master_User: repl_user
            Master_Port: 23306
            Connect_Retry: 60
            Master_Log_File: mysql-bin.000679
            Read_Master_Log_Pos: 279545343
            Relay_Log_File: sunny-relay-bin.000384
            Relay_Log_Pos: 249622554
            Relay_Master_Log_File: mysql-bin.000679
            Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
            Replicate_Rewrite_DB:
            Replicate_Do_DB:
            Replicate_Ignore_DB:
            Replicate_Do_Table:
            Replicate_Ignore_Table:
            Replicate_Wild_Do_Table:
            Replicate_Wild_Ignore_Table:
            Last_Errno: 0
            Last_Error:
            Skip_Counter: 0
            Exec_Master_Log_Pos: 279545343
            Relay_Log_Space: 249621902
            Until_Condition: None
            Until_Log_File:
            Until_Log_Pos: 0
            Master_SSL_Allowed: No
            Master_SSL_CA_File:
            Master_SSL_CA_Path:
            Master_SSL_Cert:
            Master_SSL_Cipher:
            Master_SSL_Key:
            Seconds_Behind_Master: 0
            Master_SSL_Verify_Server_Cert: No
            Last_IO_Errno: 0
            Last_IO_Error:
            Last_SQL_Errno: 0
            Last_SQL_Error:
            Replicate_Ignore_Server_Ids:
            Master_Server_Id: 999
            Master_SSL_Crl:
            Master_SSL_Crlpath:
            Using_Gtid: No
            Gtid_IO_Pos:
            Replicate_Do_Domain_Ids:
            Replicate_Ignore_Domain_Ids:
            Parallel_Mode: optimistic
            SQL_Delay: 0
            SQL_Remaining_Delay: NULL
            Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
            Slave_DDL_Groups: 0
            Slave_Non_Transactional_Groups: 0
            Slave_Transactional_Groups: 1052196

            Output of SHOW CREATE TABLE mysql.gtid_slave_pos:
            Table: gtid_slave_pos
            Create Table: CREATE TABLE `gtid_slave_pos` (
            `domain_id` int(10) unsigned NOT NULL,
            `sub_id` bigint(20) unsigned NOT NULL,
            `server_id` int(10) unsigned NOT NULL,
            `seq_no` bigint(20) unsigned NOT NULL,
            PRIMARY KEY (`domain_id`,`sub_id`)
            ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci COMMENT='Replication slave GTID position'
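
            For context (my illustration, not part of the report): the duplicate key value '0-121295558' maps onto this table's PRIMARY KEY (domain_id, sub_id), so the failing write is effectively equivalent to something like:

            ```sql
            -- Hypothetical reconstruction; the server_id and seq_no values here
            -- are made up for illustration.
            INSERT INTO mysql.gtid_slave_pos (domain_id, sub_id, server_id, seq_no)
            VALUES (0, 121295558, 8, 42);
            -- If the same row is applied a second time (e.g. once by the async
            -- slave SQL thread and once via a replicated Galera write set),
            -- the second insert fails with:
            -- ERROR 1062 (23000): Duplicate entry '0-121295558' for key 'PRIMARY'
            ```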

            eric@geniqtech.com Eric Ang made changes -
            Attachment mysqld.log [ 73255 ]
            alice Alice Sherepa made changes -
            Status Needs Feedback [ 10501 ] Open [ 1 ]
            alice Alice Sherepa made changes -

            janlindstrom Jan Lindström added a comment - I think this problem will be fixed by https://github.com/MariaDB/server/pull/3111
            eric@geniqtech.com Eric Ang added a comment -

            Sorry, it's not very clear to me. The link you shared states that it only fixes version 10.4?
            The upgrade we performed was from 10.11.6 to 10.11.7, so something must have broken between these two versions.


            janlindstrom Jan Lindström added a comment - eric@geniqtech.com When the pull request is merged, it will then (later) be merged into the other versions as well. The proposed fix should merge cleanly to 10.11 too.
            eric@geniqtech.com Eric Ang added a comment - - edited

            Upgraded to v10.11.8 but the problem is still not fixed.
            Tried converting to GTID replication but it still failed with the same issue.

            Replication from the cluster to the standalone MariaDB works fine, but when I try to start replication in the other direction, the cluster throws the following error:
            2024-05-20 12:24:30 1 [ERROR] Slave SQL: Could not execute Write_rows_v1 event on table mysql.gtid_slave_pos; Duplicate entry '999-160089208' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 111, Internal MariaDB error code: 1062
            2024-05-20 12:24:30 1 [Warning] WSREP: Event 2 Write_rows_v1 apply failed: 121, seqno 477369369
            2024-05-20 12:24:30 0 [Note] WSREP: Member 0(phenex) initiates vote on e171100d-322d-11e8-9957-624639ca8561:477369369,b1f8b91a415d0521: Duplicate entry '999-160089208' for key 'PRIMARY', Error_code: 1062;
            2024-05-20 12:24:30 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369369:
            b1f8b91a415d0521: 1/3
            Waiting for more votes.
            2024-05-20 12:24:30 0 [Note] WSREP: Member 1(unicorn) initiates vote on e171100d-322d-11e8-9957-624639ca8561:477369369,b1f8b91a415d0521: Duplicate entry '999-160089208' for key 'PRIMARY', Error_code: 1062;
            2024-05-20 12:24:30 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369369:
            b1f8b91a415d0521: 2/3
            Winner: b1f8b91a415d0521
            2024-05-20 12:24:30 1 [ERROR] Slave SQL: Could not execute Write_rows_v1 event on table mysql.gtid_slave_pos; Duplicate entry '999-160089209' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 111, Internal MariaDB error code: 1062
            2024-05-20 12:24:30 1 [Warning] WSREP: Event 2 Write_rows_v1 apply failed: 121, seqno 477369371
            2024-05-20 12:24:31 0 [Note] WSREP: (2a52e3a7-aec2, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers: tcp://10.130.248.131:4567
            2024-05-20 12:24:31 0 [Note] WSREP: Member 0(phenex) initiates vote on e171100d-322d-11e8-9957-624639ca8561:477369371,938e13d1bd464741: Duplicate entry '999-160089209' for key 'PRIMARY', Error_code: 1062;
            2024-05-20 12:24:31 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369371:
            938e13d1bd464741: 1/3
            Waiting for more votes.
            2024-05-20 12:24:31 0 [Note] WSREP: Member 1(unicorn) initiates vote on e171100d-322d-11e8-9957-624639ca8561:477369371,938e13d1bd464741: Duplicate entry '999-160089209' for key 'PRIMARY', Error_code: 1062;
            2024-05-20 12:24:31 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369371:
            938e13d1bd464741: 2/3
            Winner: 938e13d1bd464741
            2024-05-20 12:24:31 0 [Note] WSREP: declaring 24e24d2f-8fdb at tcp://10.130.2.245:4567 stable
            2024-05-20 12:24:31 0 [Note] WSREP: forgetting 303b7b5f-8042 (tcp://10.130.248.131:4567)
            2024-05-20 12:24:31 0 [Note] WSREP: (2a52e3a7-aec2, 'tcp://0.0.0.0:4567') turning message relay requesting off
            2024-05-20 12:24:31 1 [Note] WSREP: Got vote request for seqno e171100d-322d-11e8-9957-624639ca8561:477369369
            2024-05-20 12:24:31 1 [Note] WSREP: e171100d-322d-11e8-9957-624639ca8561:477369369 already voted on. Continue.
            2024-05-20 12:24:31 1 [ERROR] Slave SQL: Could not execute Write_rows_v1 event on table mysql.gtid_slave_pos; Duplicate entry '999-160089210' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 111, Internal MariaDB error code: 1062
            2024-05-20 12:24:31 1 [Warning] WSREP: Event 2 Write_rows_v1 apply failed: 121, seqno 477369372
            2024-05-20 12:24:31 0 [Note] WSREP: Node 24e24d2f-8fdb state prim
            2024-05-20 12:24:31 0 [Note] WSREP: view(view_id(PRIM,24e24d2f-8fdb,4) memb {
                24e24d2f-8fdb,0
                2a52e3a7-aec2,0
            } joined {
            } left {
            } partitioned {
                303b7b5f-8042,0
            })
            2024-05-20 12:24:31 0 [Note] WSREP: save pc into disk
            2024-05-20 12:24:31 0 [Note] WSREP: forgetting 303b7b5f-8042 (tcp://10.130.248.131:4567)
            2024-05-20 12:24:31 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
            2024-05-20 12:24:31 0 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
            2024-05-20 12:24:31 0 [Note] WSREP: STATE EXCHANGE: sent state msg: da838621-1660-11ef-8614-d6a98b4414ac
            2024-05-20 12:24:31 0 [Note] WSREP: STATE EXCHANGE: got state msg: da838621-1660-11ef-8614-d6a98b4414ac from 0 (phenex)
            2024-05-20 12:24:31 0 [Note] WSREP: Member 1(unicorn) initiates vote on e171100d-322d-11e8-9957-624639ca8561:477369372,a3353ec3955c0b15: Duplicate entry '999-160089210' for key 'PRIMARY', Error_code: 1062;
            2024-05-20 12:24:31 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369372:
            a3353ec3955c0b15: 1/2
            Waiting for more votes.
            2024-05-20 12:24:31 0 [Note] WSREP: STATE EXCHANGE: got state msg: da838621-1660-11ef-8614-d6a98b4414ac from 1 (unicorn)
            2024-05-20 12:24:31 0 [Note] WSREP: Quorum results:
            version = 6,
            component = PRIMARY,
            conf_id = 3,
            members = 2/2 (joined/total),
            act_id = 477369372,
            last_appl. = 477369290,
            protocols = 3/11/4 (gcs/repl/appl),
            vote policy= 0,
            group UUID = e171100d-322d-11e8-9957-624639ca8561
            2024-05-20 12:24:31 0 [Note] WSREP: Flow-control interval: [23, 23]
            2024-05-20 12:24:31 0 [Note] WSREP: Member 0(phenex) initiates vote on e171100d-322d-11e8-9957-624639ca8561:477369372,a3353ec3955c0b15: Duplicate entry '999-160089210' for key 'PRIMARY', Error_code: 1062;
            2024-05-20 12:24:31 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369372:
            a3353ec3955c0b15: 1/2
            Waiting for more votes.
            2024-05-20 12:24:36 0 [Note] WSREP: cleaning up 303b7b5f-8042 (tcp://10.130.248.131:4567)
            2024-05-20 12:24:58 0 [Note] WSREP: (2a52e3a7-aec2, 'tcp://0.0.0.0:4567') connection established to eb156904-894a tcp://10.130.248.131:4567
            2024-05-20 12:24:59 0 [Note] WSREP: declaring 24e24d2f-8fdb at tcp://10.130.2.245:4567 stable
            2024-05-20 12:24:59 0 [Note] WSREP: declaring eb156904-894a at tcp://10.130.248.131:4567 stable
            2024-05-20 12:24:59 0 [Note] WSREP: Node 24e24d2f-8fdb state prim
            2024-05-20 12:24:59 0 [Note] WSREP: view(view_id(PRIM,24e24d2f-8fdb,5) memb {
                24e24d2f-8fdb,0
                2a52e3a7-aec2,0
                eb156904-894a,0
            } joined {
            } left {
            } partitioned {
            })
            2024-05-20 12:24:59 0 [Note] WSREP: save pc into disk
            2024-05-20 12:24:59 0 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 3
            2024-05-20 12:24:59 0 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
            2024-05-20 12:24:59 0 [Note] WSREP: STATE EXCHANGE: sent state msg: eb6405db-1660-11ef-b852-7adc67637148
            2024-05-20 12:24:59 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb6405db-1660-11ef-b852-7adc67637148 from 0 (phenex)
            2024-05-20 12:24:59 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb6405db-1660-11ef-b852-7adc67637148 from 1 (unicorn)
            2024-05-20 12:24:59 0 [Note] WSREP: STATE EXCHANGE: got state msg: eb6405db-1660-11ef-b852-7adc67637148 from 2 (rezel)
            2024-05-20 12:24:59 0 [Note] WSREP: Quorum results:
            version = 6,
            component = PRIMARY,
            conf_id = 4,
            members = 2/3 (joined/total),
            act_id = 477369395,
            last_appl. = 477369290,
            protocols = 3/11/4 (gcs/repl/appl),
            vote policy= 0,
            group UUID = e171100d-322d-11e8-9957-624639ca8561
            2024-05-20 12:24:59 0 [Note] WSREP: Votes over e171100d-322d-11e8-9957-624639ca8561:477369372:
            a3353ec3955c0b15: 1/2
            Waiting for more votes.
            2024-05-20 12:24:59 0 [Note] WSREP: Flow-control interval: [28, 28]
            2024-05-20 12:25:02 0 [Note] WSREP: (2a52e3a7-aec2, 'tcp://0.0.0.0:4567') turning message relay requesting off

            My steps to enable replication from standalone to cluster:
            1) Get the latest GTID position from the standalone MariaDB using:
            SELECT @@global.gtid_binlog_pos;
            2) On one of the cluster nodes:
            SET GLOBAL gtid_slave_pos = "XXX";
            CHANGE MASTER TO
            MASTER_HOST="xxx",
            MASTER_PORT=3306,
            MASTER_USER="xxx",
            MASTER_PASSWORD="xxx",
            MASTER_USE_GTID=slave_pos;
            START SLAVE;
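After START SLAVE, the replica state and GTID positions can be verified with standard MariaDB statements; a minimal sketch (no assumptions beyond the setup described above):

```sql
-- On the cluster node acting as replica: confirm the I/O and SQL threads
-- are running, and inspect Last_SQL_Error if replication has stopped.
SHOW SLAVE STATUS\G

-- Compare the replica's applied GTID position against its own binlog
-- position to see how the domains (8 and 999 here) are advancing.
SELECT @@global.gtid_slave_pos, @@global.gtid_binlog_pos;
```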

            Here are my updated server configurations:
            Cluster node 1
            [mysqld]
            server-id=999
            gtid_domain_id=245
            log_bin=mysql-bin
            log_slave_updates=on
            …

            [galera]
            wsrep_gtid_mode=ON
            wsrep_gtid_domain_id=999
            …

            Cluster node 2
            [mysqld]
            server-id=999
            gtid_domain_id=224
            log_bin=mysql-bin
            log_slave_updates=on
            …

            [galera]
            wsrep_gtid_mode=ON
            wsrep_gtid_domain_id=999
            …

            Cluster node 3
            [mysqld]
            server-id=999
            gtid_domain_id=131
            log_bin=mysql-bin
            log_slave_updates=on
            …

            [galera]
            wsrep_gtid_mode=ON
            wsrep_gtid_domain_id=999
            …

            Standalone MariaDB
            [mysqld]
            server-id=8
            gtid_domain_id=8
            log_bin=mysql-bin
            log_slave_updates=on
            …

            eric@geniqtech.com Eric Ang made changes -
            Affects Version/s 10.11.8 [ 29630 ]

            janlindstrom Jan Lindström added a comment -

            eric@geniqtech.com It seems this happens because wsrep_gtid_mode=OFF, since with that setting GTIDs are not unique across the cluster.
            eric@geniqtech.com Eric Ang added a comment -

            I ran the following statements on all cluster nodes, and all report the same values.
            According to the documentation at https://mariadb.com/kb/en/using-mariadb-replication-with-mariadb-galera-cluster-configuring-mariadb-r/, all cluster nodes need to use the same "server-id" and "wsrep_gtid_domain_id"; only "gtid_domain_id" differs per node.

            SHOW GLOBAL VARIABLES LIKE 'wsrep_gtid_mode';
            +-----------------+-------+
            | Variable_name   | Value |
            +-----------------+-------+
            | wsrep_gtid_mode | ON    |
            +-----------------+-------+
            1 row in set (0.001 sec)
             
            MariaDB [mysql]> SHOW GLOBAL VARIABLES LIKE 'wsrep_gtid_domain_id';
            +----------------------+-------+
            | Variable_name        | Value |
            +----------------------+-------+
            | wsrep_gtid_domain_id | 999   |
            +----------------------+-------+
            1 row in set (0.001 sec)
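The per-variable checks above can also be done in a single statement across all the variables that must agree; a sketch using the standard SHOW GLOBAL VARIABLES ... WHERE syntax:

```sql
-- server_id and wsrep_gtid_domain_id must match on every cluster node;
-- gtid_domain_id is expected to differ per node.
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('server_id', 'gtid_domain_id',
                        'wsrep_gtid_mode', 'wsrep_gtid_domain_id');
```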
            

            eric@geniqtech.com Eric Ang added a comment -

            One weird thing I noticed when checking the entries in the gtid_slave_pos table: why do some entries have the value 0 for server_id?
            In my server configuration there are only two server_ids:
            999 => Cluster
            8 => Standalone

            SELECT * FROM mysql.gtid_slave_pos;
            +-----------+-----------+-----------+--------+
            | domain_id | sub_id    | server_id | seq_no |
            +-----------+-----------+-----------+--------+
            |         8 | 160091987 |         8 | 298010 |
            |         8 | 160091988 |         8 | 298011 |
            |         8 | 160091989 |         8 | 298012 |
            |         8 | 160091990 |         8 | 298013 |
            |         8 | 160091991 |         8 | 298014 |
            |         8 | 160092621 |         8 | 298015 |
            |         8 | 160092666 |         8 | 298016 |
            |         8 | 160092692 |         8 | 298017 |
            |         8 | 160092800 |         8 | 298018 |
            |         8 | 160092954 |         8 | 298019 |
            |         8 | 160093111 |         8 | 298020 |
            |         8 | 160093113 |         8 | 298021 |
            |         8 | 160093194 |         8 | 298022 |
            |       999 | 160091992 |       999 | 603666 |
            |       999 | 160091993 |       999 | 603667 |
            |       999 | 160091994 |         0 | 603674 |
            |       999 | 160091995 |       999 | 603677 |
            |       999 | 160091996 |         0 | 603674 |
            |       999 | 160091997 |         0 | 603674 |
            |       999 | 160091998 |         0 | 603674 |
            |       999 | 160091999 |         0 | 603674 |
            |       999 | 160092000 |         0 | 603674 |
            |       999 | 160092001 |         0 | 603674 |
            ...
            |       999 | 160093362 |         0 | 603674 |
            +-----------+-----------+-----------+--------+
            1375 rows in set (0.001 sec)
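To isolate the unexpected rows from the full listing above, the table can be filtered directly; a minimal sketch (standard SQL, no assumptions beyond the table shown):

```sql
-- List only the entries whose originating server_id is 0, which should
-- not occur given the configured server_ids (999 and 8).
SELECT domain_id, sub_id, server_id, seq_no
FROM mysql.gtid_slave_pos
WHERE server_id = 0
ORDER BY sub_id;
```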
            


            janlindstrom Jan Lindström added a comment -

            eric@geniqtech.com Just to confirm: do you have a different wsrep_gtid_domain_id value on the second cluster? The value needs to be the same within a cluster, but different clusters should use different values. In circular replication you have at least two clusters, right?
            eric@geniqtech.com Eric Ang added a comment -

            My setup is circular replication between a Galera cluster and a standalone MariaDB server.
            The cluster uses the same value for "wsrep_gtid_domain_id" on all nodes.
            The standalone MariaDB server does not have "wsrep_gtid_domain_id" set, as it is not a cluster.


            People

              janlindstrom Jan Lindström
              eric@geniqtech.com Eric Ang