MariaDB Server / MDEV-29661

WSREP GTID Sync Inconsistency Version 10.5.13 | Bionic & Focal

Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 10.5.13
    • Fix Version/s: 10.5
    • Component/s: None
    • Environment: Ubuntu Bionic and Focal

    Description

      I am running 4 independent MariaDB clusters on Focal and Bionic.

      While running version 10.5.13 I am noticing inconsistencies with WSREP GTID synchronisation.
      This is a problem because I have replicas replicating from one of the multi-master nodes; when they switch to a new timeline they cannot recover, because the `gtid_binlog` position differs between nodes.

      For example, two instances within the same cluster report:
      ```
      node2:
      Variable_name Value
      gtid_binlog_pos 2-2-268
      gtid_binlog_state 2-2-268
      gtid_cleanup_batch_size 64
      gtid_current_pos 2-2-268
      gtid_domain_id 203
      gtid_ignore_duplicates OFF
      gtid_pos_auto_engines
      gtid_slave_pos
      gtid_strict_mode OFF
      wsrep_gtid_domain_id 2
      wsrep_gtid_mode ON
      node0:
      Variable_name Value
      gtid_binlog_pos 2-2-310
      gtid_binlog_state 2-2-310
      gtid_cleanup_batch_size 64
      gtid_current_pos 2-2-310
      gtid_domain_id 200
      gtid_ignore_duplicates OFF
      gtid_pos_auto_engines
      gtid_slave_pos
      gtid_strict_mode OFF
      wsrep_gtid_domain_id 2
      wsrep_gtid_mode ON
      ```
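Since both nodes share `wsrep_gtid_domain_id = 2`, their `gtid_binlog_pos` values should be identical. A minimal Python sketch (node names and GTID values taken from the dump above; the helper names are hypothetical) for spotting this kind of divergence:

```python
def parse_gtid(gtid: str):
    """Split a MariaDB GTID 'domain-server_id-seqno' into integer parts."""
    domain, server_id, seqno = (int(p) for p in gtid.split("-"))
    return domain, server_id, seqno

def node_seqnos(positions: dict) -> dict:
    """Map each node to the sequence number of its gtid_binlog_pos."""
    return {node: parse_gtid(pos)[2] for node, pos in positions.items()}

# Values reported in the description above.
positions = {"node0": "2-2-310", "node2": "2-2-268"}
seqnos = node_seqnos(positions)
print(seqnos)                           # {'node0': 310, 'node2': 268}
print(len(set(seqnos.values())) == 1)   # False -> nodes have diverged
```

With `wsrep_gtid_mode=ON`, all writes replicated through Galera land in the shared wsrep domain, so any seqno mismatch between nodes indicates the inconsistency described here.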

      I have verified that the log_slave_updates, gtid_domain_id, and server_id settings are configured correctly according to the documentation.

      This bug feels very reminiscent of MDEV-10227.

Attachments

Issue Links

Activity

            dye michael added a comment - edited

            Created a new cluster on Focal and Bionic using version 10.5.13.

            At the beginning, the cluster is in a consistent state:

            node0:
                Variable_name	Value
                gtid_binlog_pos	2-2-370
                gtid_binlog_state	2-2-370
                gtid_cleanup_batch_size	64
                gtid_current_pos	2-2-370
                gtid_domain_id	200
                gtid_ignore_duplicates	OFF
                gtid_pos_auto_engines	
                gtid_slave_pos	
                gtid_strict_mode	OFF
                wsrep_gtid_domain_id	2
                wsrep_gtid_mode	ON
            node1:
                Variable_name	Value
                gtid_binlog_pos	2-2-370
                gtid_binlog_state	2-2-370
                gtid_cleanup_batch_size	64
                gtid_current_pos	2-2-370
                gtid_domain_id	201
                gtid_ignore_duplicates	OFF
                gtid_pos_auto_engines	
                gtid_slave_pos	
                gtid_strict_mode	OFF
                wsrep_gtid_domain_id	2
                wsrep_gtid_mode	ON
            node2:
                Variable_name	Value
                gtid_binlog_pos	2-2-370
                gtid_binlog_state	2-2-370
                gtid_cleanup_batch_size	64
                gtid_current_pos	2-2-370
                gtid_domain_id	202
                gtid_ignore_duplicates	OFF
                gtid_pos_auto_engines	
                gtid_slave_pos	
                gtid_strict_mode	OFF
                wsrep_gtid_domain_id	2
                wsrep_gtid_mode	ON
            

            Running a single DELETE command:

            DELETE FROM `TokenRealm` WHERE `TokenRealm`.token_id = 313
            

            gtid_binlog positions after the single DELETE on node0:

            node2:
                Variable_name	Value
                gtid_binlog_pos	2-2-371
                gtid_binlog_state	2-2-371
            node1:
                Variable_name	Value
                gtid_binlog_pos	2-2-371
                gtid_binlog_state	2-2-371
            node0:
                Variable_name	Value
                gtid_binlog_pos	2-2-381
                gtid_binlog_state	2-2-381
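In a healthy cluster, a single replicated transaction should advance every node's seqno by exactly one. A quick sketch (values taken from the dumps above; the variable names are hypothetical) makes the anomaly explicit: node0 jumps by 11 while the other nodes advance by 1:

```python
# Seqno of gtid_binlog_pos before the DELETE (all nodes agreed on 2-2-370).
before = 370

# Seqnos reported by each node after the single DELETE on node0.
after = {"node0": 381, "node1": 371, "node2": 371}

# How far each node's binlog position advanced for one transaction.
deltas = {node: seqno - before for node, seqno in after.items()}
print(deltas)  # {'node0': 11, 'node1': 1, 'node2': 1}
```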
            
            


            People

              Assignee: sysprg Julius Goryavsky
              Reporter: dye michael
              Votes: 2
              Watchers: 3

