MariaDB Server / MDEV-22136

wsrep_restart_slave = 1 does not always work

Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Not a Bug
    • Affects Version/s: 10.2.12, 10.2.32, 10.3 (EOL)
    • Fix Version/s: N/A
    • Component/s: Galera, Replication
    • Labels: None

    Description

      There is an async replication setup between two Galera clusters of 3 nodes each. In some cases the SQL thread on the slave cluster stops with error 1047. The error log contains the following:

      ...
      2020-03-26 18:06:27 139923264952064 [Note] Master 'to_master_0': Slave SQL thread initialized, starting replication in log 'binlog.000009' at position 140233980, relay log '/var/lib/mybinlog/relaylog-to_master_0.000012' position: 126225898; GTID position '100-100-5514,200-200-524506'
      2020-03-26 18:06:55 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-26 18:09:26 139923535603456 [Note] InnoDB: *** Priority TRANSACTION:
      TRANSACTION 11656002, ACTIVE 0 sec starting index read
      mysql tables in use 1, locked 1
      MySQL thread id 1, OS thread handle 139923535603456, query id 6370090 Update_rows_log_event::find_row(5817624)
      UPDATE impu SET user_state=0 WHERE id=40382
      2020-03-26 18:09:26 139923535603456 [Note] InnoDB: *** Victim TRANSACTION:
      TRANSACTION 11655993, ACTIVE 0 sec
      , undo log entries 3
      MySQL thread id 66, OS thread handle 139923264952064, query id 6370086 Unlocking tables
      2020-03-26 18:09:26 139923535603456 [Note] InnoDB: *** WAITING FOR THIS LOCK TO BE GRANTED:
      RECORD LOCKS space id 74 page no 94 n bits 720 index PRIMARY of table `bf_lock_test`.`impu` trx id 11655993 lock_mode X locks rec but not gap
      Record lock, heap no 531 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
       0: len 4; hex 80009dbe; asc     ;;
       1: len 6; hex 000000b1db39; asc      9;;
       2: len 7; hex 13000001d813c6; asc        ;;
       3: len 1; hex 80; asc  ;;
       
      2020-03-26 18:09:26 139923535603456 [Note] InnoDB:  SQL1: UPDATE impu SET user_state=0 WHERE id=40382��|^d
      2020-03-26 18:09:26 139923535603456 [Note] InnoDB:  SQL2: NULL
      2020-03-26 18:09:26 139923535603456 [Note] WSREP: cluster conflict due to high priority abort for threads:
      2020-03-26 18:09:26 139923535603456 [Note] WSREP: Winning thread: 
         THD: 1, mode: applier, state: executing, conflict: no conflict, seqno: 5817624
         SQL: UPDATE impu SET user_state=0 WHERE id=40382��|^d
      2020-03-26 18:09:26 139923535603456 [Note] WSREP: Victim thread: 
         THD: 66, mode: local, state: committing, conflict: no conflict, seqno: -1
         SQL: NULL
      2020-03-26 18:09:26 139923264952064 [ERROR] Master 'to_master_0': Slave SQL: Node has dropped from cluster, Gtid 200-200-549803, Internal MariaDB error code: 1047
      2020-03-26 18:09:26 139923264952064 [Note] Master 'to_master_0': Slave SQL thread exiting, replication stopped in log 'binlog.000009' at position 146989842; GTID position '100-100-5514,200-200-549802'
      2020-03-26 18:09:26 139923264952064 [Note] Master 'to_master_0': WSREP: Slave error due to node temporarily non-primarySQL slave will continue
      2020-03-26 18:09:26 139923264952064 [Note] Master 'to_master_0': WSREP: slave restart: 3
      2020-03-26 18:09:26 139923264952064 [Note] Master 'to_master_0': WSREP: ready state reached
      2020-03-26 18:09:26 139923264952064 [Note] Master 'to_master_0': Slave SQL thread initialized, starting replication in log 'binlog.000009' at position 146989842, relay log '/var/lib/mybinlog/relaylog-to_master_0.000012' position: 132322234; GTID position '100-100-5514,200-200-549802'
      2020-03-26 18:11:38 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-26 18:13:53 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-26 18:23:45 139923535603456 [Note] InnoDB: *** Priority TRANSACTION:
      TRANSACTION 13221641, ACTIVE 0 sec starting index read
      mysql tables in use 1, locked 1
      MySQL thread id 1, OS thread handle 139923535603456, query id 7230511 Update_rows_log_event::find_row(6600241)
      UPDATE impu SET user_state=1 WHERE id=20997
      2020-03-26 18:23:45 139923535603456 [Note] InnoDB: *** Victim TRANSACTION:
      TRANSACTION 13221628, ACTIVE 0 sec
      , undo log entries 3
      MySQL thread id 67, OS thread handle 139923264952064, query id 7230504 Unlocking tables
      2020-03-26 18:23:45 139923535603456 [Note] InnoDB: *** WAITING FOR THIS LOCK TO BE GRANTED:
      RECORD LOCKS space id 74 page no 64 n bits 720 index PRIMARY of table `bf_lock_test`.`impu` trx id 13221628 lock_mode X locks rec but not gap
      Record lock, heap no 586 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
       0: len 4; hex 80005205; asc   R ;;
       1: len 6; hex 000000c9befc; asc       ;;
       2: len 7; hex 6c000002b40ef4; asc l      ;;
       3: len 1; hex 81; asc  ;;
       
      2020-03-26 18:23:45 139923535603456 [Note] InnoDB:  SQL1: UPDATE impu SET user_state=1 WHERE id=20997!�|^d
      2020-03-26 18:23:45 139923535603456 [Note] InnoDB:  SQL2: NULL
      2020-03-26 18:23:45 139923535603456 [Note] WSREP: cluster conflict due to high priority abort for threads:
      2020-03-26 18:23:45 139923535603456 [Note] WSREP: Winning thread: 
         THD: 1, mode: applier, state: executing, conflict: no conflict, seqno: 6600241
         SQL: UPDATE impu SET user_state=1 WHERE id=20997!�|^d
      2020-03-26 18:23:45 139923535603456 [Note] WSREP: Victim thread: 
         THD: 67, mode: local, state: committing, conflict: no conflict, seqno: -1
         SQL: NULL
      2020-03-26 18:23:45 139923264952064 [ERROR] Master 'to_master_0': Slave SQL: Node has dropped from cluster, Gtid 200-200-626547, Internal MariaDB error code: 1047
      2020-03-26 18:23:45 139923264952064 [Note] Master 'to_master_0': Slave SQL thread exiting, replication stopped in log 'binlog.000009' at position 167400561; GTID position '100-100-5514,200-200-626546'
      2020-03-26 18:23:45 139923264952064 [Note] Master 'to_master_0': WSREP: Slave error due to node temporarily non-primarySQL slave will continue
      2020-03-26 18:23:45 139923264952064 [Note] Master 'to_master_0': WSREP: slave restart: 3
      2020-03-26 18:23:45 139923264952064 [Note] Master 'to_master_0': WSREP: ready state reached
      2020-03-26 18:23:45 139923264952064 [Note] Master 'to_master_0': Slave SQL thread initialized, starting replication in log 'binlog.000009' at position 167400561, relay log '/var/lib/mybinlog/relaylog-to_master_0.000012' position: 150809766; GTID position '100-100-5514,200-200-626546'
      ...
      

      So, normally the SQL thread restarts, only to (maybe) hit a conflict on some other row (note the different id values in the messages above). But in some cases it stops and replication does not continue, like this:

      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** Priority TRANSACTION:
      TRANSACTION 360061759, ACTIVE 0 sec starting index read
      mysql tables in use 1, locked 1
      MySQL thread id 1, OS thread handle 139923535603456, query id 195459563 Update_rows_log_event::find_row(179994284)
      UPDATE impu SET user_state=0 WHERE id=46832
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** Victim TRANSACTION:
      TRANSACTION 360061748, ACTIVE 0 sec
      , undo log entries 3
      MySQL thread id 1830, OS thread handle 139923258246912, query id 195459555 Unlocking tables
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** WAITING FOR THIS LOCK TO BE GRANTED:
      RECORD LOCKS space id 74 page no 104 n bits 720 index PRIMARY of table `bf_lock_test`.`impu` trx id 360061748 lock_mode X locks rec but not gap
      Record lock, heap no 501 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
       0: len 4; hex 8000b6f0; asc     ;;
       1: len 6; hex 000015761b34; asc    v 4;;
       2: len 7; hex 7f0000028920e5; asc        ;;
       3: len 1; hex 80; asc  ;;
       
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB:  SQL1: UPDATE impu SET user_state=0 WHERE id=46832K�^d
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB:  SQL2: NULL
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: cluster conflict due to high priority abort for threads:
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: Winning thread: 
         THD: 1, mode: applier, state: executing, conflict: no conflict, seqno: 179994284
         SQL: UPDATE impu SET user_state=0 WHERE id=46832K�^d
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: Victim thread: 
         THD: 1830, mode: local, state: committing, conflict: no conflict, seqno: -1
         SQL: NULL
      2020-03-29  0:48:27 139923258246912 [ERROR] Master 'to_master_0': Slave SQL: Node has dropped from cluster, Gtid 200-200-15220900, Internal MariaDB error code: 1047
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': Slave SQL thread exiting, replication stopped in log 'binlog.000002' at position 353365130; GTID position '100-100-8368143,200-200-15220899'
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': WSREP: Slave error due to node temporarily non-primarySQL slave will continue
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': WSREP: slave restart: 3
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': WSREP: ready state reached
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': Slave SQL thread initialized, starting replication in log 'binlog.000002' at position 353365130, relay log '/var/lib/mybinlog/relaylog-to_master_0.000004' position: 313622044; GTID position '100-100-8368143,200-200-15220899'
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** Priority TRANSACTION:
      TRANSACTION 360061781, ACTIVE 0 sec starting index read
      mysql tables in use 1, locked 1
      MySQL thread id 1, OS thread handle 139923535603456, query id 195459577 Update_rows_log_event::find_row(179994293)
      UPDATE impu SET user_state=1 WHERE id=46841
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** Victim TRANSACTION:
      TRANSACTION 360061773, ACTIVE 0 sec
      , undo log entries 3
      MySQL thread id 1838, OS thread handle 139923258246912, query id 195459574 Unlocking tables
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** WAITING FOR THIS LOCK TO BE GRANTED:
      RECORD LOCKS space id 74 page no 104 n bits 720 index PRIMARY of table `bf_lock_test`.`impu` trx id 360061773 lock_mode X locks rec but not gap
      Record lock, heap no 510 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
       0: len 4; hex 8000b6f9; asc     ;;
       1: len 6; hex 000015761b4d; asc    v M;;
       2: len 7; hex 0e000001ce0547; asc       G;;
       3: len 1; hex 81; asc  ;;
       
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB:  SQL1: UPDATE impu SET user_state=1 WHERE id=46841K�^d
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB:  SQL2: NULL
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: cluster conflict due to high priority abort for threads:
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: Winning thread: 
         THD: 1, mode: applier, state: executing, conflict: no conflict, seqno: 179994293
         SQL: UPDATE impu SET user_state=1 WHERE id=46841K�^d
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: Victim thread: 
         THD: 1838, mode: local, state: committing, conflict: no conflict, seqno: -1
         SQL: NULL
      2020-03-29  0:48:27 139923258246912 [ERROR] Master 'to_master_0': Slave SQL: Node has dropped from cluster, Gtid 200-200-15220902, Internal MariaDB error code: 1047
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': Slave SQL thread exiting, replication stopped in log 'binlog.000002' at position 353365612; GTID position '100-100-8368143,200-200-15220901'
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': WSREP: Slave error due to node temporarily non-primarySQL slave will continue
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': WSREP: slave restart: 3
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': WSREP: ready state reached
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': Slave SQL thread initialized, starting replication in log 'binlog.000002' at position 353365612, relay log '/var/lib/mybinlog/relaylog-to_master_0.000004' position: 313622526; GTID position '100-100-8368143,200-200-15220901'
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** Priority TRANSACTION:
      TRANSACTION 360061796, ACTIVE 0 sec starting index read
      mysql tables in use 1, locked 1
      MySQL thread id 1, OS thread handle 139923535603456, query id 195459587 Update_rows_log_event::find_row(179994299)
      UPDATE impu SET user_state=0 WHERE id=46840
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** Victim TRANSACTION:
      TRANSACTION 360061795, ACTIVE 0 sec starting index read
      mysql tables in use 1, locked 1
      , undo log entries 1
      MySQL thread id 1839, OS thread handle 139923258246912, query id 195459586
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB: *** WAITING FOR THIS LOCK TO BE GRANTED:
      RECORD LOCKS space id 74 page no 104 n bits 720 index PRIMARY of table `bf_lock_test`.`impu` trx id 360061795 lock_mode X locks rec but not gap
      Record lock, heap no 509 PHYSICAL RECORD: n_fields 4; compact format; info bits 0
       0: len 4; hex 8000b6f8; asc     ;;
       1: len 6; hex 000015761b4f; asc    v O;;
       2: len 7; hex 0f000001c80803; asc        ;;
       3: len 1; hex 81; asc  ;;
       
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB:  SQL1: UPDATE impu SET user_state=0 WHERE id=46840K�^d
      2020-03-29  0:48:27 139923535603456 [Note] InnoDB:  SQL2: NULL
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: cluster conflict due to high priority abort for threads:
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: Winning thread: 
         THD: 1, mode: applier, state: executing, conflict: no conflict, seqno: 179994299
         SQL: UPDATE impu SET user_state=0 WHERE id=46840K�^d
      2020-03-29  0:48:27 139923535603456 [Note] WSREP: Victim thread: 
         THD: 1839, mode: local, state: executing, conflict: no conflict, seqno: -1
         SQL: NULL
      2020-03-29  0:48:27 139923258246912 [Note] Master 'to_master_0': Slave SQL thread exiting, replication stopped in log 'binlog.000002' at position 353366094; GTID position '100-100-8368143,200-200-15220903'
      2020-03-29  0:51:31 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-29  0:51:31 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-29  0:55:08 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-29  1:06:04 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-29  1:08:24 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      2020-03-29  1:34:54 139923543996160 [Note] WSREP: Trying to continue unpaused monitor
      

      So, async replication had to be monitored and restarted "manually". After that:

      2020-03-29 13:38:01 139923259475712 [Note] Master 'to_master_0': WSREP: ready state reached
      2020-03-29 13:38:01 139923259475712 [Note] Master 'to_master_0': Slave SQL thread initialized, starting replication in log 'binlog.000002' at position 353366094, relay log '/var/lib/mybinlog/relaylog-to_master_0.000004' position: 313623008; GTID position '100-100-8368143,200-200-15220903'
      

      The question is: why, with wsrep_restart_slave = 1, are there cases where the slave restart does not happen automatically? It looks like a bug.
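      Until the root cause is resolved, the manual restarts described above can be automated externally. The sketch below is a minimal, hypothetical watchdog for this setup: it polls the named replication connection and restarts the SQL thread only when it stopped with the error 1047 seen in the logs. The `run_sql` callable is an assumption (any MySQL client library that returns SHOW SLAVE STATUS fields as a dict would do); the connection name `to_master_0` is taken from the logs above.

```python
# Hypothetical external watchdog sketch -- not part of the server.
# Restarts the slave SQL thread only for the wsrep-related error 1047
# reported in the logs; other errors are left for an operator to inspect.
WSREP_CONFLICT_ERRNO = 1047


def should_restart(sql_running: str, last_errno: int) -> bool:
    """Restart only when the SQL thread stopped on the wsrep error."""
    return sql_running == "No" and last_errno == WSREP_CONFLICT_ERRNO


def watch_once(run_sql, connection_name: str = "to_master_0") -> bool:
    """One polling step. run_sql(query) -> dict of SHOW SLAVE STATUS fields."""
    status = run_sql(f"SHOW SLAVE '{connection_name}' STATUS")
    if should_restart(status.get("Slave_SQL_Running", "Yes"),
                      int(status.get("Last_SQL_Errno", 0) or 0)):
        run_sql(f"START SLAVE '{connection_name}' SQL_THREAD")
        return True
    return False
```

      In practice this would run in a loop with a sleep between polls; the decision is kept in a separate pure function so the restart condition can be tightened (e.g. to also check the GTID position) without touching the polling code.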

      Attachments

        1. mysqld.1.err
          18 kB
        2. mysqld.2.err
          175 kB
        3. mysqld.3.err
          20 kB
        4. stdout.log
          144 kB

        Activity

          valerii Valerii Kravchuk created issue -
          elenst Elena Stepanova made changes -
          Fix Version/s 10.2 [ 14601 ]
          Assignee Jan Lindström [ jplindst ]

          jplindst Jan Lindström (Inactive) added a comment -

          In the case when the slave did not restart automatically, did the node go to a Non-Primary state? MariaDB 10.2.36 contains some important fixes, but I am not sure whether they would help here. To analyze further, we would need full error logs and some instructions on how to reproduce.
          jplindst Jan Lindström (Inactive) made changes -
          Labels need_feedback
          valerii Valerii Kravchuk made changes -
          Labels need_feedback
          jplindst Jan Lindström (Inactive) made changes -
          Status Open [ 1 ] In Progress [ 3 ]
          jplindst Jan Lindström (Inactive) made changes -
          Status In Progress [ 3 ] Stalled [ 10000 ]
          jplindst Jan Lindström (Inactive) made changes -
          Status Stalled [ 10000 ] In Progress [ 3 ]
          julien.fritsch Julien Fritsch made changes -
          Priority Major [ 3 ] Critical [ 2 ]
          jplindst Jan Lindström (Inactive) made changes -
          Assignee Jan Lindström [ jplindst ] Seppo Jaakola [ seppo ]
          jplindst Jan Lindström (Inactive) made changes -
          Assignee Seppo Jaakola [ seppo ] Jan Lindström [ jplindst ]
          jplindst Jan Lindström (Inactive) made changes -
          Status In Progress [ 3 ] Stalled [ 10000 ]
          jplindst Jan Lindström (Inactive) made changes -
          Assignee Jan Lindström [ jplindst ] Seppo Jaakola [ seppo ]
          jplindst Jan Lindström (Inactive) made changes -
          Attachment mysqld.1.err [ 55243 ]
          Attachment mysqld.2.err [ 55244 ]
          Attachment mysqld.3.err [ 55245 ]
          Attachment stdout.log [ 55246 ]
          jplindst Jan Lindström (Inactive) made changes -
          Affects Version/s 10.3 [ 22126 ]
          seppo Seppo Jaakola made changes -
          Status Stalled [ 10000 ] In Progress [ 3 ]
          seppo Seppo Jaakola added a comment -

          The async replication slave restart feature (as configured by the wsrep_restart_slave parameter) was developed to automate the slave thread restart in the situation where the node operating as replication slave in the cluster drops out of the cluster and later joins back. In old versions, the async slave thread would stop as soon as the node dropped from the cluster, and when the node joined back, the async slave thread remained stopped, although the slave node was by then healthy and capable of applying the async replication stream.
          This is also what the documentation currently says about the wsrep_restart_slave parameter.

          However, I can see that later development has extended the effect of wsrep_restart_slave to also cover cases where async replication event handling fails with an error during applying. The errors checked are conflicts with Galera replication only; if applying fails for a "natural" problem, not related to Galera, then slave thread restarting does not kick in. Reading the error logs above, it appears that the apply-time error checking does not detect all possible Galera replication conflicts, and the slave thread remains stopped because of this.

          All in all, the behavior of the wsrep_restart_slave parameter has deviated from the original requirement specification. It would be possible to continue developing this deviated behavior further, but there are some risks and "unknowns" involved. Note that the original idea of wsrep_restart_slave is that async replication works successfully, and restarting happens only when the node is joining back to the cluster. In the deviated behavior, async replication has conflicts with Galera replication, and we now need to decide how to resolve these conflicts. This raises questions such as:

          • Should async replication continue retrying the same conflicting event?
          • What if retrying the same event always fails; would the slave SQL thread remain in an eternal loop?
          • Or should it skip the conflicting event and continue with the next event?
          • How does skipping or retrying affect the application's business logic?
          • The async master server will probably become inconsistent with the slave node, and this may amplify more conflicts in the future.
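          The trade-off in the first three questions can be made concrete with a small policy sketch (illustrative only, not server behavior; all names are hypothetical): bound the retries to avoid the eternal loop, then either skip the event (accepting possible master/slave divergence) or stop and leave the decision to the operator.

```python
# Illustrative conflict-resolution policy sketch (hypothetical, not how
# the server behaves): bounded retries, then skip or stop.
MAX_RETRIES = 3


def next_action(attempts: int, skip_on_give_up: bool) -> str:
    """Decide what the slave thread should do after a replication conflict."""
    if attempts < MAX_RETRIES:
        return "retry"  # re-execute the same conflicting event
    # Give up: either skip the event (risking divergence from the async
    # master) or stop and leave the conflict to the operator.
    return "skip" if skip_on_give_up else "stop"
```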

          There may be users who use wsrep_restart_slave as originally designed, and this automatic replication conflict resolution already violates their use case; extending it further would just violate it even more. To handle backward compatibility, it would be best to have additional configuration for enabling conflict resolution. E.g. wsrep_restart_slave could be a bit field, with the following flags to trigger a restart on:

          • node join
          • conflict with Galera replication
          • conflict with local transaction in slave node
          • data inconsistency

          There are also variables for handling replication slave operation, such as slave_skip_errors and slave_transaction_retry_errors. It might be possible to extend these to cover conflicts with Galera replication as well.
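          The proposed bit-field interpretation can be sketched as follows; the flag names and bit values are hypothetical, derived only from the list above.

```python
# Sketch of the proposed bit-field reading of wsrep_restart_slave.
# Flag names and values are hypothetical illustrations.
from enum import IntFlag


class RestartSlaveOn(IntFlag):
    NODE_JOIN = 1           # original behavior: restart when the node rejoins
    GALERA_CONFLICT = 2     # conflict with Galera replication
    LOCAL_CONFLICT = 4      # conflict with a local transaction on the slave node
    DATA_INCONSISTENCY = 8


def restarts_on(setting: int, event: RestartSlaveOn) -> bool:
    """True if the configured bit field enables a restart for this event."""
    return bool(RestartSlaveOn(setting) & event)
```

          With this scheme, the current value 1 would keep only the originally documented node-join behavior, while e.g. 3 would also opt in to restarts on Galera replication conflicts.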

          seppo Seppo Jaakola made changes -
          Labels need_feedback
          seppo Seppo Jaakola made changes -
          Status In Progress [ 3 ] Stalled [ 10000 ]
          julien.fritsch Julien Fritsch made changes -
          Assignee Seppo Jaakola [ seppo ] Valerii Kravchuk [ valerii ]
          julien.fritsch Julien Fritsch made changes -
          Assignee Valerii Kravchuk [ valerii ] Jan Lindström [ jplindst ]
          julien.fritsch Julien Fritsch made changes -
          Labels need_feedback

          jplindst Jan Lindström (Inactive) added a comment -

          I would say this is not a bug, as the documented and designed way the wsrep_restart_slave parameter should work is still there. Some effort has been made to extend this on a best-effort basis, but it does not work in all error cases. In those cases it is then not intended to work.
          jplindst Jan Lindström (Inactive) made changes -
          Fix Version/s N/A [ 14700 ]
          Fix Version/s 10.2 [ 14601 ]
          Resolution Not a Bug [ 6 ]
          Status Stalled [ 10000 ] Closed [ 6 ]
          serg Sergei Golubchik made changes -
          Workflow MariaDB v3 [ 106696 ] MariaDB v4 [ 157548 ]
          mariadb-jira-automation Jira Automation (IT) made changes -
          Zendesk Related Tickets 158900

          People

            Assignee: jplindst Jan Lindström (Inactive)
            Reporter: valerii Valerii Kravchuk
            Votes: 0
            Watchers: 4

