MariaDB Server / MDEV-27568

Parallel async replication hangs on a Galera node

Details

    Description

      This is still an issue in the latest GA release and can be reproduced with sysbench.

      In a setup where a master replicates to a Galera node acting as a slave with slave-parallel-threads > 1, the slave threads hang, which brings replication to a halt.
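
      The attached configuration files contain the full setup and are not reproduced here. A hypothetical sketch of the slave-side setup on the Galera node, with host, port and user taken from the SHOW SLAVE STATUS output below and the parallel mode assumed from the analysis at the end of this description, might look like:

      STOP SLAVE;
      SET GLOBAL slave_parallel_threads = 4;          -- processlist below shows 4 worker threads; any value > 1 can hang
      SET GLOBAL slave_parallel_mode = 'optimistic';  -- assumption based on the analysis below
      CHANGE MASTER TO
        MASTER_HOST = '127.0.0.1',
        MASTER_PORT = 15999,
        MASTER_USER = 'root',
        MASTER_USE_GTID = slave_pos;
      START SLAVE;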

      How to reproduce:

      Run the attached script on master.
      On slave:

      MariaDB [test]> show processlist;
      +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
      | Id | User        | Host            | db   | Command | Time | State                                                                          | Info             | Progress |
      +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
      |  1 | system user |                 | NULL | Sleep   |  840 | NULL                                                                           | NULL             |    0.000 |
      |  2 | system user |                 | NULL | Sleep   |  840 | wsrep aborter idle                                                             | NULL             |    0.000 |
      |  5 | system user |                 | NULL | Sleep   |  838 | NULL                                                                           | NULL             |    0.000 |
      |  6 | system user |                 | NULL | Sleep   |  838 | NULL                                                                           | NULL             |    0.000 |
      |  7 | system user |                 | NULL | Sleep   |  838 | NULL                                                                           | NULL             |    0.000 |
      |  8 | system user |                 | NULL | Connect |  263 | Waiting for prior transaction to start commit before starting next transaction | NULL             |    0.000 |
      |  9 | system user |                 | NULL | Connect |  263 | closing tables                                                                 | NULL             |    0.000 |
      | 10 | system user |                 | NULL | Connect |  263 | closing tables                                                                 | NULL             |    0.000 |
      | 11 | system user |                 | NULL | Connect |  263 | closing tables                                                                 | NULL             |    0.000 |
      | 12 | root        | localhost:52597 | test | Query   |    0 | init                                                                           | show processlist |    0.000 |
      | 13 | system user |                 | NULL | Connect |  336 | Waiting for master to send event                                               | NULL             |    0.000 |
      | 14 | system user |                 | NULL | Connect |  336 | Slave has read all relay log; waiting for the slave I/O thread to update it    | NULL             |    0.000 |
      +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
      12 rows in set (0.00 sec)
      

      MariaDB [test]> show slave status\G
      *************************** 1. row ***************************
                     Slave_IO_State: Waiting for master to send event
                        Master_Host: 127.0.0.1
                        Master_User: root
                        Master_Port: 15999
                      Connect_Retry: 60
                    Master_Log_File: nirbhay-VirtualBox-bin.000001
                Read_Master_Log_Pos: 146163
                     Relay_Log_File: nirbhay-VirtualBox-relay-bin.000002
                      Relay_Log_Pos: 1463
              Relay_Master_Log_File: nirbhay-VirtualBox-bin.000001
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: Yes
                    Replicate_Do_DB: 
                Replicate_Ignore_DB: 
                 Replicate_Do_Table: 
             Replicate_Ignore_Table: 
            Replicate_Wild_Do_Table: 
        Replicate_Wild_Ignore_Table: 
                         Last_Errno: 0
                         Last_Error: 
                       Skip_Counter: 0
                Exec_Master_Log_Pos: 1163
                    Relay_Log_Space: 146773
                    Until_Condition: None
                     Until_Log_File: 
                      Until_Log_Pos: 0
                 Master_SSL_Allowed: No
                 Master_SSL_CA_File: 
                 Master_SSL_CA_Path: 
                    Master_SSL_Cert: 
                  Master_SSL_Cipher: 
                     Master_SSL_Key: 
              Seconds_Behind_Master: 268
      Master_SSL_Verify_Server_Cert: No
                      Last_IO_Errno: 0
                      Last_IO_Error: 
                     Last_SQL_Errno: 0
                     Last_SQL_Error: 
        Replicate_Ignore_Server_Ids: 
                   Master_Server_Id: 1
                     Master_SSL_Crl: 
                 Master_SSL_Crlpath: 
                         Using_Gtid: Slave_Pos
                        Gtid_IO_Pos: 0-1-1006
      1 row in set (0.00 sec)
      

      The node also hangs during shutdown.

      141009 11:38:17 [Note] ./bin/mysqld: Normal shutdown
       
      141009 11:38:17 [Note] WSREP: Stop replication
      141009 11:38:17 [Note] WSREP: Provider disconnect
      141009 11:38:17 [Note] WSREP: Closing send monitor...
      141009 11:38:17 [Note] WSREP: Closed send monitor.
      141009 11:38:17 [Note] WSREP: gcomm: terminating thread
      141009 11:38:17 [Note] WSREP: gcomm: joining thread
      141009 11:38:17 [Note] WSREP: gcomm: closing backend
      141009 11:38:17 [Note] WSREP: view((empty))
      141009 11:38:17 [Note] WSREP: Received self-leave message.
      141009 11:38:17 [Note] WSREP: gcomm: closed
      141009 11:38:17 [Note] WSREP: Flow-control interval: [0, 0]
      141009 11:38:17 [Note] WSREP: Received SELF-LEAVE. Closing connection.
      141009 11:38:17 [Note] WSREP: Shifting SYNCED -> CLOSED (TO: 10)
      141009 11:38:17 [Note] WSREP: RECV thread exiting 0: Success
      141009 11:38:17 [Note] WSREP: recv_thread() joined.
      141009 11:38:17 [Note] WSREP: Closing replication queue.
      141009 11:38:17 [Note] WSREP: Closing slave action queue.
      141009 11:38:19 [Note] WSREP: waiting for client connections to close: 5
      

      Why this is happening

      The actual issue is that Galera replicates the writeset during the prepare
      phase, and this creates a deadlock.

      For example, consider the replication topology A -> B <==> C (A and B use
      optimistic parallel replication; B and C are Galera cluster nodes).
      Assume two inserts, T1 (GTID x-x-1) and T2 (GTID x-x-2), arrive from
      master A at slave B.
      The second insert (T2) prepares faster than the first, so it has already
      sent its writeset to node C and is now queued, waiting for its turn to
      commit. Meanwhile the first insert (T1) runs its prepare on Galera
      (wsrep_run_wsrep_commit) but gets stuck, because T2 has not yet run
      post_commit on Galera and the Galera state is therefore still S_WAITING.
      T2 cannot run post_commit on Galera because it is waiting for T1 to
      commit first, and T1 cannot commit because it is stuck in the prepare
      stage waiting for T2 to clear the Galera state.
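
      For reference, a minimal, hypothetical workload on master A matching this scenario could be two independent single-row inserts; the real reproduction uses the attached script or sysbench, and the table names here are placeholders:

      CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
      CREATE TABLE t2 (a INT PRIMARY KEY) ENGINE=InnoDB;
      INSERT INTO t1 VALUES (1);  -- arrives at slave B as T1 (GTID x-x-1)
      INSERT INTO t2 VALUES (1);  -- arrives at slave B as T2 (GTID x-x-2)
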
      Backtrace from GDB:

      Gtid_seq_no= 2
      Thread 34 (Thread 0x7fcd966d2700 (LWP 23891)):
      #0  0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
      /usr/lib/libpthread.so.0
      #1  0x00005569d607d380 in safe_cond_wait (cond=0x7fcd854078e8,
      mp=0x7fcd85407838, file=0x5569d6240360
      "/home/sachin/10.1/server/include/mysql/psi/mysql_thread.h",
      line=1154) at /home/sachin/10.1/server/mysys/thr_mutex.c:493
      #2  0x00005569d5aec4d0 in inline_mysql_cond_wait (that=0x7fcd854078e8,
      mutex=0x7fcd85407838, src_file=0x5569d6240cb8
      "/home/sachin/10.1/server/sql/log.cc", src_line=7387) at
      /home/sachin/10.1/server/include/mysql/psi/mysql_thread.h:1154
      #3  0x00005569d5afeee5 in MYSQL_BIN_LOG::queue_for_group_commit
      (this=0x5569d692d7c0 <mysql_bin_log>, orig_entry=0x7fcd966cf440) at
      /home/sachin/10.1/server/sql/log.cc:7387
      #4  0x00005569d5aff5c9 in
      MYSQL_BIN_LOG::write_transaction_to_binlog_events (this=0x5569d692d7c0
      <mysql_bin_log>, entry=0x7fcd966cf440) at
      /home/sachin/10.1/server/sql/log.cc:7607
      #5  0x00005569d5afecff in MYSQL_BIN_LOG::write_transaction_to_binlog
      (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0,
      cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
      using_stmt_cache=true, using_trx_cache=true) at
      /home/sachin/10.1/server/sql/log.cc:7290
      #6  0x00005569d5af0ce6 in binlog_flush_cache (thd=0x7fcd84c068b0,
      cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
      using_stmt=true, using_trx=true) at
      /home/sachin/10.1/server/sql/log.cc:1751
      #7  0x00005569d5af11bb in binlog_commit_flush_xid_caches
      (thd=0x7fcd84c068b0, cache_mngr=0x7fcd84c72c70, all=true, xid=2) at
      /home/sachin/10.1/server/sql/log.cc:1859
      #8  0x00005569d5b045c8 in MYSQL_BIN_LOG::log_and_order
      (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0, xid=2,
      all=true, need_prepare_ordered=false, need_commit_ordered=true) at
      /home/sachin/10.1/server/sql/log.cc:9575
      #9  0x00005569d5a1ec0d in ha_commit_trans (thd=0x7fcd84c068b0,
      all=true) at /home/sachin/10.1/server/sql/handler.cc:1497
      #10 0x00005569d5925e7e in trans_commit (thd=0x7fcd84c068b0) at
      /home/sachin/10.1/server/sql/transaction.cc:235
      #11 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
      (this=0x7fcd8542a770, rgi=0x7fcd85407800) at
      /home/sachin/10.1/server/sql/log_event.cc:7720
      #12 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a770,
      rgi=0x7fcd85407800) at /home/sachin/10.1/server/sql/log_event.h:1343
      #13 0x00005569d573987e in apply_event_and_update_pos_apply
      (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800, reason=0)
      at /home/sachin/10.1/server/sql/slave.cc:3479
      #14 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
      (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800) at
      /home/sachin/10.1/server/sql/slave.cc:3623
      #15 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85424770,
      rpt=0x7fcd85421c88) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
      #16 0x00005569d597ed57 in handle_rpl_parallel_thread
      (arg=0x7fcd85421c88) at
      /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
      #17 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd85415570) at
      /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
      #18 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
      #19 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
       
      Gtid_seq_no= 1
      Thread 33 (Thread 0x7fcd9671d700 (LWP 23890)):
      #0  0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
      /usr/lib/libpthread.so.0
      #1  0x00007fcd9e7778ab in gu::Lock::wait (this=0x7fcd9671a0c0,
      cond=...) at galerautils/src/gu_mutex.hpp:40
      #2  galera::Monitor<galera::ReplicatorSMM::CommitOrder>::enter
      (this=this@entry=0x7fcda12d5da0, obj=...) at
      galera/src/monitor.hpp:124
      #3  0x00007fcd9e771f28 in galera::ReplicatorSMM::pre_commit
      (this=0x7fcda12d5000, trx=0x7fcd8507e000, meta=<optimized out>) at
      galera/src/replicator_smm.cpp:796
      #5  0x00005569d59864d0 in wsrep_run_wsrep_commit (thd=0x7fcd85006a70,
      all=true) at /home/sachin/10.1/server/sql/wsrep_hton.cc:492
      #6  0x00005569d5984d6a in wsrep_prepare (hton=0x7fcda583e270,
      thd=0x7fcd85006a70, all=true) at
      /home/sachin/10.1/server/sql/wsrep_hton.cc:208
      #7  0x00005569d5a1e1b0 in prepare_or_error (ht=0x7fcda583e270,
      thd=0x7fcd85006a70, all=true) at
      /home/sachin/10.1/server/sql/handler.cc:1196
      #8  0x00005569d5a1ea1c in ha_commit_trans (thd=0x7fcd85006a70,
      all=true) at /home/sachin/10.1/server/sql/handler.cc:1475
      #9  0x00005569d5925e7e in trans_commit (thd=0x7fcd85006a70) at
      /home/sachin/10.1/server/sql/transaction.cc:235
      #10 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
      (this=0x7fcd8542a570, rgi=0x7fcd85407000) at
      /home/sachin/10.1/server/sql/log_event.cc:7720
      #11 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a570,
      rgi=0x7fcd85407000) at /home/sachin/10.1/server/sql/log_event.h:1343
      #12 0x00005569d573987e in apply_event_and_update_pos_apply
      (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000, reason=0)
      at /home/sachin/10.1/server/sql/slave.cc:3479
      #13 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
      (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000) at
      /home/sachin/10.1/server/sql/slave.cc:3623
      #14 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85423870,
      rpt=0x7fcd85421a80) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
      #15 0x00005569d597ed57 in handle_rpl_parallel_thread
      (arg=0x7fcd85421a80) at
      /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
      #16 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd854152f0) at
      /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
      #17 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
      #18 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
      

      Attachments

        1. my.cnf (0.4 kB) - Ramesh Sivaraman
        2. n1.cnf (1.0 kB) - Ramesh Sivaraman
        3. n2.cnf (0.9 kB) - Ramesh Sivaraman
        4. node1.err (117 kB) - Ramesh Sivaraman

        Issue Links

          Activity

            bbancroft Bryan Bancroft (Inactive) created issue -
            bbancroft Bryan Bancroft (Inactive) made changes -
            Field Original Value New Value
            bbancroft Bryan Bancroft (Inactive) made changes -
            Description This is still an issue in latest GA. Can be replicated via sysbench


            In a setup where a master replicates to a Galera node acting as a slave with slave-parallel-threads > 1, the slave threads hang which brings replication to a halt.

            How to reproduce:

            Run the attached script on master.
            On slave:

            MariaDB [test]> show processlist;
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            | Id | User | Host | db | Command | Time | State | Info | Progress |
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            | 1 | system user | | NULL | Sleep | 840 | NULL | NULL | 0.000 |
            | 2 | system user | | NULL | Sleep | 840 | wsrep aborter idle | NULL | 0.000 |
            | 5 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 6 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 7 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 8 | system user | | NULL | Connect | 263 | Waiting for prior transaction to start commit before starting next transaction | NULL | 0.000 |
            | 9 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 10 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 11 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 12 | root | localhost:52597 | test | Query | 0 | init | show processlist | 0.000 |
            | 13 | system user | | NULL | Connect | 336 | Waiting for master to send event | NULL | 0.000 |
            | 14 | system user | | NULL | Connect | 336 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL | 0.000 |
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            12 rows in set (0.00 sec)
             {code}
            MariaDB [test]> show slave status\G
            *************************** 1. row ***************************
                           Slave_IO_State: Waiting for master to send event
                              Master_Host: 127.0.0.1
                              Master_User: root
                              Master_Port: 15999
                            Connect_Retry: 60
                          Master_Log_File: nirbhay-VirtualBox-bin.000001
                      Read_Master_Log_Pos: 146163
                           Relay_Log_File: nirbhay-VirtualBox-relay-bin.000002
                            Relay_Log_Pos: 1463
                    Relay_Master_Log_File: nirbhay-VirtualBox-bin.000001
                         Slave_IO_Running: Yes
                        Slave_SQL_Running: Yes
                          Replicate_Do_DB:
                      Replicate_Ignore_DB:
                       Replicate_Do_Table:
                   Replicate_Ignore_Table:
                  Replicate_Wild_Do_Table:
              Replicate_Wild_Ignore_Table:
                               Last_Errno: 0
                               Last_Error:
                             Skip_Counter: 0
                      Exec_Master_Log_Pos: 1163
                          Relay_Log_Space: 146773
                          Until_Condition: None
                           Until_Log_File:
                            Until_Log_Pos: 0
                       Master_SSL_Allowed: No
                       Master_SSL_CA_File:
                       Master_SSL_CA_Path:
                          Master_SSL_Cert:
                        Master_SSL_Cipher:
                           Master_SSL_Key:
                    Seconds_Behind_Master: 268
            Master_SSL_Verify_Server_Cert: No
                            Last_IO_Errno: 0
                            Last_IO_Error:
                           Last_SQL_Errno: 0
                           Last_SQL_Error:
              Replicate_Ignore_Server_Ids:
                         Master_Server_Id: 1
                           Master_SSL_Crl:
                       Master_SSL_Crlpath:
                               Using_Gtid: Slave_Pos
                              Gtid_IO_Pos: 0-1-1006
            1 row in set (0.00 sec)
             {code}
            The node also hangs when shutdown.

             {code}
            141009 11:38:17 [Note] ./bin/mysqld: Normal shutdown
            141009 11:38:17 [Note] WSREP: Stop replication
            141009 11:38:17 [Note] WSREP: Provider disconnect
            141009 11:38:17 [Note] WSREP: Closing send monitor...
            141009 11:38:17 [Note] WSREP: Closed send monitor.
            141009 11:38:17 [Note] WSREP: gcomm: terminating thread
            141009 11:38:17 [Note] WSREP: gcomm: joining thread
            141009 11:38:17 [Note] WSREP: gcomm: closing backend
            141009 11:38:17 [Note] WSREP: view((empty))
            141009 11:38:17 [Note] WSREP: Received self-leave message.
            141009 11:38:17 [Note] WSREP: gcomm: closed
            141009 11:38:17 [Note] WSREP: Flow-control interval: [0, 0]
            141009 11:38:17 [Note] WSREP: Received SELF-LEAVE. Closing connection.
            141009 11:38:17 [Note] WSREP: Shifting SYNCED -> CLOSED (TO: 10)
            141009 11:38:17 [Note] WSREP: RECV thread exiting 0: Success
            141009 11:38:17 [Note] WSREP: recv_thread() joined.
            141009 11:38:17 [Note] WSREP: Closing replication queue.
            141009 11:38:17 [Note] WSREP: Closing slave action queue.
            141009 11:38:19 [Note] WSREP: waiting for client connections to close: 5
            Why this is happening

            So the actual issue , Galera sends event at the time of prepare phase
            and And creates deadlock.

            For example lets us consider the replication A -> B <==> C (A,B
            parallel replication optimistic, B,C Galera cluster nodes)
            Lets assume 2 inserts(T1 gtid x-x-1 and T2 x-x-2) from master A arrive
            to slave B.
            2nd insert prepares faster then 1st insert, So it has already sent the
            writeset to node C. Now it is the queue waiting for its turn to commit
            While the first insert does prepare on galera
            (wsrep_run_wsrep_commit), but it is stuck because T2 transaction still
            haven't run post_commit on galera
            so galera state is still in S_WAITING
            T2 cant run post_commit on galera because it is waiting for T1 commit
            , T1 cant commit because it is waiting in prepare stage for
            transaction T2 to clear the galera state.
             {code}
            Backtrace from gdb

             
            {code}
            Gtid_seq_no= 2`
            ```Thread 34 (Thread 0x7fcd966d2700 (LWP 23891)):```
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00005569d607d380 in safe_cond_wait (cond=0x7fcd854078e8,
            mp=0x7fcd85407838, file=0x5569d6240360
            "/home/sachin/10.1/server/include/mysql/psi/mysql_thread.h",
            line=1154) at /home/sachin/10.1/server/mysys/thr_mutex.c:493
            #2 0x00005569d5aec4d0 in inline_mysql_cond_wait (that=0x7fcd854078e8,
            mutex=0x7fcd85407838, src_file=0x5569d6240cb8
            "/home/sachin/10.1/server/sql/log.cc", src_line=7387) at
            /home/sachin/10.1/server/include/mysql/psi/mysql_thread.h:1154
            #3 0x00005569d5afeee5 in MYSQL_BIN_LOG::queue_for_group_commit
            (this=0x5569d692d7c0 <mysql_bin_log>, orig_entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7387
            #4 0x00005569d5aff5c9 in
            MYSQL_BIN_LOG::write_transaction_to_binlog_events (this=0x5569d692d7c0
            <mysql_bin_log>, entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7607
            #5 0x00005569d5afecff in MYSQL_BIN_LOG::write_transaction_to_binlog
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt_cache=true, using_trx_cache=true) at
            /home/sachin/10.1/server/sql/log.cc:7290
            #6 0x00005569d5af0ce6 in binlog_flush_cache (thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt=true, using_trx=true) at
            /home/sachin/10.1/server/sql/log.cc:1751
            #7 0x00005569d5af11bb in binlog_commit_flush_xid_caches
            (thd=0x7fcd84c068b0, cache_mngr=0x7fcd84c72c70, all=true, xid=2) at
            /home/sachin/10.1/server/sql/log.cc:1859
            #8 0x00005569d5b045c8 in MYSQL_BIN_LOG::log_and_order
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0, xid=2,
            all=true, need_prepare_ordered=false, need_commit_ordered=true) at
            /home/sachin/10.1/server/sql/log.cc:9575
            #9 0x00005569d5a1ec0d in ha_commit_trans (thd=0x7fcd84c068b0,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1497
            #10 0x00005569d5925e7e in trans_commit (thd=0x7fcd84c068b0) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #11 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a770, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #12 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a770,
            rgi=0x7fcd85407800) at /home/sachin/10.1/server/sql/log_event.h:1343
            #13 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #14 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #15 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85424770,
            rpt=0x7fcd85421c88) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #16 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421c88) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #17 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd85415570) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #18 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #19 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
             
            Gtid_seq_no= 1
            Thread 33 (Thread 0x7fcd9671d700 (LWP 23890)):
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00007fcd9e7778ab in gu::Lock::wait (this=0x7fcd9671a0c0,
            cond=...) at galerautils/src/gu_mutex.hpp:40
            #2 galera::Monitor<galera::ReplicatorSMM::CommitOrder>::enter
            (this=this@entry=0x7fcda12d5da0, obj=...) at
            galera/src/monitor.hpp:124
            #3 0x00007fcd9e771f28 in galera::ReplicatorSMM::pre_commit
            (this=0x7fcda12d5000, trx=0x7fcd8507e000, meta=<optimized out>) at
            galera/src/replicator_smm.cpp:796
            #5 0x00005569d59864d0 in wsrep_run_wsrep_commit (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/wsrep_hton.cc:492
            #6 0x00005569d5984d6a in wsrep_prepare (hton=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/wsrep_hton.cc:208
            #7 0x00005569d5a1e1b0 in prepare_or_error (ht=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/handler.cc:1196
            #8 0x00005569d5a1ea1c in ha_commit_trans (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1475
            #9 0x00005569d5925e7e in trans_commit (thd=0x7fcd85006a70) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #10 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a570, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #11 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a570,
            rgi=0x7fcd85407000) at /home/sachin/10.1/server/sql/log_event.h:1343
            #12 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #13 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #14 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85423870,
            rpt=0x7fcd85421a80) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #15 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421a80) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #16 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd854152f0) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #17 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #18 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
            {code}
            This is still an issue in latest GA. Can be replicated via sysbench


            In a setup where a master replicates to a Galera node acting as a slave with slave-parallel-threads > 1, the slave threads hang which brings replication to a halt.

            How to reproduce:

            Run the attached script on master.
            On slave:

            MariaDB [test]> show processlist;
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            | Id | User | Host | db | Command | Time | State | Info | Progress |
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            | 1 | system user | | NULL | Sleep | 840 | NULL | NULL | 0.000 |
            | 2 | system user | | NULL | Sleep | 840 | wsrep aborter idle | NULL | 0.000 |
            | 5 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 6 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 7 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 8 | system user | | NULL | Connect | 263 | Waiting for prior transaction to start commit before starting next transaction | NULL | 0.000 |
            | 9 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 10 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 11 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 12 | root | localhost:52597 | test | Query | 0 | init | show processlist | 0.000 |
            | 13 | system user | | NULL | Connect | 336 | Waiting for master to send event | NULL | 0.000 |
            | 14 | system user | | NULL | Connect | 336 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL | 0.000 |
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            12 rows in set (0.00 sec)
             {code}
            MariaDB [test]> show slave status\G
            *************************** 1. row ***************************
                           Slave_IO_State: Waiting for master to send event
                              Master_Host: 127.0.0.1
                              Master_User: root
                              Master_Port: 15999
                            Connect_Retry: 60
                          Master_Log_File: nirbhay-VirtualBox-bin.000001
                      Read_Master_Log_Pos: 146163
                           Relay_Log_File: nirbhay-VirtualBox-relay-bin.000002
                            Relay_Log_Pos: 1463
                    Relay_Master_Log_File: nirbhay-VirtualBox-bin.000001
                         Slave_IO_Running: Yes
                        Slave_SQL_Running: Yes
                          Replicate_Do_DB:
                      Replicate_Ignore_DB:
                       Replicate_Do_Table:
                   Replicate_Ignore_Table:
                  Replicate_Wild_Do_Table:
              Replicate_Wild_Ignore_Table:
                               Last_Errno: 0
                               Last_Error:
                             Skip_Counter: 0
                      Exec_Master_Log_Pos: 1163
                          Relay_Log_Space: 146773
                          Until_Condition: None
                           Until_Log_File:
                            Until_Log_Pos: 0
                       Master_SSL_Allowed: No
                       Master_SSL_CA_File:
                       Master_SSL_CA_Path:
                          Master_SSL_Cert:
                        Master_SSL_Cipher:
                           Master_SSL_Key:
                    Seconds_Behind_Master: 268
            Master_SSL_Verify_Server_Cert: No
                            Last_IO_Errno: 0
                            Last_IO_Error:
                           Last_SQL_Errno: 0
                           Last_SQL_Error:
              Replicate_Ignore_Server_Ids:
                         Master_Server_Id: 1
                           Master_SSL_Crl:
                       Master_SSL_Crlpath:
                               Using_Gtid: Slave_Pos
                              Gtid_IO_Pos: 0-1-1006
            1 row in set (0.00 sec)
             {code}
            The node also hangs when shutdown.

             {code:sql}
            141009 11:38:17 [Note] ./bin/mysqld: Normal shutdown
            141009 11:38:17 [Note] WSREP: Stop replication
            141009 11:38:17 [Note] WSREP: Provider disconnect
            141009 11:38:17 [Note] WSREP: Closing send monitor...
            141009 11:38:17 [Note] WSREP: Closed send monitor.
            141009 11:38:17 [Note] WSREP: gcomm: terminating thread
            141009 11:38:17 [Note] WSREP: gcomm: joining thread
            141009 11:38:17 [Note] WSREP: gcomm: closing backend
            141009 11:38:17 [Note] WSREP: view((empty))
            141009 11:38:17 [Note] WSREP: Received self-leave message.
            141009 11:38:17 [Note] WSREP: gcomm: closed
            141009 11:38:17 [Note] WSREP: Flow-control interval: [0, 0]
            141009 11:38:17 [Note] WSREP: Received SELF-LEAVE. Closing connection.
            141009 11:38:17 [Note] WSREP: Shifting SYNCED -> CLOSED (TO: 10)
            141009 11:38:17 [Note] WSREP: RECV thread exiting 0: Success
            141009 11:38:17 [Note] WSREP: recv_thread() joined.
            141009 11:38:17 [Note] WSREP: Closing replication queue.
            141009 11:38:17 [Note] WSREP: Closing slave action queue.
            141009 11:38:19 [Note] WSREP: waiting for client connections to close: 5
             {code}
            Why this is happening

            So the actual issue , Galera sends event at the time of prepare phase
            and And creates deadlock.

            For example lets us consider the replication A -> B <==> C (A,B
            parallel replication optimistic, B,C Galera cluster nodes)
            Lets assume 2 inserts(T1 gtid x-x-1 and T2 x-x-2) from master A arrive
            to slave B.
            2nd insert prepares faster then 1st insert, So it has already sent the
            writeset to node C. Now it is the queue waiting for its turn to commit
            While the first insert does prepare on galera
            (wsrep_run_wsrep_commit), but it is stuck because T2 transaction still
            haven't run post_commit on galera
            so galera state is still in S_WAITING
            T2 cant run post_commit on galera because it is waiting for T1 commit
            , T1 cant commit because it is waiting in prepare stage for
            transaction T2 to clear the galera state.
             {code}
            Backtrace from gdb

             
            {code}
            Gtid_seq_no= 2`
            ```Thread 34 (Thread 0x7fcd966d2700 (LWP 23891)):```
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00005569d607d380 in safe_cond_wait (cond=0x7fcd854078e8,
            mp=0x7fcd85407838, file=0x5569d6240360
            "/home/sachin/10.1/server/include/mysql/psi/mysql_thread.h",
            line=1154) at /home/sachin/10.1/server/mysys/thr_mutex.c:493
            #2 0x00005569d5aec4d0 in inline_mysql_cond_wait (that=0x7fcd854078e8,
            mutex=0x7fcd85407838, src_file=0x5569d6240cb8
            "/home/sachin/10.1/server/sql/log.cc", src_line=7387) at
            /home/sachin/10.1/server/include/mysql/psi/mysql_thread.h:1154
            #3 0x00005569d5afeee5 in MYSQL_BIN_LOG::queue_for_group_commit
            (this=0x5569d692d7c0 <mysql_bin_log>, orig_entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7387
            #4 0x00005569d5aff5c9 in
            MYSQL_BIN_LOG::write_transaction_to_binlog_events (this=0x5569d692d7c0
            <mysql_bin_log>, entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7607
            #5 0x00005569d5afecff in MYSQL_BIN_LOG::write_transaction_to_binlog
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt_cache=true, using_trx_cache=true) at
            /home/sachin/10.1/server/sql/log.cc:7290
            #6 0x00005569d5af0ce6 in binlog_flush_cache (thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt=true, using_trx=true) at
            /home/sachin/10.1/server/sql/log.cc:1751
            #7 0x00005569d5af11bb in binlog_commit_flush_xid_caches
            (thd=0x7fcd84c068b0, cache_mngr=0x7fcd84c72c70, all=true, xid=2) at
            /home/sachin/10.1/server/sql/log.cc:1859
            #8 0x00005569d5b045c8 in MYSQL_BIN_LOG::log_and_order
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0, xid=2,
            all=true, need_prepare_ordered=false, need_commit_ordered=true) at
            /home/sachin/10.1/server/sql/log.cc:9575
            #9 0x00005569d5a1ec0d in ha_commit_trans (thd=0x7fcd84c068b0,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1497
            #10 0x00005569d5925e7e in trans_commit (thd=0x7fcd84c068b0) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #11 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a770, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #12 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a770,
            rgi=0x7fcd85407800) at /home/sachin/10.1/server/sql/log_event.h:1343
            #13 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #14 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #15 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85424770,
            rpt=0x7fcd85421c88) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #16 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421c88) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #17 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd85415570) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #18 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #19 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
             
            Gtid_seq_no= 1
            Thread 33 (Thread 0x7fcd9671d700 (LWP 23890)):
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00007fcd9e7778ab in gu::Lock::wait (this=0x7fcd9671a0c0,
            cond=...) at galerautils/src/gu_mutex.hpp:40
            #2 galera::Monitor<galera::ReplicatorSMM::CommitOrder>::enter
            (this=this@entry=0x7fcda12d5da0, obj=...) at
            galera/src/monitor.hpp:124
            #3 0x00007fcd9e771f28 in galera::ReplicatorSMM::pre_commit
            (this=0x7fcda12d5000, trx=0x7fcd8507e000, meta=<optimized out>) at
            galera/src/replicator_smm.cpp:796
            #5 0x00005569d59864d0 in wsrep_run_wsrep_commit (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/wsrep_hton.cc:492
            #6 0x00005569d5984d6a in wsrep_prepare (hton=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/wsrep_hton.cc:208
            #7 0x00005569d5a1e1b0 in prepare_or_error (ht=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/handler.cc:1196
            #8 0x00005569d5a1ea1c in ha_commit_trans (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1475
            #9 0x00005569d5925e7e in trans_commit (thd=0x7fcd85006a70) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #10 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a570, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #11 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a570,
            rgi=0x7fcd85407000) at /home/sachin/10.1/server/sql/log_event.h:1343
            #12 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #13 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #14 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85423870,
            rpt=0x7fcd85421a80) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #15 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421a80) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #16 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd854152f0) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #17 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #18 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
            {code}
            bbancroft Bryan Bancroft (Inactive) made changes -
            Description This is still an issue in latest GA. Can be replicated via sysbench


            In a setup where a master replicates to a Galera node acting as a slave with slave-parallel-threads > 1, the slave threads hang which brings replication to a halt.

            How to reproduce:

            Run the attached script on master.
            On slave:

            MariaDB [test]> show processlist;
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            | Id | User | Host | db | Command | Time | State | Info | Progress |
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            | 1 | system user | | NULL | Sleep | 840 | NULL | NULL | 0.000 |
            | 2 | system user | | NULL | Sleep | 840 | wsrep aborter idle | NULL | 0.000 |
            | 5 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 6 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 7 | system user | | NULL | Sleep | 838 | NULL | NULL | 0.000 |
            | 8 | system user | | NULL | Connect | 263 | Waiting for prior transaction to start commit before starting next transaction | NULL | 0.000 |
            | 9 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 10 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 11 | system user | | NULL | Connect | 263 | closing tables | NULL | 0.000 |
            | 12 | root | localhost:52597 | test | Query | 0 | init | show processlist | 0.000 |
            | 13 | system user | | NULL | Connect | 336 | Waiting for master to send event | NULL | 0.000 |
            | 14 | system user | | NULL | Connect | 336 | Slave has read all relay log; waiting for the slave I/O thread to update it | NULL | 0.000 |
            +----+-------------+-----------------+------+---------+------+--------------------------------------------------------------------------------+------------------+----------+
            12 rows in set (0.00 sec)
             {code}
            MariaDB [test]> show slave status\G
            *************************** 1. row ***************************
                           Slave_IO_State: Waiting for master to send event
                              Master_Host: 127.0.0.1
                              Master_User: root
                              Master_Port: 15999
                            Connect_Retry: 60
                          Master_Log_File: nirbhay-VirtualBox-bin.000001
                      Read_Master_Log_Pos: 146163
                           Relay_Log_File: nirbhay-VirtualBox-relay-bin.000002
                            Relay_Log_Pos: 1463
                    Relay_Master_Log_File: nirbhay-VirtualBox-bin.000001
                         Slave_IO_Running: Yes
                        Slave_SQL_Running: Yes
                          Replicate_Do_DB:
                      Replicate_Ignore_DB:
                       Replicate_Do_Table:
                   Replicate_Ignore_Table:
                  Replicate_Wild_Do_Table:
              Replicate_Wild_Ignore_Table:
                               Last_Errno: 0
                               Last_Error:
                             Skip_Counter: 0
                      Exec_Master_Log_Pos: 1163
                          Relay_Log_Space: 146773
                          Until_Condition: None
                           Until_Log_File:
                            Until_Log_Pos: 0
                       Master_SSL_Allowed: No
                       Master_SSL_CA_File:
                       Master_SSL_CA_Path:
                          Master_SSL_Cert:
                        Master_SSL_Cipher:
                           Master_SSL_Key:
                    Seconds_Behind_Master: 268
            Master_SSL_Verify_Server_Cert: No
                            Last_IO_Errno: 0
                            Last_IO_Error:
                           Last_SQL_Errno: 0
                           Last_SQL_Error:
              Replicate_Ignore_Server_Ids:
                         Master_Server_Id: 1
                           Master_SSL_Crl:
                       Master_SSL_Crlpath:
                               Using_Gtid: Slave_Pos
                              Gtid_IO_Pos: 0-1-1006
            1 row in set (0.00 sec)
             {code}
            The node also hangs when shutdown.

             {code:sql}
            141009 11:38:17 [Note] ./bin/mysqld: Normal shutdown
            141009 11:38:17 [Note] WSREP: Stop replication
            141009 11:38:17 [Note] WSREP: Provider disconnect
            141009 11:38:17 [Note] WSREP: Closing send monitor...
            141009 11:38:17 [Note] WSREP: Closed send monitor.
            141009 11:38:17 [Note] WSREP: gcomm: terminating thread
            141009 11:38:17 [Note] WSREP: gcomm: joining thread
            141009 11:38:17 [Note] WSREP: gcomm: closing backend
            141009 11:38:17 [Note] WSREP: view((empty))
            141009 11:38:17 [Note] WSREP: Received self-leave message.
            141009 11:38:17 [Note] WSREP: gcomm: closed
            141009 11:38:17 [Note] WSREP: Flow-control interval: [0, 0]
            141009 11:38:17 [Note] WSREP: Received SELF-LEAVE. Closing connection.
            141009 11:38:17 [Note] WSREP: Shifting SYNCED -> CLOSED (TO: 10)
            141009 11:38:17 [Note] WSREP: RECV thread exiting 0: Success
            141009 11:38:17 [Note] WSREP: recv_thread() joined.
            141009 11:38:17 [Note] WSREP: Closing replication queue.
            141009 11:38:17 [Note] WSREP: Closing slave action queue.
            141009 11:38:19 [Note] WSREP: waiting for client connections to close: 5
             {code}
            Why this is happening

            So the actual issue , Galera sends event at the time of prepare phase
            and And creates deadlock.

            For example lets us consider the replication A -> B <==> C (A,B
            parallel replication optimistic, B,C Galera cluster nodes)
            Lets assume 2 inserts(T1 gtid x-x-1 and T2 x-x-2) from master A arrive
            to slave B.
            2nd insert prepares faster then 1st insert, So it has already sent the
            writeset to node C. Now it is the queue waiting for its turn to commit
            While the first insert does prepare on galera
            (wsrep_run_wsrep_commit), but it is stuck because T2 transaction still
            haven't run post_commit on galera
            so galera state is still in S_WAITING
            T2 cant run post_commit on galera because it is waiting for T1 commit
            , T1 cant commit because it is waiting in prepare stage for
            transaction T2 to clear the galera state.
             {code}
            Backtrace from gdb

             
            {code}
            Gtid_seq_no= 2`
            ```Thread 34 (Thread 0x7fcd966d2700 (LWP 23891)):```
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00005569d607d380 in safe_cond_wait (cond=0x7fcd854078e8,
            mp=0x7fcd85407838, file=0x5569d6240360
            "/home/sachin/10.1/server/include/mysql/psi/mysql_thread.h",
            line=1154) at /home/sachin/10.1/server/mysys/thr_mutex.c:493
            #2 0x00005569d5aec4d0 in inline_mysql_cond_wait (that=0x7fcd854078e8,
            mutex=0x7fcd85407838, src_file=0x5569d6240cb8
            "/home/sachin/10.1/server/sql/log.cc", src_line=7387) at
            /home/sachin/10.1/server/include/mysql/psi/mysql_thread.h:1154
            #3 0x00005569d5afeee5 in MYSQL_BIN_LOG::queue_for_group_commit
            (this=0x5569d692d7c0 <mysql_bin_log>, orig_entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7387
            #4 0x00005569d5aff5c9 in
            MYSQL_BIN_LOG::write_transaction_to_binlog_events (this=0x5569d692d7c0
            <mysql_bin_log>, entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7607
            #5 0x00005569d5afecff in MYSQL_BIN_LOG::write_transaction_to_binlog
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt_cache=true, using_trx_cache=true) at
            /home/sachin/10.1/server/sql/log.cc:7290
            #6 0x00005569d5af0ce6 in binlog_flush_cache (thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt=true, using_trx=true) at
            /home/sachin/10.1/server/sql/log.cc:1751
            #7 0x00005569d5af11bb in binlog_commit_flush_xid_caches
            (thd=0x7fcd84c068b0, cache_mngr=0x7fcd84c72c70, all=true, xid=2) at
            /home/sachin/10.1/server/sql/log.cc:1859
            #8 0x00005569d5b045c8 in MYSQL_BIN_LOG::log_and_order
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0, xid=2,
            all=true, need_prepare_ordered=false, need_commit_ordered=true) at
            /home/sachin/10.1/server/sql/log.cc:9575
            #9 0x00005569d5a1ec0d in ha_commit_trans (thd=0x7fcd84c068b0,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1497
            #10 0x00005569d5925e7e in trans_commit (thd=0x7fcd84c068b0) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #11 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a770, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #12 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a770,
            rgi=0x7fcd85407800) at /home/sachin/10.1/server/sql/log_event.h:1343
            #13 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #14 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #15 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85424770,
            rpt=0x7fcd85421c88) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #16 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421c88) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #17 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd85415570) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #18 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #19 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
             
            Gtid_seq_no= 1
            Thread 33 (Thread 0x7fcd9671d700 (LWP 23890)):
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00007fcd9e7778ab in gu::Lock::wait (this=0x7fcd9671a0c0,
            cond=...) at galerautils/src/gu_mutex.hpp:40
            #2 galera::Monitor<galera::ReplicatorSMM::CommitOrder>::enter
            (this=this@entry=0x7fcda12d5da0, obj=...) at
            galera/src/monitor.hpp:124
            #3 0x00007fcd9e771f28 in galera::ReplicatorSMM::pre_commit
            (this=0x7fcda12d5000, trx=0x7fcd8507e000, meta=<optimized out>) at
            galera/src/replicator_smm.cpp:796
            #5 0x00005569d59864d0 in wsrep_run_wsrep_commit (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/wsrep_hton.cc:492
            #6 0x00005569d5984d6a in wsrep_prepare (hton=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/wsrep_hton.cc:208
            #7 0x00005569d5a1e1b0 in prepare_or_error (ht=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/handler.cc:1196
            #8 0x00005569d5a1ea1c in ha_commit_trans (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1475
            #9 0x00005569d5925e7e in trans_commit (thd=0x7fcd85006a70) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #10 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a570, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #11 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a570,
            rgi=0x7fcd85407000) at /home/sachin/10.1/server/sql/log_event.h:1343
            #12 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #13 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #14 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85423870,
            rpt=0x7fcd85421a80) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #15 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421a80) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #16 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd854152f0) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #17 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #18 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
            {code}
            Why this is happening

The actual issue is that Galera replicates the write set during the prepare phase, and this creates a deadlock.

For example, consider the replication topology A -> B <==> C, where A -> B is asynchronous parallel replication in optimistic mode and B and C are nodes of the same Galera cluster.
Assume two inserts, T1 (GTID x-x-1) and T2 (GTID x-x-2), arrive from master A at slave B.
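For illustration, a minimal sketch of how node B might be configured as a parallel asynchronous slave of A. The connection details are taken from the SHOW SLAVE STATUS output in the description; the thread count is an assumption (any value greater than 1 is enough to hit the hang), and optimistic mode matches the scenario described here:
{code}
-- Sketch only: run on node B while its slave threads are stopped.
STOP SLAVE;
SET GLOBAL slave_parallel_threads = 4;          -- assumption: any value > 1
SET GLOBAL slave_parallel_mode = 'optimistic';
CHANGE MASTER TO
  MASTER_HOST = '127.0.0.1',
  MASTER_PORT = 15999,
  MASTER_USER = 'root',
  MASTER_USE_GTID = slave_pos;
START SLAVE;
{code}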
The second insert (T2) finishes its prepare faster than the first, so it has already sent its writeset to node C and is now sitting in the queue waiting for its turn to commit (parallel replication commits transactions in GTID order, so T2 must wait for T1).
Meanwhile the first insert (T1) runs its prepare on Galera (wsrep_run_wsrep_commit), but it is stuck there because T2 has not yet run post_commit on Galera, so T2's Galera state is still S_WAITING.
T2 cannot run post_commit on Galera because it is waiting for T1 to commit, and T1 cannot commit because it is blocked in the prepare stage waiting for T2 to clear the Galera state. This is the deadlock.
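When the node is wedged in this state, the stuck applier threads are visible in SHOW PROCESSLIST (as in the description). A sketch of an equivalent query against information_schema that isolates them:
{code}
-- Sketch: list replication worker threads stuck on commit ordering.
SELECT ID, TIME, STATE
  FROM information_schema.PROCESSLIST
 WHERE USER = 'system user'
   AND (STATE LIKE 'Waiting for prior transaction%'
        OR STATE = 'closing tables');
{code}
In the hung state these threads keep the same STATE while TIME keeps growing, matching the processlist shown in the description.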
            Backtrace from gdb

             
             {code}
            Gtid_seq_no= 2
            Thread 34 (Thread 0x7fcd966d2700 (LWP 23891)):
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00005569d607d380 in safe_cond_wait (cond=0x7fcd854078e8,
            mp=0x7fcd85407838, file=0x5569d6240360
            "/home/sachin/10.1/server/include/mysql/psi/mysql_thread.h",
            line=1154) at /home/sachin/10.1/server/mysys/thr_mutex.c:493
            #2 0x00005569d5aec4d0 in inline_mysql_cond_wait (that=0x7fcd854078e8,
            mutex=0x7fcd85407838, src_file=0x5569d6240cb8
            "/home/sachin/10.1/server/sql/log.cc", src_line=7387) at
            /home/sachin/10.1/server/include/mysql/psi/mysql_thread.h:1154
            #3 0x00005569d5afeee5 in MYSQL_BIN_LOG::queue_for_group_commit
            (this=0x5569d692d7c0 <mysql_bin_log>, orig_entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7387
            #4 0x00005569d5aff5c9 in
            MYSQL_BIN_LOG::write_transaction_to_binlog_events (this=0x5569d692d7c0
            <mysql_bin_log>, entry=0x7fcd966cf440) at
            /home/sachin/10.1/server/sql/log.cc:7607
            #5 0x00005569d5afecff in MYSQL_BIN_LOG::write_transaction_to_binlog
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt_cache=true, using_trx_cache=true) at
            /home/sachin/10.1/server/sql/log.cc:7290
            #6 0x00005569d5af0ce6 in binlog_flush_cache (thd=0x7fcd84c068b0,
            cache_mngr=0x7fcd84c72c70, end_ev=0x7fcd966cf5e0, all=true,
            using_stmt=true, using_trx=true) at
            /home/sachin/10.1/server/sql/log.cc:1751
            #7 0x00005569d5af11bb in binlog_commit_flush_xid_caches
            (thd=0x7fcd84c068b0, cache_mngr=0x7fcd84c72c70, all=true, xid=2) at
            /home/sachin/10.1/server/sql/log.cc:1859
            #8 0x00005569d5b045c8 in MYSQL_BIN_LOG::log_and_order
            (this=0x5569d692d7c0 <mysql_bin_log>, thd=0x7fcd84c068b0, xid=2,
            all=true, need_prepare_ordered=false, need_commit_ordered=true) at
            /home/sachin/10.1/server/sql/log.cc:9575
            #9 0x00005569d5a1ec0d in ha_commit_trans (thd=0x7fcd84c068b0,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1497
            #10 0x00005569d5925e7e in trans_commit (thd=0x7fcd84c068b0) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #11 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a770, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #12 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a770,
            rgi=0x7fcd85407800) at /home/sachin/10.1/server/sql/log_event.h:1343
            #13 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #14 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a770, thd=0x7fcd84c068b0, rgi=0x7fcd85407800) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #15 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85424770,
            rpt=0x7fcd85421c88) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #16 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421c88) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #17 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd85415570) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #18 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #19 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
             
            Gtid_seq_no= 1
            Thread 33 (Thread 0x7fcd9671d700 (LWP 23890)):
            #0 0x00007fcda6d56415 in pthread_cond_wait@@GLIBC_2.3.2 () from
            /usr/lib/libpthread.so.0
            #1 0x00007fcd9e7778ab in gu::Lock::wait (this=0x7fcd9671a0c0,
            cond=...) at galerautils/src/gu_mutex.hpp:40
            #2 galera::Monitor<galera::ReplicatorSMM::CommitOrder>::enter
            (this=this@entry=0x7fcda12d5da0, obj=...) at
            galera/src/monitor.hpp:124
            #3 0x00007fcd9e771f28 in galera::ReplicatorSMM::pre_commit
            (this=0x7fcda12d5000, trx=0x7fcd8507e000, meta=<optimized out>) at
            galera/src/replicator_smm.cpp:796
            #5 0x00005569d59864d0 in wsrep_run_wsrep_commit (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/wsrep_hton.cc:492
            #6 0x00005569d5984d6a in wsrep_prepare (hton=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/wsrep_hton.cc:208
            #7 0x00005569d5a1e1b0 in prepare_or_error (ht=0x7fcda583e270,
            thd=0x7fcd85006a70, all=true) at
            /home/sachin/10.1/server/sql/handler.cc:1196
            #8 0x00005569d5a1ea1c in ha_commit_trans (thd=0x7fcd85006a70,
            all=true) at /home/sachin/10.1/server/sql/handler.cc:1475
            #9 0x00005569d5925e7e in trans_commit (thd=0x7fcd85006a70) at
            /home/sachin/10.1/server/sql/transaction.cc:235
            #10 0x00005569d5b1b1fa in Xid_log_event::do_apply_event
            (this=0x7fcd8542a570, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/log_event.cc:7720
            #11 0x00005569d5743fa1 in Log_event::apply_event (this=0x7fcd8542a570,
            rgi=0x7fcd85407000) at /home/sachin/10.1/server/sql/log_event.h:1343
            #12 0x00005569d573987e in apply_event_and_update_pos_apply
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000, reason=0)
            at /home/sachin/10.1/server/sql/slave.cc:3479
            #13 0x00005569d5739deb in apply_event_and_update_pos_for_parallel
            (ev=0x7fcd8542a570, thd=0x7fcd85006a70, rgi=0x7fcd85407000) at
            /home/sachin/10.1/server/sql/slave.cc:3623
            #14 0x00005569d597bfbe in rpt_handle_event (qev=0x7fcd85423870,
            rpt=0x7fcd85421a80) at /home/sachin/10.1/server/sql/rpl_parallel.cc:50
            #15 0x00005569d597ed57 in handle_rpl_parallel_thread
            (arg=0x7fcd85421a80) at
            /home/sachin/10.1/server/sql/rpl_parallel.cc:1258
            #16 0x00005569d5d42aa4 in pfs_spawn_thread (arg=0x7fcd854152f0) at
            /home/sachin/10.1/server/storage/perfschema/pfs.cc:1861
            #17 0x00007fcda6d5057f in start_thread () from /usr/lib/libpthread.so.0
            #18 0x00007fcda5f0f0e3 in clone () from /usr/lib/libc.so.6
            {code}
            elenst Elena Stepanova made changes -
            Status Open [ 1 ] Needs Feedback [ 10501 ]
            maxmether Max Mether made changes -
            Assignee Jan Lindström [ jplindst ]
            jplindst Jan Lindström (Inactive) made changes -
            Fix Version/s N/A [ 14700 ]
            Resolution Incomplete [ 4 ]
            Status Needs Feedback [ 10501 ] Closed [ 6 ]
            jplindst Jan Lindström (Inactive) made changes -
            Resolution Incomplete [ 4 ]
            Status Closed [ 6 ] Stalled [ 10000 ]
            jplindst Jan Lindström (Inactive) made changes -
            Status Stalled [ 10000 ] In Testing [ 10301 ]
            jplindst Jan Lindström (Inactive) made changes -
            Assignee Jan Lindström [ jplindst ] Ramesh Sivaraman [ JIRAUSER48189 ]
            jplindst Jan Lindström (Inactive) made changes -
            Priority Major [ 3 ] Critical [ 2 ]
            ramesh Ramesh Sivaraman made changes -
            Attachment n2.cnf [ 62200 ]
            Attachment n1.cnf [ 62201 ]
            Attachment my.cnf [ 62202 ]
            ramesh Ramesh Sivaraman made changes -
            Attachment node1.err [ 62203 ]
            ramesh Ramesh Sivaraman made changes -
            Assignee Ramesh Sivaraman [ JIRAUSER48189 ] Jan Lindström [ jplindst ]
            Status In Testing [ 10301 ] Stalled [ 10000 ]
            jplindst Jan Lindström (Inactive) made changes -
            Assignee Jan Lindström [ jplindst ] Seppo Jaakola [ seppo ]
            seppo Seppo Jaakola made changes -
            Assignee Seppo Jaakola [ seppo ] Mario Karuza [ mkaruza ]
            jplindst Jan Lindström (Inactive) made changes -
            Assignee Mario Karuza [ mkaruza ] Jan Lindström [ jplindst ]
            jplindst Jan Lindström (Inactive) made changes -
            Status Stalled [ 10000 ] In Progress [ 3 ]
            julien.fritsch Julien Fritsch made changes -
            Fix Version/s 10.4 [ 22408 ]
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Fix Version/s N/A [ 14700 ]
            jplindst Jan Lindström (Inactive) made changes -
            issue.field.resolutiondate 2022-03-18 08:02:52.0 2022-03-18 08:02:52.605
            jplindst Jan Lindström (Inactive) made changes -
            Fix Version/s 10.4.25 [ 27510 ]
            Fix Version/s 10.5.16 [ 27508 ]
            Fix Version/s 10.6.8 [ 27506 ]
            Fix Version/s 10.7.4 [ 27504 ]
            Fix Version/s 10.4 [ 22408 ]
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Resolution Fixed [ 1 ]
            Status In Progress [ 3 ] Closed [ 6 ]
            mariadb-jira-automation Jira Automation (IT) made changes -
            Zendesk Related Tickets 185420

            People

              jplindst Jan Lindström (Inactive)
              bbancroft Bryan Bancroft (Inactive)
Votes: 1
Watchers: 11
