
MDEV-9162: MariaDB Galera Cluster memory leak on async slave node

Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version: 10.0.22-galera
    • Fix Version: 10.0.23-galera
    • Components: Galera, Replication
    • Labels: None
    • Environment:
      cat /etc/debian_version
      7.9
      root@node1:~# uname -a
      Linux node1 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u3 x86_64 GNU/Linux
    • 10.0.23

    Description

      An async slave node clearly leaks memory on some DDL statements coming from the async master (also MariaDB 10).
      Steps to reproduce:
      – node4 is an async master to node1; node1 and node2 are Galera cluster members. Run this simple query many times:

      root@node4:~# mysqlslap --delimiter=";" --number-of-queries=100000 --create-schema=test --query="CREATE TABLE IF NOT EXISTS xxxx (id int)"

      (...)

      – the more times the above query is executed, the bigger the memory usage difference between the nodes becomes and the clearer the memory leak is:

      node1 {root} ((none)) > select @@version,@@version_comment;
      +------------------------------------+-------------------------------------------------------+
      | @@version                          | @@version_comment                                     |
      +------------------------------------+-------------------------------------------------------+
      | 10.0.22-MariaDB-1~wheezy-wsrep-log | mariadb.org binary distribution, wsrep_25.11.r21a2415 |
      +------------------------------------+-------------------------------------------------------+
      1 row in set (0.00 sec)
      node1 {root} ((none)) > show status like 'ws%version';
      +------------------------+------------+
      | Variable_name          | Value      |
      +------------------------+------------+
      | wsrep_protocol_version | 7          |
      | wsrep_provider_version | 3.9(rXXXX) |
      +------------------------+------------+
      2 rows in set (0.00 sec)
      node1 {root} ((none)) > \! ps aux|egrep 'RSS|mysqld '|grep -v grep
      USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
      mysql    10140  2.0 66.2 2946844 1365228 ?     Sl   Nov21  10:18 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --wsrep_provider=/usr/lib/galera/libgalera_smm.so --log-error=/var/

      node1 {root} ((none)) > show global status like 'mem%';
      +---------------+-----------+
      | Variable_name | Value     |
      +---------------+-----------+
      | Memory_used   | 274914112 |
      +---------------+-----------+
      1 row in set (0.01 sec)

      – normal cluster node:

      node2 {root} ((none)) > \! ps aux|egrep 'RSS|mysqld '|grep -v grep
      USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
      mysql     6781  0.4 26.9 1066696 270304 pts/1  Sl   Nov19  16:29 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysql --wsrep_provider=/usr/lib/galera/libgalera_smm.so --log-error=/var/

      node2 {root} ((none)) > show global status like 'mem%';
      +---------------+-----------+
      | Variable_name | Value     |
      +---------------+-----------+
      | Memory_used   | 274792608 |
      +---------------+-----------+
      1 row in set (0.00 sec)
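      To make the comparison above easier to repeat, here is a minimal shell sketch that samples mysqld's RSS and the Memory_used status variable on both nodes while the mysqlslap load runs. It assumes SSH access to node1/node2 and mysql client credentials in ~/.my.cnf; both are assumptions, adjust to your setup:

      #!/bin/bash
      # Sample mysqld RSS (kB, from ps) and the Memory_used status variable
      # on both Galera nodes once a minute while the DDL load is running.
      while true; do
        for node in node1 node2; do
          rss=$(ssh "$node" "ps -C mysqld -o rss=")
          mem=$(ssh "$node" "mysql -NBe \"SHOW GLOBAL STATUS LIKE 'Memory_used'\"" | awk '{print $2}')
          echo "$(date '+%F %T') $node rss_kb=$rss memory_used=$mem"
        done
        sleep 60
      done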

      – node1 shutdown sequence diagnostic output:

      151122  0:50:06 [Note] Slave I/O thread killed while reading event
      151122  0:50:06 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000005', position 34245035
      151122  0:50:06 [Note] WSREP: dtor state: CLOSED
      151122  0:50:06 [Note] WSREP: mon: entered 240000 oooe fraction 0 oool fraction 0
      151122  0:50:07 [Note] WSREP: mon: entered 240000 oooe fraction 0 oool fraction 0
      151122  0:50:07 [Note] WSREP: mon: entered 241908 oooe fraction 0 oool fraction 0
      151122  0:50:07 [Note] WSREP: cert index usage at exit 0
      151122  0:50:07 [Note] WSREP: cert trx map usage at exit 96
      151122  0:50:07 [Note] WSREP: deps set usage at exit 0
      151122  0:50:07 [Note] WSREP: avg deps dist 1
      151122  0:50:07 [Note] WSREP: avg cert interval 0
      151122  0:50:07 [Note] WSREP: cert index size 2
      151122  0:50:07 [Note] WSREP: Service thread queue flushed.
      151122  0:50:07 [Note] WSREP: wsdb trx map usage 240001 conn query map usage 1
      151122  0:50:07 [Note] WSREP: MemPool(LocalTrxHandle): hit ratio: 0.499799, misses: 240097, in use: 240001, in pool: 96
      151122  0:50:08 [Note] WSREP: MemPool(SlaveTrxHandle): hit ratio: 0, misses: 0, in use: 0, in pool: 0
      151122  0:50:08 [Note] WSREP: Shifting CLOSED -> DESTROYED (TO: 1008270)
      151122  0:50:08 [Note] WSREP: Flushing memory map to disk...
      151122  0:50:09 [Note] InnoDB: FTS optimize thread exiting.
      151122  0:50:09 [Note] InnoDB: Starting shutdown...
      151122  0:50:10 [Note] InnoDB: Shutdown completed; log sequence number 555064446
      151122  0:50:10 [Note] /usr/sbin/mysqld: Shutdown complete

      – note the "in use: 240001" structures above...
      – node2 shutdown diagnostics:

      151122  0:52:48 [Note] WSREP: dtor state: CLOSED
      151122  0:52:48 [Note] WSREP: mon: entered 816552 oooe fraction 0 oool fraction 0
      151122  0:52:48 [Note] WSREP: mon: entered 816552 oooe fraction 0 oool fraction 0
      151122  0:52:48 [Note] WSREP: mon: entered 805542 oooe fraction 0 oool fraction 1.2414e-06
      151122  0:52:48 [Note] WSREP: cert index usage at exit 0
      151122  0:52:48 [Note] WSREP: cert trx map usage at exit 0
      151122  0:52:48 [Note] WSREP: deps set usage at exit 0
      151122  0:52:48 [Note] WSREP: avg deps dist 1
      151122  0:52:48 [Note] WSREP: avg cert interval 0
      151122  0:52:48 [Note] WSREP: cert index size 2
      151122  0:52:48 [Note] WSREP: Service thread queue flushed.
      151122  0:52:48 [Note] WSREP: wsdb trx map usage 0 conn query map usage 0
      151122  0:52:48 [Note] WSREP: MemPool(LocalTrxHandle): hit ratio: 0, misses: 0, in use: 0, in pool: 0
      151122  0:52:48 [Note] WSREP: MemPool(SlaveTrxHandle): hit ratio: 0.999226, misses: 632, in use: 0, in pool: 632
      151122  0:52:48 [Note] WSREP: Shifting CLOSED -> DESTROYED (TO: 1008270)

      I cannot reproduce the same leak on PXC 5.6.
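      As a side note on checking for this pattern: the MemPool and wsdb accounting lines shown in the shutdown diagnostics above can be pulled from the error log after a clean shutdown. A one-liner sketch (the log path is only an example, use whatever --log-error points to):

      grep -E 'MemPool|wsdb trx map usage' /var/log/mysql/error.log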

          Activity

            Reproducible on 10.0-galera commit 3eb8bc01b6876dc9dbacb82179127a58f4b86e79

             I don't know whether it is a real permanent leak, a temporary growth that will return to normal later, or some wsrep-related buffer filling up that will stop once the buffer is full, but the difference between a Galera node (even in a one-node cluster) and the same binary running in standalone mode, without wsrep, is striking:

            The node soon after replication start

            27971  921120 106740 sql/mysqld --datadir=data1 --wsrep_provider=/usr/lib/galera/libgalera_smm.so --wsrep_sst_method=rsync --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0 --log-error=log.err --basedir=/home/elenst/git/10.0-galera --port=8306 --loose-lc-messages-dir=sql/share --socket=/tmp/elenst-galera-1.sock --tmpdir=data1/tmp --general-log=1 --wsrep_cluster_address=gcomm:// --server-id=1 --core --log-bin=master-bin --binlog-format=row --log-bin=master-bin --log-slave-updates

            ...

            The same node and another slave which was started a bit later

            27971 1297952 345904 sql/mysqld --datadir=data1 --wsrep_provider=/usr/lib/galera/libgalera_smm.so --wsrep_sst_method=rsync --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0 --log-error=log.err --basedir=/home/elenst/git/10.0-galera --port=8306 --loose-lc-messages-dir=sql/share --socket=/tmp/elenst-galera-1.sock --tmpdir=data1/tmp --general-log=1 --wsrep_cluster_address=gcomm:// --server-id=1 --core --log-bin=master-bin --binlog-format=row --log-bin=master-bin --log-slave-updates
            28300  715204  83748 sql/mysqld --no-defaults --basedir=/home/elenst/git/10.0-galera --datadir=data2 --log-error=data2/log.err --loose-lc-messages-dir=sql/share --loose-language=sql/share/english --port=3307 --socket=data2/tmp/mysql.sock --tmpdir=data2/tmp --loose-core --log-bin --binlog-format=row --log-slave-updates --server-id=200 --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0

            ...

            The node keeps growing, the other slave has only grown a little

            27971 1322528 361600 sql/mysqld --datadir=data1 --wsrep_provider=/usr/lib/galera/libgalera_smm.so --wsrep_sst_method=rsync --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0 --log-error=log.err --basedir=/home/elenst/git/10.0-galera --port=8306 --loose-lc-messages-dir=sql/share --socket=/tmp/elenst-galera-1.sock --tmpdir=data1/tmp --general-log=1 --wsrep_cluster_address=gcomm:// --server-id=1 --core --log-bin=master-bin --binlog-format=row --log-bin=master-bin --log-slave-updates
            28300  719300 116052 sql/mysqld --no-defaults --basedir=/home/elenst/git/10.0-galera --datadir=data2 --log-error=data2/log.err --loose-lc-messages-dir=sql/share --loose-language=sql/share/english --port=3307 --socket=data2/tmp/mysql.sock --tmpdir=data2/tmp --loose-core --log-bin --binlog-format=row --log-slave-updates --server-id=200 --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0

            ...

            The node keeps growing, the other slave stopped growing

            27971 2190880 893276 sql/mysqld --datadir=data1 --wsrep_provider=/usr/lib/galera/libgalera_smm.so --wsrep_sst_method=rsync --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0 --log-error=log.err --basedir=/home/elenst/git/10.0-galera --port=8306 --loose-lc-messages-dir=sql/share --socket=/tmp/elenst-galera-1.sock --tmpdir=data1/tmp --general-log=1 --wsrep_cluster_address=gcomm:// --server-id=1 --core --log-bin=master-bin --binlog-format=row --log-bin=master-bin --log-slave-updates
            28300  719300 119176 sql/mysqld --no-defaults --basedir=/home/elenst/git/10.0-galera --datadir=data2 --log-error=data2/log.err --loose-lc-messages-dir=sql/share --loose-language=sql/share/english --port=3307 --socket=data2/tmp/mysql.sock --tmpdir=data2/tmp --loose-core --log-bin --binlog-format=row --log-slave-updates --server-id=200 --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0

            ... and counting.
            Also, the replication on the node is incredibly slow:

                      Exec_Master_Log_Pos: 21200089
                          Relay_Log_Space: 1692002518
                    Seconds_Behind_Master: 14566

            The other slave caught up with the master hours ago.
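             To watch how far the wsrep node lags behind, a small sketch that polls the relevant SHOW SLAVE STATUS fields every 30 seconds (the socket path is the one from the wsrep node's command line above):

             while true; do
               mysql --socket=/tmp/elenst-galera-1.sock -e "SHOW SLAVE STATUS\G" \
                 | grep -E 'Exec_Master_Log_Pos|Relay_Log_Space|Seconds_Behind_Master'
               sleep 30
             done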

             elenst Elena Stepanova added a comment

             For the record, I left it running, and after 1 day it still hasn't caught up with the master, and it is already at 11 GB:

            27971 11071008 5711952 sql/mysqld ...

                      Read_Master_Log_Pos: 618258865
            ...
                      Exec_Master_Log_Pos: 169162951
            ...
                    Seconds_Behind_Master: 82883

             elenst Elena Stepanova added a comment

             elenst: What does your topology look like? M --(async replication)--> (Galera cluster)?

             nirbhay_c Nirbhay Choubey (Inactive) added a comment
             elenst Elena Stepanova added a comment - edited

             Three local servers, all built from the 10.0-galera tree, with startup options as below (even for those that don't pass "--no-defaults" explicitly, there are no cnf files to pick up).

            Topology is M=>S1, M=>S2 where "=>" stands for the regular async replication.

             S1 (pid 27971) is a node in a single-node cluster (basically a standalone server, but started with the wsrep* options shown below; it reports a cluster size of 1).
             S2 (pid 28300) is a standalone server started without wsrep* options, otherwise seemingly identical unless I missed something.
             M (pid 28032) is a standalone server started without wsrep* options.

            Executed on both slaves:

            change master to master_host='127.0.0.1', master_port=3306, master_user='root';
            start slave;

             The master has run the slap flow suggested in the description (several million queries in total). It all went reasonably fast, and it has been idle ever since.
             S2 caught up with the master quite fast (it was already idle by the time I posted my first comment).
             S1 is still replicating.
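             For reference, the "slap flow" amounts to repeating the mysqlslap command from the description against the master; a minimal sketch, assuming the master at 127.0.0.1:3306 with the root user (as in the CHANGE MASTER statement above) and an arbitrary loop count:

             # Accumulate a few million CREATE TABLE IF NOT EXISTS statements on the async master.
             for i in $(seq 1 30); do
               mysqlslap --host=127.0.0.1 --port=3306 --user=root --delimiter=";" \
                         --number-of-queries=100000 --create-schema=test \
                         --query="CREATE TABLE IF NOT EXISTS xxxx (id int)"
             done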

            Here is the current ps:

            elenst   27971  2.8 69.9 11230752 5732500 pts/0 Sl  Nov22  40:54 10.0-galera/sql/mysqld --datadir=10.0-galera/data1 --wsrep_provider=/usr/lib/galera/libgalera_smm.so --wsrep_sst_method=rsync --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0 --log-error=log.err --basedir=10.0-galera --port=8306 --loose-lc-messages-dir=10.0-galera/sql/share --socket=/tmp/elenst-galera-1.sock --tmpdir=10.0-galera/data1/tmp --general-log=1 --wsrep_cluster_address=gcomm:// --server-id=1 --core --log-bin=master-bin --binlog-format=row --log-bin=master-bin --log-slave-updates
            elenst   28032  3.9  0.9 715496 75844 pts/0    Sl   Nov22  56:35 10.0-galera/sql/mysqld --no-defaults --basedir=10.0-galera --datadir=10.0-galera/data --log-error=10.0-galera/data/log.err --loose-lc-messages-dir=10.0-galera/sql/share --loose-language=10.0-galera/sql/share/english --port=3306 --socket=10.0-galera/data/tmp/mysql.sock --tmpdir=10.0-galera/data/tmp --loose-core --log-bin --binlog-format=row --log-slave-updates --server-id=100
            elenst   28300  4.3  1.4 719300 117480 pts/0   Sl   Nov22  59:29 10.0-galera/sql/mysqld --no-defaults --basedir=10.0-galera --datadir=10.0-galera/data2 --log-error=10.0-galera/data2/log.err --loose-lc-messages-dir=10.0-galera/sql/share --loose-language=10.0-galera/sql/share/english --port=3307 --socket=10.0-galera/data2/tmp/mysql.sock --tmpdir=10.0-galera/data2/tmp --loose-core --log-bin --binlog-format=row --log-slave-updates --server-id=200 --core --default-storage-engine=InnoDB --innodb_autoinc_lock_mode=2 --innodb_locks_unsafe_for_binlog=1 --innodb_flush_log_at_trx_commit=0

             Before that, I tried it with a 2-node cluster, M=>S1, S1<->S2, where M was a standalone server while S1 and S2 were Galera nodes ("=>" stands for async replication and "<->" for Galera replication). I saw signs of the same memory growth as described, but I stopped it after S1 hit 1.5 GB, so I don't know whether it would have kept growing, and I didn't check the speed of the async replication back then.


            Duplicate of MDEV-8965.

            nirbhay_c Nirbhay Choubey (Inactive) added a comment
            nirbhay_c Nirbhay Choubey (Inactive) added a comment - http://lists.askmonty.org/pipermail/commits/2015-December/008722.html

            Ok to push.

            jplindst Jan Lindström (Inactive) added a comment
            nirbhay_c Nirbhay Choubey (Inactive) added a comment - https://github.com/MariaDB/server/commit/18173ddfc4081407832d9a6703d1b8356b7defe9

            People

              Assignee: nirbhay_c Nirbhay Choubey (Inactive)
              Reporter: przemek@mysqlmaniac.com Przemek
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved:
