[MDEV-14050] Memory not freed on memory table drop Created: 2017-10-11  Updated: 2020-08-25  Resolved: 2019-04-30

Status: Closed
Project: MariaDB Server
Component/s: Data Manipulation - Delete
Affects Version/s: 10.2.9
Fix Version/s: 10.2.23, 10.3.14, 10.4.5

Type: Bug Priority: Critical
Reporter: Silviu Dan Tanasie Assignee: Oleksandr Byelkin
Resolution: Not a Bug Votes: 4
Labels: None
Environment:

MariaDB 10.2.9 on Centos 7.4


Attachments: Text File Global.txt    

 Description   

The issue is very simple.

I have multiple tables using the MEMORY storage engine. These tables range in size from 2 KB to 500 MB. They all have about 15 plain INT columns plus one BIGINT UNSIGNED column, with a HASH index on every column.

I just updated from MariaDB 10.1.28 to MariaDB 10.2.9 and there is a major issue I have encountered:
Using the DROP TABLE <tablename> and TRUNCATE TABLE <tablename> commands does not free server memory for tables using ENGINE=MEMORY. As a result, the mysqld instance keeps growing as I delete and create new tables, until the server crashes. Filling up my server's memory takes about one day.

The server also has replication enabled, but not on the database that uses MEMORY tables.

Edit: After further investigation, it seems the memory is indeed freed, but only after some time. Thus, if you drop a large number of tables and create a large number of new ones in a short time span using the MEMORY engine, there is a high risk of running out of memory.



 Comments   
Comment by Elena Stepanova [ 2017-10-12 ]

Starchaser,

I can reproduce the difference; it seems to be related to 10.1 being linked with libjemalloc while 10.2 is not.
But I don't see the actual harm described by this part:

Thus if you delete say a large number of tables and create a large number of tables in short time-span using Engine Memory, there is a high risk of running out of memory

Yes, the memory doesn't appear to be freed right away when I drop a table; but if I create a new one afterwards, the memory is reused, not allocated again. Thus there is no danger of running out of memory, unless you have more existing tables than the machine can handle. Here is how it looks:

MariaDB [test]> select @@max_heap_table_size, @@tmp_table_size;
+-----------------------+------------------+
| @@max_heap_table_size | @@tmp_table_size |
+-----------------------+------------------+
|             503316480 |        503316480 |
+-----------------------+------------------+
1 row in set (0.00 sec)
 
MariaDB [test]> select @@pid_file;
+----------------------------+
| @@pid_file                 |
+----------------------------+
| /data/bld/10.2/data/ws.pid |
+----------------------------+
1 row in set (0.00 sec)
 
MariaDB [test]> system cat /data/bld/10.2/data/ws.pid 
18098

MariaDB [test]> system top -n 1 -b -p 18098
top - 00:37:49 up 16 days, 17 min, 11 users,  load average: 0.12, 0.37, 0.90
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  0.8 sy,  0.1 ni, 94.4 id,  2.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  16418816 total, 15306992 used,  1111824 free,   172968 buffers
KiB Swap: 15625212 total,   752520 used, 14872692 free. 10137040 cached Mem
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18098 elenst    20   0 1931812 330156  23560 S   0.0  2.0   0:00.63 mysqld
MariaDB [test]> 

MariaDB [test]> create table t1 (f bigint unsigned, f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int, f9 int, f10 int, f11 int, f12 int, f13 int, f14 int, f15 int, key(f), key(f1), key(f2), key(f3), key(f4), key(f5), key(f6), key(f7), key(f8), key(f9), key(f10), key(f11), key(f12), key(f13), key(f14), key(f15)) engine=Memory;
Query OK, 0 rows affected (0.03 sec)
 
MariaDB [test]> insert into t1 select seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq from seq_1_to_1000000;
Query OK, 1000000 rows affected (8.45 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

MariaDB [test]> system top -n 1 -b -p 18098
top - 00:38:21 up 16 days, 17 min, 11 users,  load average: 0.21, 0.36, 0.88
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  0.8 sy,  0.1 ni, 94.4 id,  2.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  16418816 total, 15752568 used,   666248 free,   173036 buffers
KiB Swap: 15625212 total,   752512 used, 14872700 free. 10137092 cached Mem
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18098 elenst    20   0 2325028 778072  23744 S   0.0  4.7   0:09.12 mysqld
MariaDB [test]> 

MariaDB [test]> create table t2 (f bigint unsigned, f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int, f9 int, f10 int, f11 int, f12 int, f13 int, f14 int, f15 int, key(f), key(f1), key(f2), key(f3), key(f4), key(f5), key(f6), key(f7), key(f8), key(f9), key(f10), key(f11), key(f12), key(f13), key(f14), key(f15)) engine=Memory;
Query OK, 0 rows affected (0.05 sec)
 
MariaDB [test]> insert into t2 select seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq from seq_1_to_1000000;
Query OK, 1000000 rows affected (8.44 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

MariaDB [test]> system top -n 1 -b -p 18098
top - 00:38:45 up 16 days, 17 min, 11 users,  load average: 0.66, 0.45, 0.90
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  0.8 sy,  0.1 ni, 94.4 id,  2.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  16418816 total, 16201056 used,   217760 free,   173076 buffers
KiB Swap: 15625212 total,   752512 used, 14872700 free. 10137104 cached Mem
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18098 elenst    20   0 2783780 1.169g  23744 S   0.0  7.5   0:17.60 mysqld
MariaDB [test]> 

MariaDB [test]> create table t3 (f bigint unsigned, f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int, f9 int, f10 int, f11 int, f12 int, f13 int, f14 int, f15 int, key(f), key(f1), key(f2), key(f3), key(f4), key(f5), key(f6), key(f7), key(f8), key(f9), key(f10), key(f11), key(f12), key(f13), key(f14), key(f15)) engine=Memory;
Query OK, 0 rows affected (0.05 sec)
 
MariaDB [test]> insert into t3 select seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq from seq_1_to_1000000;
Query OK, 1000000 rows affected (8.44 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

MariaDB [test]> system top -n 1 -b -p 18098
top - 00:39:09 up 16 days, 18 min, 11 users,  load average: 0.73, 0.48, 0.90
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  0.8 sy,  0.1 ni, 94.4 id,  2.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  16418816 total, 16248776 used,   170040 free,   165092 buffers
KiB Swap: 15625212 total,   753916 used, 14871296 free.  9746420 cached Mem
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18098 elenst    20   0 3242532 1.596g  23744 S   0.0 10.2   0:26.07 mysqld
MariaDB [test]> 

MariaDB [test]> drop table t1;
Query OK, 0 rows affected (2.65 sec)

MariaDB [test]> system top -n 1 -b -p 18098
top - 00:39:20 up 16 days, 18 min, 11 users,  load average: 0.70, 0.48, 0.90
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  0.8 sy,  0.1 ni, 94.4 id,  2.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  16418816 total, 16247524 used,   171292 free,   165136 buffers
KiB Swap: 15625212 total,   753916 used, 14871296 free.  9746400 cached Mem
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18098 elenst    20   0 3242532 1.597g  23744 S   0.0 10.2   0:28.73 mysqld
MariaDB [test]> 

MariaDB [test]> create table t4 (f bigint unsigned, f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int, f9 int, f10 int, f11 int, f12 int, f13 int, f14 int, f15 int, key(f), key(f1), key(f2), key(f3), key(f4), key(f5), key(f6), key(f7), key(f8), key(f9), key(f10), key(f11), key(f12), key(f13), key(f14), key(f15)) engine=Memory;
Query OK, 0 rows affected (0.02 sec)
 
MariaDB [test]> insert into t4 select seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq from seq_1_to_1000000;
Query OK, 1000000 rows affected (8.38 sec)
Records: 1000000  Duplicates: 0  Warnings: 0

MariaDB [test]> system top -n 1 -b -p 18098
top - 00:39:44 up 16 days, 18 min, 11 users,  load average: 0.67, 0.49, 0.89
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  0.8 sy,  0.1 ni, 94.4 id,  2.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  16418816 total, 16248352 used,   170464 free,   165176 buffers
KiB Swap: 15625212 total,   753916 used, 14871296 free.  9746416 cached Mem
 
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18098 elenst    20   0 3242532 1.597g  23808 S   0.0 10.2   0:37.15 mysqld
MariaDB [test]> 

Could you please double-check that you actually have the extra consumption in the described scenario ("if you delete a large number of tables and create a large number of tables in short time-span"), and show the numbers? And if you do, please specify which version of jemalloc you have on the machine, and attach your cnf file(s).
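For anyone checking this on their own build: one way to see whether a mysqld binary was linked against jemalloc is to inspect its dynamic dependencies. A minimal sketch (the binary path is an assumption; adjust it to your installation):

```shell
# Check whether the mysqld binary is dynamically linked against libjemalloc.
# The fallback path /usr/sbin/mysqld is an assumption; adjust to your install.
mysqld_bin=$(command -v mysqld || echo /usr/sbin/mysqld)
ldd "$mysqld_bin" 2>/dev/null | grep -i jemalloc || echo "not linked with jemalloc"
```

Note that an allocator loaded at runtime (mysqld_safe's malloc-lib, or LD_PRELOAD) will not appear in ldd output; for a running server, /proc/<pid>/maps is the place to look instead.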

Comment by Silviu Dan Tanasie [ 2017-10-16 ]

Thank you!

Will do so.

I can confirm I have the extra consumption. Ever since MariaDB 10.2, the mysql service eventually crashes if I delete and recreate a large number of tables using ENGINE=MEMORY. The system logs show that the OS ran out of memory and had to force-close the mysql service in order to free up system memory.

I know it's not because there are too many tables: normally they occupy about 8 GB of RAM, and the system has 16 GB. If I drop them and recreate them, MariaDB eats up about 16 GB of RAM and the OS has to force-restart it to avoid running out of memory. After the MariaDB restart, if I recreate the tables, they occupy 8 GB of RAM, as they should.

I will run a test and provide the required information.

Comment by Silviu Dan Tanasie [ 2017-10-24 ]

I have tried to reproduce the bug but it's very difficult.

Mind you, this table creation, dropping and recreation is done through PHP. However, dropping a table through the MariaDB console has the same effects and results in my tests.

I have tried a simple MEMORY table with 4 INT columns: creating, dropping and recreating works as expected (no memory issues).
Then I tried to create a table like the ones that exhibit this problem, populate it with data, then drop it and create it again. Again, everything works as expected (no memory issues).
I have tried to export one of the existing tables and then import it under a different name. Creating the new table, dropping it and recreating it works as expected (no memory issue).

I have tried to export one of the existing tables and then drop it. Memory is not freed. Importing the table leads to additional memory usage. Dropping the table again afterwards frees the memory, and importing it again incurs no extra memory usage.
So if I start with 100 MB memory usage and tables a, b, c:
I drop table a. Memory usage: 100 MB
I import table a. Memory usage: 120 MB
I drop table a. Memory usage: 100 MB
I import table a. Memory usage: 100 MB

Right now this is what MariaDB reports and size for my memory tables:

 SELECT table_schema "database", sum(data_length + index_length)/1024/1024 "size in MB" FROM information_schema.TABLES GROUP BY table_schema;  
+--------------------+---------------+
| database           | size in MB    |
+--------------------+---------------+
| information_schema | 0.17187500    |
| mysql              | 0.81905174    |
| db_arch            | 0.92187500    |
| db_buy             | 0.03125000    |
| db_db              | 1.71875000    |
| db_stuff           | 136.75000000  |
| db_people          | 0.10937500    |
| db_site            | 0.17187500    |
| db_temp            | 4586.49823952 | <- where the memory tables are
| db_test            | 3.43750000    |
| performance_schema | 0.00000000    |
| other              | 0.39062500    |
+--------------------+---------------+

Yet pmap for mysqld reports total 22006888K
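For reference, the totals reported by pmap and top above can also be read straight from /proc. A minimal sketch, assuming a single mysqld process (the pidof lookup is illustrative):

```shell
# Read the virtual size (VmSize, what pmap's "total" roughly corresponds to)
# and resident set size (VmRSS, top's RES column) of mysqld from /proc.
# Assumes a single mysqld process; the pidof lookup is illustrative.
pid=$(pidof mysqld)
grep -E 'VmSize|VmRSS' "/proc/$pid/status"
```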

Other info:

MariaDB [(none)]> select @@max_heap_table_size, @@tmp_table_size;
+-----------------------+------------------+
| @@max_heap_table_size | @@tmp_table_size |
+-----------------------+------------------+
|            5368709120 |         16777216 |
+-----------------------+------------------+
 
MariaDB [(none)]> select @@pid_file;
+-----------------------------------+
| @@pid_file                        |
+-----------------------------------+
| /var/lib/mysql/db_master.pid |
+-----------------------------------+

As for the above tests:

starting with pmap mysql memory:  22006888K
running create table t1
running insert table t1
pmap mysql memory:  22400104K
running create table t2
running insert table t2
pmap mysql memory:   22858856K
running create table t3
running insert table t3
pmap mysql memory:    23317608K
running drop table t1
pmap mysql memory:    23317608K
running create table t4
running insert table t4
pmap mysql memory:    23317608K

At this point

Comment by Elena Stepanova [ 2017-10-24 ]

Regarding the 20G reported by pmap, I'd expect most of it to be various buffers, mainly the InnoDB buffer pool, but others as well. I think it also includes libraries.

For the rest, which part of the output above do you consider to be a leak? The last block with pmap results seems reasonable: you created three tables, and consumption grew three times; you dropped one table and created another, and consumption didn't grow any further. As for drop/import: I'm not quite sure what you mean by "import" here; can you provide the exact commands that you used?

Comment by Silviu Dan Tanasie [ 2017-10-28 ]

I've finally been able to reproduce the bug, but only when creating and dropping the tables via a PHP script. I'm using PHP 7.1.

By manually running this script several times I was able to fill my server's memory to the point where it had to restart the mysql service (MariaDB).

<?php
$cons = mysqli_init();
$user="user";
$pass="somepass";
$database="test";
$host="192.168.1.1";
$port="3306";
mysqli_real_connect($cons, $host, $user, $pass, $database, $port);
 
$sql="CALL delete_tbls;";
mysqli_query($cons,$sql);
 
for($i=1;$i<10;$i++)
{
	//$sql="DROP TABLE IF EXISTS t".$i."";
	//mysqli_query($cons,$sql);
	$sql="create table t".$i." (f bigint unsigned, f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int, f9 int, f10 int, f11 int, f12 int, f13 int, f14 int, f15 int, key(f), key(f1), key(f2), key(f3), key(f4), key(f5), key(f6), key(f7), key(f8), key(f9), key(f10), key(f11), key(f12), key(f13), key(f14), key(f15)) engine=Memory;";
	mysqli_query($cons,$sql);
	$sql="insert into t".$i." select seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq from seq_1_to_1000000;";
	mysqli_query($cons,$sql);
}
echo "it's working";
?>

This is the procedure:
DELIMITER $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `delete_tbls`()
NO SQL
SQL SECURITY INVOKER
BEGIN
SET FOREIGN_KEY_CHECKS = 0;
SET GROUP_CONCAT_MAX_LEN=32768;
SET @tables = NULL;
SELECT GROUP_CONCAT('`', table_name, '`') INTO @tables FROM information_schema.tables WHERE table_schema = (SELECT DATABASE()) AND table_name LIKE 't%';
SELECT IFNULL(@tables,'dummy') INTO @tables;
SET @tables = CONCAT('DROP TABLE IF EXISTS ', @tables);
PREPARE stmt FROM @tables;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
SET FOREIGN_KEY_CHECKS = 1;
END$$
DELIMITER ;

Server version: 10.2.9-MariaDB
Apache/2.4.6 (CentOS) PHP/7.1.11
Database client version: libmysql - mysqlnd 5.0.12-dev - 20150407
PHP extension: mysqliDocumentation curlDocumentation mbstringDocumentation

Comment by Alice Sherepa [ 2017-11-22 ]

I could not reproduce the problem with the test case provided. Memory grows up to some limit, but that is to be expected, because the case needs a lot of memory. To move forward, please provide the output of `SHOW GLOBAL VARIABLES` from the test server, together with the exact memory growth during the test as shown by `top -cbn1 | grep mysqld`.

Comment by Silviu Dan Tanasie [ 2017-12-04 ]

Beginning of test:

2035 mysql 20 0 3118012 381096 10484 S 0.0 1.2 0:00.28 /usr/sbin/mysqld
2073 root 20 0 112660 972 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

First run:
2035 mysql 20 0 7251568 4.191g 12040 S 0.0 13.5 0:39.79 /usr/sbin/mysqld
2078 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

One minute wait

Second run:
2035 mysql 20 0 10.353g 7.611g 12136 S 0.0 24.5 1:19.40 /usr/sbin/mysqld
2081 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Two runs with a 5-second wait between them:
2035 mysql 20 0 10.353g 7.568g 12148 S 0.0 24.4 2:35.86 /usr/sbin/mysqld
2088 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

One minute wait
Fifth run:
2035 mysql 20 0 10.353g 7.640g 12148 S 0.0 24.6 3:14.67 /usr/sbin/mysqld
2093 root 20 0 112660 972 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Sixth run (after a 5-second wait):
2035 mysql 20 0 10.353g 7.579g 12148 S 100.0 24.4 3:58.56 /usr/sbin/mysqld
2150 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Another 4 runs with 5-second waits between them (did not check mysqld usage in between):
2035 mysql 20 0 13.978g 0.011t 12148 S 0.0 36.1 7:41.47 /usr/sbin/mysqld
2184 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Another 5 runs with 5-second waits between them (did not check mysqld usage in between):
2035 mysql 20 0 14.228g 0.011t 12148 S 0.0 36.9 10:52.92 /usr/sbin/mysqld
2201 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

One minute wait
2035 mysql 20 0 14.228g 0.011t 12156 S 0.0 36.9 11:31.71 /usr/sbin/mysqld
2205 root 20 0 112660 972 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Another 10 runs with 2-second waits between them (did not check mysqld usage in between):
2035 mysql 20 0 14.041g 0.011t 12156 S 0.0 36.1 18:30.20 /usr/sbin/mysqld
2225 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Rerunning one more time:

2035 mysql 20 0 17.666g 0.014t 12156 S 0.0 47.7 19:09.73 /usr/sbin/mysqld
2231 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Rerunning it 7-8 more times:
2035 mysql 20 0 17.104g 0.014t 12156 S 0.0 46.0 24:45.00 /usr/sbin/mysqld
2316 root 20 0 112660 976 844 S 0.0 0.0 0:00.00 grep --color=auto mysqld

Comment by Silviu Dan Tanasie [ 2017-12-04 ]

This is another test that I did with 600 temporary tables already loaded in memory.

The server is a dual-core HT machine with 32 GB of DDR4 RAM.

We start at 25.3 % memory usage by mysql service:
http://imagizer.imageshack.com/img924/2891/AFzkH4.png

We now run the mentioned script once:
http://imagizer.imageshack.com/img923/6938/wDuWAH.png
As expected, memory usage has increased by the amount required for the script to run. Nothing abnormal so far.

Now let us wait 40 seconds and run the script again:
http://imagizer.imageshack.com/img923/2112/OQ8TUT.png
Nothing abnormal so far. Everything working as expected.

-----------------------

Waited 10 seconds after the previous script run and then ran the script again. Memory is not cleared, and mysqld memory usage grows to 47%.
Waited about 15 seconds after the previous script run and then ran the script again. Memory again is not cleared, and mysqld memory usage now grows to 56.4%.
http://imagizer.imageshack.com/img924/7773/Iuq9XB.png

Now, after waiting 15 seconds, I run the script again. Surprisingly, memory usage seems to remain stable at 57.8%:
http://imagizer.imageshack.com/img923/5191/PtlbTe.png

Now I wait for 1 minute before running the script again. Again the memory seems to be cleared, and usage remains at 57.3%.
I wait another minute and run the script again. Memory usage remains at 57.5% and all seems OK.
Now I try another run, waiting only about 5 seconds after the previous one. Memory usage grows again, to 67.6%.

http://imagizer.imageshack.com/img922/138/sVkixw.png

Took another 10-second break and ran the script again. This time memory usage remains at 65% after the run.
Waited another 5 seconds and ran the script again. Memory usage now grows to 77.8%.

http://imagizer.imageshack.com/img922/2258/l1CyI6.png

Took another 1-minute break and ran the script again. Memory usage now remains at 76.8%.
Waited 10 seconds and ran the script again. Memory usage now grows to 87.6%.
I stopped there and restarted the mysql server to clear system memory.

Comment by Silviu Dan Tanasie [ 2017-12-04 ]

Sorry for all the Global.txt edits, had some server information that needed to be removed. Should be good now.

Comment by Andrii Nikitin (Inactive) [ 2017-12-05 ]

I can trigger the problem with a concurrent load from bash (the script also downloads and unpacks 10.2.11), but I don't see how to express the same load in mtr.

script: https://github.com/AndriiNikitin/bugs/blob/master/MDEV-14050.sh
log for default table cache: https://github.com/AndriiNikitin/bugs/blob/master/MDEV-14050.log
log for huge table cache: https://github.com/AndriiNikitin/bugs/blob/master/MDEV-14050-tablecache4k.log

The logs show two numbers: the combined size of all tables, and the process memory usage. For example, it starts with:
11557.89378929
15128 a 20 0 16.864g

meaning 11G of data+index and 16G of memory usage, which looks good. Then memory usage steadily grows to much more than the size of the tables:
23299.57424736
15128 a 20 0 30.290g
...
21003.40496635
15128 a 20 0 47.415g
...
21114.50510216
15128 a 20 0 49.071g

(You may see many errors like '2013 (HY000): Lost connection to MySQL server at 'handshake: reading initial communication packet', system error: 22' at the start of the bash script; these are caused by MDEV-14131 and are safe to ignore in this test.)

Everything in my tests shows that memory may grow to up to twice the size of the MEMORY tables, which of course is not efficient, but may be explained by memory fragmentation.

A minor note: memory usage with the default table cache seems to cap at ~49G for handling ~24G of table data.
When I set the table cache to 4000, the problem becomes a little more severe, capping at ~53G for an identical test with ~24G of re-created data.

Since:

  • memory management is a complex topic;
  • freed memory doesn't have to be immediately visible in programs like 'top' or 'ps';
  • I definitely see a cap on memory growth in the described test cases;

I don't see any direct bug here.

But of course it would be better if the developers could look at the data and confirm this conclusion.

Comment by Osiris Support [ 2018-06-26 ]

Hello,

I am also encountering this issue with MariaDB 10.2.14, running on Red Hat Enterprise Linux Server release 6.2.
I previously used MariaDB 10.1.21 and this did not happen.

I modified the PHP script of @Starchaser to open a new connection every time.
The memory used by MariaDB increases up to the point where MariaDB restarts.
Under MariaDB 10.1, memory does not increase.

<?php
$user="user";
$pass="pass";
$database="test";
$host="localhost";
$port="3306";
$cons = array();
 
for($i=1;$i<10;$i++) {
 
        $cons[$i] = mysqli_connect($host, $user, $pass, $database, $port);
 
        echo "Table $i...";
        $sql="create table t".$i." (f bigint unsigned, f1 int, f2 int, f3 int, f4 int, f5 int, f6 int, f7 int, f8 int, f9 int, f10 int, f11 int, f12 int, f13 int, f14 int, f15 int, key(f), key(f1), key(f2), key(f3), key(f4), key(f5), key(f6), key(f7), key(f8), key(f9), key(f10), key(f11), key(f12), key(f13), key(f14), key(f15)) engine=Memory;";
        mysqli_query($cons[$i],$sql);
        $sql="insert into t".$i." select seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq, seq from seq_1_to_1000000;";
        mysqli_query($cons[$i],$sql);
 
        $sql="DROP TABLE IF EXISTS t".$i."";
        mysqli_query($cons[$i],$sql);
 
        echo " done\n";
 
        echo "Sleep...";
        sleep(2);
        echo " finished\n";
}

As our application runs several scripts in parallel that create temporary tables, we cannot use MariaDB 10.2; it always ends up crashing.
So I'm not sure whether you consider this a bug or not, but for us it is certainly a serious regression.

Comment by Osiris Support [ 2018-07-23 ]

Hello,
We finally found a solution that prevents this problem: forcing the server to use jemalloc via my.cnf:

[mysqld_safe]
malloc-lib=[path to libjemalloc]

We found this solution because we also had the problem mentioned in bug MDEV-13403
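A small sketch for locating the jemalloc shared object to put into malloc-lib= (the paths below are common defaults and are assumptions; they vary by distribution and jemalloc version):

```shell
# Try the usual library locations first, then fall back to the linker cache.
# Both paths are illustrative defaults, not guaranteed locations.
ls /usr/lib64/libjemalloc.so* /usr/lib/x86_64-linux-gnu/libjemalloc.so* 2>/dev/null \
  || ldconfig -p 2>/dev/null | grep jemalloc \
  || echo "jemalloc not installed"
```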

Comment by Silviu Dan Tanasie [ 2018-07-25 ]

This seems like a good workaround, though the bug is still there.

Comment by Silviu Dan Tanasie [ 2018-11-26 ]

As of MariaDB 10.3.11 this bug is still present. Any fix in sight?

Comment by Hyeokjin Park [ 2018-12-24 ]

We have the same issue on 10.2.18; jemalloc helps.

Comment by Valerii Kravchuk [ 2019-04-23 ]

Using jemalloc or tcmalloc is just a workaround. We have to understand what changed in 10.2+ compared to 10.1 that made this visible.

Comment by Oleksandr Byelkin [ 2019-04-24 ]

valerii, elenst mentioned that the difference is exactly that 10.1 was built with jemalloc and 10.2 without (if I understand the statement correctly).

Comment by Valerii Kravchuk [ 2019-04-24 ]

If building with jemalloc is a solution, then why not continue building with it by default (while keeping the option to use other malloc libraries via LD_PRELOAD etc.)?

In any case, is there anything in the code that can be done so that memory is returned to the OS properly, no matter which malloc implementation is used? For example, calling malloc_trim(0) as suggested in upstream https://bugs.mysql.com/bug.php?id=94647?

Comment by Oleksandr Byelkin [ 2019-04-24 ]

malloc_trim(0) looks good until the next bug report asking why temporary tables have become slow...

Comment by Oleksandr Byelkin [ 2019-04-24 ]

I read about and discussed it:
1) Strict building with jemalloc was removed (one of the causes, maybe not the main one; the main one was the additional dependency, which was a problem for some users) to give users more flexibility in their choice of memory allocator (so the jemalloc solution is not a workaround but a normal solution for a client with a special need).
2) malloc_trim is a costly operation and far from a universal solution (others would complain about it).

So IMHO it is a question of setup for the special needs of the user (preload jemalloc), and there is no universal solution that would not affect most other customers.

Comment by Oleksandr Byelkin [ 2019-04-24 ]

Another option might be to experiment with the MALLOC_TRIM_THRESHOLD_ and MALLOC_TOP_PAD_ environment variables, if jemalloc is not acceptable for some reason.
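A sketch of that env-var tuning; the values below are illustrative, not tuned recommendations:

```shell
# glibc malloc tunables, read from the environment at process start.
# 131072 (128 KiB) is an illustrative value, not a recommendation.
export MALLOC_TRIM_THRESHOLD_=131072   # return freed top-of-heap memory above this size to the OS
export MALLOC_TOP_PAD_=131072          # extra padding kept when the heap is grown/shrunk
env | grep MALLOC_
```

Because glibc reads these variables only at process start, for a systemd-managed server they would have to be set in the service environment before mysqld starts.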

Comment by Silviu Dan Tanasie [ 2019-04-24 ]

Hi,

Unfortunately, jemalloc is not a stable solution, or at least not for us.
However, the server crashes far less often when using it; usually only about once a week.

This is what our server configuration looks like now on CentOS:

# this is only for the mysqld standalone daemon
[mysqld_safe]
malloc-lib=/usr/lib64/libjemalloc.so.1

[mysqld]
bind-address=0.0.0.0

And this is memory usage:
KiB Mem : 32536000 total, 8907228 free, 22669224 used, 959548 buff/cache
KiB Swap: 16450556 total, 16377848 free, 72708 used. 9431960 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18793 mysql 20 0 24.3g 21.2g 14888 S 99.0 68.4 48:05.41 mysqld

The actual size of the in-memory tables is about 7 GB, so mysqld (MariaDB) is using a lot more.

Comment by Oleksandr Byelkin [ 2019-04-24 ]

How you measured size of in-memory tables footprint ("about 7GB")?

Comment by Silviu Dan Tanasie [ 2019-04-24 ]

I used

SELECT table_schema "database_name",
Round(Sum(data_length + index_length) / 1024 / 1024, 1) "DB Size in MB"
FROM information_schema.tables
GROUP BY table_schema;

I know that only one database uses in-memory tables, in fact, it has ONLY in-memory tables. It's called "something_temp" and that database is reported to have 6634.7 MB.

Comment by Oleksandr Byelkin [ 2019-04-25 ]

I checked the docs, and they say the size is approximate, so I checked what is not counted: about 14-15 KB per table with 16 single-segment indexes, i.e. not much (although with thousands of tables it could add up).

But then I thought about allocation: we cannot count memory-management overhead. The only thing I can say is that it is proportional to the number of tables and inversely proportional to the record size. So most probably the big overhead here is due to memory fragmentation plus plain allocation overhead (and the fact that jemalloc helps is evidence of this, since it deals better with fragmentation). Further evidence for this idea is that if you stop using HEAP tables, the memory is eventually returned.

So IMHO what we have here is simply running out of resources for the given load.

Comment by Silviu Dan Tanasie [ 2019-04-25 ]

Fortunately, I can help with a little bit more information.

  • We have around 1500-2000 tables; it varies, but never more than 2000.
  • The table size varies: some have a few rows, some have millions. Of the ~1700 tables we have right now, most are under 20 MB, I think 16-17 are above 100 MB, and the biggest one is 450 MB. If it helps, I can also provide row counts and index counts.

Our workaround for this problem, which has never failed so far, is a job that restarts the MariaDB server once in a while and regenerates all the in-memory tables. If the MariaDB server is restarted, the memory is released.

Comment by Oleksandr Byelkin [ 2019-04-30 ]

But you have not tried more memory or fewer tables. It looks like the issue is connected with memory fragmentation, so with intense use it simply requires much more memory than the data itself occupies.

I have not found any memory leaks, or any large amount of uncounted memory (only structures of limited size). So I think it is just memory fragmentation (which different allocators fight with varying efficiency, but which cannot be removed with the current approach to allocating memory for HEAP tables).

Feel free to reopen the bug if memory does not stop growing after increasing RAM or decreasing the volume of temporary data.

Comment by Silviu Dan Tanasie [ 2019-05-03 ]

Hi,

Is there any way to check what memory management library MariaDB is using?

The server is still crashing, albeit only once a week, with jemalloc.
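One way to check which memory-management library a running server is actually using is to look at its mapped shared objects in /proc. A minimal sketch (the pidof lookup is illustrative and assumes a single mysqld process):

```shell
# List which alternative allocator, if any, the running mysqld has mapped.
# The pidof lookup is illustrative; substitute the actual PID if needed.
pid=$(pidof mysqld)
alloc=$(grep -oE 'jemalloc|tcmalloc' "/proc/$pid/maps" | sort -u)
echo "${alloc:-glibc malloc (no alternative allocator mapped)}"
```

If nothing matches, the server is using the allocator it was linked with, normally glibc malloc.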

Comment by Silviu Dan Tanasie [ 2019-09-19 ]

It's time to finally put this issue to rest.
I have been struggling to get MariaDB to use a different memory allocator. The documentation seems poor and outdated on this matter.
I initially tried the [mysqld_safe] option, but that doesn't do anything, at least under CentOS.

The one solution that works is to edit the service file on CentOS: /usr/lib/systemd/system/mariadb.service
You need to add Environment=LD_PRELOAD=<path to malloc library> in the [Service] section.

With this method, I was able to load my custom malloc library and finally, the memory allocation problem has gone away.

For the future, I suggest that the MariaDB team provide more explicit documentation for using different memory allocation libraries.

Comment by Roope Pääkkönen (Inactive) [ 2019-09-20 ]

Small tip: if you edit that file in /usr/lib directly, it gets replaced when you update packages. Instead, you can run:

systemctl edit mariadb

and then add:

[Service]
Environment=LD_PRELOAD=<path to malloc library>

This will create an "override" for the systemd service definition, which will persist across package upgrades, etc.
(Basically, it saves the file as /etc/systemd/system/mariadb.service.d/override.conf.)

I've also had good experiences using jemalloc with MariaDB so far.

Generated at Thu Feb 08 08:10:31 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.