[MDEV-4754] Memory leak on MariaDB 5.5.31 Created: 2013-07-04  Updated: 2013-07-18  Resolved: 2013-07-18

Status: Closed
Project: MariaDB Server
Component/s: None
Affects Version/s: 5.5.31
Fix Version/s: 5.5.32

Type: Bug Priority: Major
Reporter: SUJET Assignee: Unassigned
Resolution: Fixed Votes: 2
Labels: None
Environment:

2 Quad Intel Xeon 2.2 GHz - 12 GB memory, OS: CentOS 5.8


Attachments: Text File engine_memory_usage.txt     Text File engine_memory_usage.txt     File memory_usage (3 Months).PNG     Text File my.cnf     Zip Archive prdmutmys002.zip     JPEG File screenshot-1.jpg     Text File show_global_status.txt     Text File show_variables.txt     File swap_usage (3 Months).PNG    
Issue Links:
Blocks
is blocked by MDEV-4763 5.5.32 merge Closed
Relates
relates to MDEV-4703 Memory leak in MariaDB 5.5.31 Closed

 Description   

I migrated many servers from MySQL 5.1 to MariaDB 5.5.31 over the last three months. On some servers, I see a large increase in memory consumption, which requires a stop-start of MariaDB to release the memory. The size of the data changes very little. The attachments illustrate the change on one server since its migration on 05/28/2013. Thanks for your help.



 Comments   
Comment by Elena Stepanova [ 2013-07-04 ]

Hi,

We would need more information on this, especially since, as you say, it only happens on some servers.
What is different about these servers compared to those that are not affected?
Can you upload your schema and data dump? Can you enable the general log on one of the affected servers?
If you can, then the next time you have to restart the server, please:

  • create a data dump;
  • enable general_log;
  • run server until the memory leak becomes obvious;
  • upload the previously created dump and the general log to our FTP server.
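For MariaDB 5.5, the log-related steps above can be sketched as follows (the log file path is an example, not taken from this server's configuration; the dump itself would be created beforehand with mysqldump):

set global general_log_file = '/var/lib/mysql/general.log';  -- example path
set global general_log = 'ON';
-- run the server until the leak becomes obvious, then:
set global general_log = 'OFF';

Note that the general log grows quickly on a busy server, which is why the thread below only captures 15 minutes of it.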

Thanks!

Comment by SUJET [ 2013-07-04 ]

Hi,
I migrated many servers from MySQL 5.1 to MariaDB 5.5.30 & 5.5.31. The problem is only visible on the servers running MariaDB 5.5.31.
You can see in the pictures the increasing memory consumption since 05/28/2013.

Comment by Elena Stepanova [ 2013-07-04 ]

Hi,

I see the pictures and I believe you, but we still need more information to find the cause of the problem. If you can't provide the data dumps and the general logs, could you at least share some details about your schema and workload? Which engines do you use, how big is the data, what kinds of queries are typical, and what is your configuration?
Could you execute
show variables;
show global status;
select engine,sum(data_length)/1024/1024 as DATA_MB,sum(INDEX_LENGTH)/1024/1024 as INDEX_MB from information_schema.tables group by engine;

and paste or attach the output?

Thanks.

Comment by SUJET [ 2013-07-04 ]
  • prdmutmys002.zip is the zip file of the general log (prdmutmys002.log)
  • engine_memory_usage.txt is the result of the query you provided.
Comment by SUJET [ 2013-07-04 ]

Hi,

I can't provide you with a data dump, but you'll find some answers to your questions in the attachments.
Don't be surprised by the uptime in "show_global_status.txt": we restarted the server this morning (too little memory available).
The attached file prdmutmys002.zip contains the general log over 15 minutes. The queries reflect the typical activity of this server.

Thanks

Comment by Patryk Pomykalski [ 2013-07-04 ]

Maybe this one? memory leak in xtradb + query cache:
https://bugs.launchpad.net/percona-server/+bug/1170103

Comment by Elena Stepanova [ 2013-07-05 ]

Thanks, Patryk.

Comment by Elena Stepanova [ 2013-07-05 ]

Hi,

could you please disable the query cache and see if it helps?

Thanks.

Comment by SUJET [ 2013-07-05 ]

Hi,

I just disabled the query_cache (without a restart, simply with 'set global query_cache_size = 0') on one of our servers, and I will observe its behavior during the day.

Thank you both

Comment by Patryk Pomykalski [ 2013-07-05 ]

It's better to set query_cache_type = 0 too.
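Taken together, the workaround applied in this thread amounts to two dynamic settings, both changed without a restart (a sketch for MariaDB 5.5; to make the change survive a restart, the same values would also go into my.cnf):

set global query_cache_size = 0;  -- releases the memory held by the cache
set global query_cache_type = 0;  -- stops sessions from trying to use the cache

Setting only query_cache_size frees the cache memory, while query_cache_type additionally prevents queries from attempting cache lookups at all.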

Comment by SUJET [ 2013-07-05 ]

Hi,

Memory consumption seems to have stabilized in the 7 hours since the query_cache was disabled.
I'll tell you on Monday whether the situation has truly stabilized.

Thanks

Comment by SUJET [ 2013-07-08 ]

The server was migrated to MariaDB 5.5 on 28/05 and restarted on Thursday 04/07.
The query_cache was disabled dynamically on Friday 05/07.

Comment by SUJET [ 2013-07-08 ]

Hi,

I confirm that the leak has stopped on the server after 3 days (screenshot-1.jpg).
I am applying the solution to the other servers with the same problem.

Thank you

Comment by Gabriel Sosa [ 2013-07-17 ]

How is turning off the query cache a solution here? It sounds like removing a totally relevant part of the server's performance machinery...

Comment by Elena Stepanova [ 2013-07-17 ]

It's not a solution, it's a workaround until the next release (5.5.32), where the bug is supposed to be fixed.

Comment by Gabriel Sosa [ 2013-07-18 ]

Looks like you guys just released version 5.5.32 and this fix is not in it. Do you have a definite release date for this bugfix? Also, would it help if we provided you with some more info on this matter?

thanks

Comment by Elena Stepanova [ 2013-07-18 ]

The fix should be in the 5.5.32 release. The bug was fixed in XtraDB 5.5.32 (at least it is marked as such in https://bugs.launchpad.net/percona-server/+bug/1170103), and MariaDB 5.5.32 includes XtraDB 5.5.32.

We didn't have a reproducible test case to verify it, so please re-open the issue (or comment on it to get it re-opened) if you still have the problem with MariaDB 5.5.32.

Comment by Gabriel Sosa [ 2013-07-18 ]

Excellent! Perfect. Thanks for your time.

Generated at Thu Feb 08 06:58:54 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.