[MXS-2444] Maxscale Memory leak Created: 2019-04-22  Updated: 2019-10-28  Resolved: 2019-10-28

Status: Closed
Project: MariaDB MaxScale
Component/s: N/A
Affects Version/s: 2.3.5
Fix Version/s: 2.4.2

Type: Bug Priority: Critical
Reporter: Vladimir Savostyanov Assignee: Unassigned
Resolution: Fixed Votes: 0
Labels: None

Attachments: Zip Archive MaxScale.cnf.zip     JPEG File MaxScaleMemoryLeak.jpg     HTML File lost    

 Description   

I have an issue with memory consumption on my server. No one is sending requests to MaxScale, yet the percentage of available memory is slowly going down.
The MaxScale configuration files are attached to the issue.



 Comments   
Comment by Johan Wikman [ 2019-04-23 ]

vsavostyanov Just to make sure that I have completely understood the issue. So,

  • MaxScale is running on a server,
  • there is no traffic going through MaxScale, but
  • the amount of available memory is slowly going down.

Is that correct?
Is anything else running on the server?

Comment by markus makela [ 2019-04-23 ]

Possibly a memory leak in the monitor?

Comment by Johan Wikman [ 2019-04-29 ]

Good guess, but according to the config file there is no monitor running.

Comment by Timofey Turenko [ 2019-05-07 ]

Found this:

==10620== 1,120,924 bytes in 22,876 blocks are possibly lost in loss record 1,520 of 1,521
==10620== at 0x4C2A1E3: operator new(unsigned long) (vg_replace_malloc.c:334)
==10620== by 0x68D4A18: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.19)
==10620== by 0x68D62A0: char* std::string::_S_construct<char const*>(char const*, char const*, std::allocator<char> const&, std::forward_iterator_tag) (in /usr/lib64/libstdc++.so.6.0.19)
==10620== by 0x68D66D7: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.19)
==10620== by 0x4FE7665: hktask_add (housekeeper.cc:269)
==10620== by 0xC23EF0B: blr_start_master_registration(ROUTER_INSTANCE*, gwbuf*) (blr_master.cc:2845)
==10620== by 0xC23ACDE: blr_master_response(ROUTER_INSTANCE*, gwbuf*) (blr_master.cc:592)
==10620== by 0xC236708: clientReply(mxs_router*, mxs_router_session*, gwbuf*, dcb*) (blr.cc:2337)
==10620== by 0x1149EAD1: gw_read_and_write(dcb*) (mysql_backend.cc:1041)
==10620== by 0x1149D9AE: gw_read_backend_event(dcb*) (mysql_backend.cc:508)
==10620== by 0x4FDC037: dcb_process_poll_events(dcb*, unsigned int) (dcb.cc:3136)
==10620== by 0x4FDC3ED: dcb_handler(dcb*, unsigned int) (dcb.cc:3221)

Comment by Timofey Turenko [ 2019-05-07 ]

The attachment `lost` is the Valgrind log.

Comment by markus makela [ 2019-10-07 ]

Can you test if this is still a problem with the latest version?

Comment by markus makela [ 2019-10-28 ]

vsavostyanov any updates?

Comment by Vladimir Savostyanov [ 2019-10-28 ]

I've updated to version 2.4.2 and the issue no longer occurs.
Thank you

Generated at Thu Feb 08 04:14:10 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.