Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Versions: 10.0 (EOL), 10.1 (EOL)
- Labels: None
- Environment: 3.2.0-4-amd64 #1 SMP Debian 3.2.35-2 x86_64 GNU/Linux
Description
Scenario:
From a bash console (the commands must be executed from bash):
1) mysql -h 127.0.0.1 --port=3306 -u user -ppassword db -e 'DROP TABLE IF EXISTS c1;'
2) mysql -h 127.0.0.1 --port=3306 -u user -ppassword db -e 'CREATE TABLE c1 ENGINE=CONNECT TABLE_TYPE=MYSQL DBNAME='\''information_schema'\'' OPTION_LIST='\''host=127.0.0.1,port=33235,user=user,password=password'\'' `tabname`='\''tables'\'''
Each repetition of this scenario consumes an additional ~500 MB of virtual memory (the effect on physical memory was not measured). This memory is not released until the mysql process is restarted. The remote database name and table can be anything.
Issue Links
- includes: MDEV-13195 Connect Engine: INSERT FROM SELECT (or CREATE TABLE AS SELECT) memory leak (Closed)
Hi, I've looked into the problem a bit, and I think I've found the leak in the connect_assisted_discovery function (ha_connect.cc:5176):
PCONNECT xp= NULL;
PGLOBAL g= GetPlug(thd, xp);
Here PCONNECT is a pointer to a user_connect object, and GetPlug is a wrapper that calls the GetUser function and assigns the result to its second argument. GetUser, in turn, allocates a new user_connect object (or increments the reference counter of an existing one). Since, in the end, the only owner of the newly created user_connect object is the xp variable, the object has to be explicitly freed through that variable (in the manner of ha_connect::~ha_connect) before the function returns, which is never done.
Since deallocation of a user_connect object isn't trivial (mostly because of the reference counters and the integrated linked list), and connect_assisted_discovery has several return points, I think the best way to solve the problem is to add some kind of RAII-style guard class that performs the proper deallocation of the user_connect object in its destructor.
Unfortunately, I can't produce a proper patch right now, but I hope this helps to fix the bug.