[CONJ-973] Large amount of memory use for caching ClientPreparedStatement Created: 2022-05-23 Updated: 2022-05-24 Resolved: 2022-05-23 |
|
| Status: | Closed |
| Project: | MariaDB Connector/J |
| Component/s: | performance |
| Affects Version/s: | 3.0.4 |
| Fix Version/s: | 3.0.5 |
| Type: | Bug | Priority: | Major |
| Reporter: | Jean Biancat | Assignee: | Diego Dupin |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | ClientPreparedStatement, Memory_leak, cache | ||
| Environment: |
The server is on a LXC Debian 11, with JDK 11 and Tomcat 9 |
||
| Description |
|
Since we updated the connector from 2.7.4 to 3.0.4, we have been experiencing what looks like a memory leak, and after some analysis we found that the cache variable in ClientParser is the main suspect. There is now a cache for ClientPreparedStatement inside ClientParser; that variable is a LinkedHashMap and appears never to be cleared, and there is no known way to interact with this cache. As a result, every prepared statement is retained from the moment the server starts. |
| Comments |
| Comment by Diego Dupin [ 2022-05-23 ] |
|
The problem will be solved with the https://jira.mariadb.org/browse/CONJ-972 correction: this cache is now removed (the cache wasn't thread-safe, and a thread-safe cache would be slower than not caching at all). |
| Comment by Diego Dupin [ 2022-05-24 ] |
|
Remark: it's not a memory leak, because the cache was bounded (512 queries, each cached only when its length is < 16K, so a maximum of about 8 MB). Still, for applications that use large commands and a big pool, that might represent a large amount of memory. |
| Comment by Jean Biancat [ 2022-05-24 ] |
|
OK. I only saw that the cache was a LinkedHashMap with an initial capacity of 512 entries, and I never saw where it was limited to 512 or where entries were removed once that amount was exceeded. In our case there were more than 512 entries in the cache, which is why I thought it was a memory leak. |
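|
This observation matches LinkedHashMap's documented behavior: the initial-capacity constructor argument only pre-sizes the hash table and does not cap the number of entries, so a map created with `new LinkedHashMap<>(512)` grows without bound. Bounding it requires overriding removeEldestEntry. A minimal sketch illustrating the difference (class and field names are hypothetical, not the connector's actual code): |
|

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CacheDemo {
    // Initial capacity of 512 only pre-sizes the hash table;
    // this map grows without limit as entries are added.
    static Map<Integer, String> unbounded = new LinkedHashMap<>(512);

    // Overriding removeEldestEntry turns a LinkedHashMap into a bounded
    // LRU cache: the eldest entry is evicted once size exceeds 512.
    static Map<Integer, String> bounded = new LinkedHashMap<>(512, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
            return size() > 512;
        }
    };

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            unbounded.put(i, "query-" + i);
            bounded.put(i, "query-" + i);
        }
        System.out.println(unbounded.size()); // 1000
        System.out.println(bounded.size());   // 512
    }
}
```

|
Note that neither variant is thread-safe, which is consistent with Diego's point: guarding such a cache with synchronization (or wrapping it via Collections.synchronizedMap) adds contention on every query parse, which is why removing the cache can be faster than keeping a thread-safe one. |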