This is not a bug but a consequence of how the maxrows filter works. The documentation says:
"If a resultset from a backend server has more rows than the configured limit or the resultset size exceeds the configured size, an empty result will be sent to the client."
That is, if the limit is reached, the client will receive nothing.
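For reference, a minimal configuration sketch of the kind of setup being discussed; the parameter names come from the maxrows documentation, but the section name and values here are illustrative, and the exact size syntax depends on the MaxScale version:

    [MaxRows-Filter]
    type=filter
    module=maxrows
    # Send an empty result if more rows than this would be returned
    max_resultset_rows=10000
    # Send an empty result if the resultset exceeds this size
    # (older versions take the value as a plain integer in KiB)
    max_resultset_size=10Mi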
That implies that the filter must buffer all data returned by the server until either the complete resultset has been received or the specified limit has been reached, whichever happens first. Since that buffering has to be done for every client session, the worst-case memory usage scales with the number of sessions: with a limit of 5 GB and 10 concurrent sessions, for instance, MaxScale may have to allocate 50 GB of memory to buffer all that data.
Further, since MaxScale uses the normal memory allocator, even if it internally frees all memory properly, from the outside it will appear as if that memory were still allocated. The memory will be reused, so there is some upper bound, dependent on the number of concurrent sessions, above which the memory usage will not grow.
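This is standard allocator behaviour rather than anything specific to MaxScale; a minimal C sketch (assuming glibc malloc) of the effect:

    #include <stdlib.h>
    #include <malloc.h>  /* glibc-specific, for malloc_trim() */

    int main(void)
    {
        enum { COUNT = 100000, CHUNK = 4096 };
        static void *p[COUNT];

        /* Allocate ~400 MB in small chunks; the process's resident
           memory grows accordingly. */
        for (int i = 0; i < COUNT; i++)
            p[i] = malloc(CHUNK);

        /* Free everything. The allocator keeps most of the pages in
           its heap for reuse, so from the outside (e.g. in top) the
           process still appears to hold the memory. */
        for (int i = 0; i < COUNT; i++)
            free(p[i]);

        /* glibc can be asked to return free heap pages to the OS. */
        malloc_trim(0);

        return 0;
    }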
If the limit is on the order of gigabytes, then the current maxrows filter is not well suited for the task, due to the enormous amount of data it may have to buffer.