Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Not a Bug
    • Affects Version/s: 2.4.11
    • Fix Version/s: N/A
    • Component/s: maxrows
    • Labels: None
    • Environment: Amazon EC2
    • Sprint: MXS-SPRINT-117, MXS-SPRINT-118

    Description

      Customer reported that MaxScale memory usage keeps growing.

      # maxscale node 1
      [root@ip-21-105-20-139 ~]# free
                   total       used       free     shared    buffers     cached
      Mem:      32467780   31823824     643956         76      71268    3887416
      -/+ buffers/cache:   27865140    4602640
      Swap:     10485756          0   10485756
      


        Activity

          Johan Wikman added a comment -

          This is not a bug but a consequence of how the maxrows filter works. The documentation says:

          If a resultset from a backend server has more rows than the configured limit or the resultset size exceeds the configured size, an empty result will be sent to the client.

          That is, if the limit is reached, the client will receive nothing.

          That implies that the filter must buffer all data returned by the server until the complete resultset has been received or until the specified limit has been reached, whichever happens first. Since that buffering has to be done for every client session, the worst-case memory use is the configured limit multiplied by the number of concurrent sessions: with a limit of 5 GB and 10 concurrent sessions, for instance, MaxScale may have to allocate 50 GB of memory to buffer all that data.
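
          For illustration, a maxrows filter section along these lines is where such a limit would be configured. This is a minimal sketch: the section name is hypothetical, and the exact unit and default of max_resultset_size vary between MaxScale versions, so the values should be checked against the maxrows documentation for the release in use.

          # Hypothetical maxscale.cnf excerpt; names and values are illustrative.
          [LimitResults]
          type=filter
          module=maxrows
          # Return an empty result once a resultset exceeds this many rows ...
          max_resultset_rows=1000000
          # ... or this much data (unit assumed to be bytes here; verify
          # against the documentation). Worst case, every client session
          # can buffer up to this amount at the same time, so total memory
          # can approach max_resultset_size * concurrent sessions
          # (e.g. 5 GiB * 10 sessions = 50 GiB).
          max_resultset_size=5368709120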

          Further, since MaxScale uses the normal memory allocator, even if it internally frees all memory properly, from the outside it will appear as if that memory were still allocated. That memory will be reused, so there is an upper bound, dependent on the number of concurrent sessions, above which the memory usage will not grow.
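
          To see this allocator effect in isolation, the following standalone C sketch (Linux-specific, assuming glibc malloc; not MaxScale code) allocates a gigabyte in small chunks and then frees half of them. The freed chunks are interleaved with live ones, so the allocator cannot return the pages to the kernel:

          /* rss_demo.c - standalone illustration, not MaxScale code.
           * Shows that memory freed back to the allocator may still be
           * counted in the process's resident set size (RSS). */
          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          /* Print the VmRSS line from /proc/self/status (Linux only). */
          static void print_rss(const char *label)
          {
              char line[256];
              FILE *f = fopen("/proc/self/status", "r");
              if (f == NULL)
                  return;
              while (fgets(line, sizeof(line), f) != NULL)
              {
                  if (strncmp(line, "VmRSS:", 6) == 0)
                      printf("%-14s %s", label, line);
              }
              fclose(f);
          }

          int main(void)
          {
              /* 16384 chunks of 64 KiB = 1 GiB in total. 64 KiB stays
               * below glibc's default mmap threshold, so the chunks come
               * from the heap rather than from private mmaps. */
              enum { N = 16384, CHUNK = 64 * 1024 };
              static char *blocks[N];

              print_rss("before malloc:");

              for (int i = 0; i < N; i++)
              {
                  blocks[i] = malloc(CHUNK);
                  if (blocks[i] == NULL)
                  {
                      perror("malloc");
                      return 1;
                  }
                  memset(blocks[i], 'x', CHUNK); /* touch pages so they count */
              }
              print_rss("after malloc:");

              /* Free every other chunk: half a gigabyte is returned to
               * the allocator and will be reused by later allocations,
               * but the freed chunks are interleaved with live ones, so
               * the pages cannot be handed back to the kernel. */
              for (int i = 0; i < N; i += 2)
                  free(blocks[i]);
              print_rss("after free:");

              return 0;
          }

          Compiled and run on Linux, the "after free" VmRSS typically remains close to the "after malloc" value, even though half a gigabyte is free for reuse inside the process. That is the same from-the-outside picture that free paints of MaxScale above.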

          If the limit is on the order of gigabytes, then the current maxrows is not well suited for the task, due to the enormous amount of data it potentially has to buffer.


          People

            Assignee: Johan Wikman
            Reporter: Allen Lee (Inactive)
            Votes: 0
            Watchers: 3

            Dates

              Created:
              Updated:
              Resolved:
