MariaDB Server / MDEV-15344

Huge memory usage on Maria 10.2.x cPanel

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.2.13
    • Fix Version/s: 10.6.17, 10.11.7, 11.1.4, 11.2.3
    • Component/s: Server
    • Labels: None
    • Environment: CentOS 7.4 with CloudLinux - cPanel normal setup

    Description

      Hi, after we upgraded our servers to the latest 10.2 version, it started to use all RAM in a very short time on some servers. With older releases on the same servers there were never problems; even servers with 256GB of RAM have problems now, and we never expected this. For example, we moved a server from 64 to 128GB of RAM and now RAM usage is over 100GB, where on 64GB it was about 50% with all the same sites and the same setup, only a different release of MariaDB.

      The only similar issue I found is this: MDEV-13403

      Any help or advice on what to do would be appreciated, because we need to restart MySQL on these servers every few days because of this.

      The my.cnf file is attached.
      cPanel says it is related to MariaDB and that they cannot help.

      Again, this is a normal cPanel setup on CentOS 7.4; all memory problems started after the upgrade to 10.2, and we never had this kind of problem with any older version of MariaDB.

      Any help would be great.
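
      (For reference, a rough upper bound on the memory the server is configured to use can be computed from the global settings; a minimal sketch against a running server, using only standard server variables. This is just the classic "global buffers + per-connection buffers x max_connections" estimate, and it does not cover the internal allocations a leak like this involves.)

      # Hypothetical sketch: upper-bound estimate of configured mysqld memory, in GB.
      mysql -e "
      SELECT ROUND((
          @@innodb_buffer_pool_size
        + @@innodb_log_buffer_size
        + @@key_buffer_size
        + @@query_cache_size
        + @@max_connections * ( @@sort_buffer_size
                              + @@read_buffer_size
                              + @@read_rnd_buffer_size
                              + @@join_buffer_size
                              + @@binlog_cache_size
                              + @@thread_stack )
      ) / 1024 / 1024 / 1024, 1) AS configured_max_gb;"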

      Attachments

        1. my.cnf (1 kB)
        2. mariadb10-2-memory.png (64 kB)
        3. screenshot-1.png (115 kB)
        4. before_after_oom_kill_maps_lsof_status.zip (289 kB)
        5. mysqld_oom_killer_messages.txt (26 kB)


          Activity

            Yoh Evgenij added a comment -

            Hey.

            Has anyone found a solution for themselves other than using jemalloc?

            Could you tell me if you are using QEMU-based virtual machines? Thank you in advance.

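            (For reference, the jemalloc workaround mentioned above is usually applied by preloading the library for the mysqld service. A minimal sketch, assuming a systemd-managed install with a service named mariadb and jemalloc installed at /usr/lib64/libjemalloc.so.1; names and paths vary by distribution.)

            # Hypothetical sketch: preload jemalloc via a systemd drop-in (run as root).
            mkdir -p /etc/systemd/system/mariadb.service.d
            printf '[Service]\nEnvironment="LD_PRELOAD=/usr/lib64/libjemalloc.so.1"\n' \
              > /etc/systemd/system/mariadb.service.d/jemalloc.conf
            systemctl daemon-reload && systemctl restart mariadb
            # Verify the allocator was actually picked up:
            grep -c jemalloc /proc/$(pidof mysqld)/maps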

            serg Sergei Golubchik added a comment -

            Everybody here only complained about 10.2. Could it be that 10.3 is not affected?

            ykantoni YURII KANTONISTOV added a comment - edited

            > Could it be that 10.3 is not affected?

            Our DB server was upgraded 10.2.21 => 10.4.17; to me the memory consumption pattern looks very much the same.
            There were no OOM kills recently, but for the last few months the load on this server was significantly less than before.

            Server version: 10.4.17-MariaDB MariaDB Server
            jemalloc 3.6.0-0-g46c0af68bd248b04df75e4f92d5fb804c3d75340

            free -gh
                          total        used        free      shared  buff/cache   available
            Mem:            23G         22G        229M         19M        1.0G        879M
            Swap:           23G        2.4G         21G

            One customer in particular struggles with this issue, on MariaDB 10.2.26.
            It seems that the OOM event occurs on file=>memory map calls; could it be the way InnoDB reads the big tables from disk...

            They collected a set of the mysqld process map, status, and list of open files a few hours before the OOM kill and soon after the autorestart; see the attached before_after_oom_kill_maps_lsof_status.zip
            if that is of any help.

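            (For anyone who wants to gather the same kind of evidence before an OOM kill, a minimal sketch of the collection - process map, status and open files of mysqld - which can be run periodically, e.g. from cron; the pidof lookup and the output directory are assumptions.)

            # Hypothetical sketch: snapshot mysqld memory map, status and open files.
            PID=$(pidof mysqld) || exit 1
            TS=$(date +%Y%m%d-%H%M%S)
            OUT=/var/tmp/mysqld-diag
            mkdir -p "$OUT"
            cp "/proc/$PID/maps"   "$OUT/maps.$TS"
            cp "/proc/$PID/status" "$OUT/status.$TS"
            lsof -p "$PID" > "$OUT/lsof.$TS" 2>/dev/null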

            ykantoni YURII KANTONISTOV added a comment -

            Oops, forgot to attach a system log with the OOM kill event.
            Looking at the stack, most probably it is a trivial swap file mapping, not a data file. Anyway - mysqld_oom_killer_messages.txt
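
            (For reference, the OOM-killer messages can also be pulled straight from the kernel log; a minimal sketch - which tool is available depends on the distribution.)

            # Hypothetical sketch: list OOM-killer events involving mysqld.
            dmesg -T | grep -iE 'out of memory|oom-killer|killed process' | grep -i mysqld
            # or, on systemd hosts:
            journalctl -k --no-pager | grep -iE 'oom-killer|killed process' | grep -i mysqld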

            serg Sergei Golubchik added a comment -

            Most likely, it's a duplicate of MDEV-33279, fixed in Jan 2024.

            People

              Assignee: Unassigned
              Reporter: Neso
              Votes: 11
              Watchers: 21

