MariaDB Server / MDEV-34577

Queries with on-disk tmp-tables cause significant additional memory use in Docker


    Description

      Summary

      On many MariaDB Galera clusters deployed in Kubernetes, after migrating from 10.4 to 10.6, we observed an abrupt and consistent change in the working-set memory (WS) pattern. The resident set size (RSS), which used to track the WS closely, remains stable; only the WS is affected by the leak. The behaviour reproduces consistently: after the usual warm-up phase, a slow leak starts, with the WS gradually diverging from the RSS.

      Working-set memory is what Kubernetes uses to trigger out-of-memory pod restarts, which is why this leak is potentially impactful for us.

      We have also investigated on the Kubernetes side (and are open to suggestions, of course), but so far we could not identify why this started after the upgrade from 10.4.31 to >=10.6.17. The situation has reproduced on every cluster upgraded so far, although on some larger clusters (>100 GB buffer pool size) the leak is fortunately not very apparent.

      The leak also occurs on one 10.11 cluster. That cluster was never upgraded; it was created directly on 10.11.

      Our main expectation is to gain insight into any low-level changes introduced between the latest 10.4 and 10.6 that would be likely to trigger this behaviour.

      We found that it seems to be related to temporary tables, but we could not identify any specific new usage or major changes between the versions.

      It would be helpful to know whether there were significant changes in how temporary tables are managed; for instance, whether the pattern of fsync calls changed compared to 10.4.

      I'm attaching a screenshot of our memory monitoring right after the upgrade.

      Technical investigation

      Stable system monitoring variables

      By monitoring /sys/fs/cgroup/memory/memory.stat (cgroup v1), here is what we see (a sampling sketch follows the list):

      • RSS remains stable. In continuous traces it grows while the buffer pool warms up, after which it stays stable as expected; we do not suspect any leak there;
      • anon allocations do not show any correlation either;
      • mapped_files are strictly stable, with no variation from day to day;
      • the cache takes longer to stabilize, but its increase does not seem to match the working-set memory;
      • lsof output is stable over time; we do not see any increase in the number of lines returned;
      • the performance_schema memory tables are stable over time; we do not see any increase in current memory used.
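
      A minimal sketch of the kind of sampling loop behind these observations (cgroup v1 paths are assumed; the field names differ under cgroup v2):

        # Sample the memory.stat fields of interest once a minute (cgroup v1).
        while :; do
          date -u
          grep -E '^(rss|cache|mapped_file|active_file|inactive_file) ' /sys/fs/cgroup/memory/memory.stat
          sleep 60
        done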

      Increasing system variable: active_file

      The only significant change we noticed was a steep and constant increase of active_file.

      Starting from a warm MariaDB instance with an uptime of 346868 seconds (4 days), active_file grows quickly over the following days:

      DATE: Mon Apr  8 16:32:38 UTC 2024
      | Uptime        | 346868 |
      active_file 864256
       
      DATE: Tue Apr  9 10:00:53 UTC 2024
      | Uptime        | 409763 |
      active_file 2609152
       
      DATE: Thu Apr 11 12:45:30 UTC 2024
      | Uptime        | 592440 |
      active_file 36868096
      

      active_file counts toward the working-set memory calculation (https://github.com/kubernetes/kubernetes/issues/43916).
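
      For context, the working-set value reported by the kubelet/cAdvisor is essentially the cgroup memory usage minus the inactive file cache, so pages accounted as active_file are not subtracted and feed directly into it. A rough way to recompute the figure by hand on cgroup v1 (paths assumed, clamping to zero omitted):

        # Approximate the Kubernetes working-set figure from cgroup v1 counters.
        usage=$(cat /sys/fs/cgroup/memory/memory.usage_in_bytes)
        inactive=$(awk '/^total_inactive_file /{print $2}' /sys/fs/cgroup/memory/memory.stat)
        echo "working_set_bytes=$((usage - inactive))"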

      MariaDB 10.4 vs 10.6 comparison

      When we compared running 10.4 and 10.6 clusters, here's what we found:

      • In both images, innodb_flush_method = O_direct is used (the default in the mariadb Docker images). The fsync method would have explained a different memory usage.
      • innodb_flush_log_at_trx_commit = 2 both before and after the upgrade; we did not try setting it to 1, to avoid impact.
      • Both use jemalloc as the malloc library (note: tcmalloc was tested with 10.6 and does not solve the leak).
      • The galera.cache configuration has not been changed (and the mmap files are stable); we do not see usage of additional gcache pages.
      • There is no usage of explicit temporary tables and no DDL.
      • innodb_adaptive_hash_index was tried both disabled and enabled; it did not seem to improve the issue (it is disabled by default in 10.6, so we tried to match the 10.4 tuning).
      • Both the 10.4 and 10.6 workloads have a high buffer pool miss rate: Buffer pool hit rate 936 / 1000, young-making rate 36 / 1000 not 126 / 1000.

      Differences in raw parameters

      Variable                  /tmp/mariadb_104          /tmp/mariadb_106
      ========================= ========================= =========================
      back_log                  70                        80
      bulk_insert_buffer_size   16777216                  8388608
      concurrent_insert         ALWAYS                    AUTO
      connect_timeout           5                         10
      innodb_adaptive_hash_i... ON                        OFF
      innodb_change_buffering   all                       none
      innodb_checksum_algorithm crc32                     full_crc32
      innodb_lru_scan_depth     1024                      1536
      innodb_max_dirty_pages... 75.000000                 90.000000
      innodb_purge_batch_size   300                       1000
      max_recursive_iterations  4294967295                1000
      max_relay_log_size        104857600                 1073741824
      pseudo_thread_id          45                        29
      slave_parallel_mode       conservative              optimistic
      sort_buffer_size          4194304                   2097152
      table_open_cache          400                       2000
      thread_cache_size         100                       151
      wait_timeout              600                       28800
      

      Some of those variables had new default values in 10.6, but they were already tuned explicitly in the custom my.cnf.

      Both 10.4 and 10.6 are running in the same Kubernetes cluster.
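
      The column headers in the table above suggest the comparison was made by dumping SHOW GLOBAL VARIABLES from each version into a file and diffing the two; one way to reproduce such a diff (host names are hypothetical):

        # Dump the global variables of each server and show only the differing lines.
        mysql -h mariadb-104 -N -e "SHOW GLOBAL VARIABLES" > /tmp/mariadb_104
        mysql -h mariadb-106 -N -e "SHOW GLOBAL VARIABLES" > /tmp/mariadb_106
        diff --side-by-side --suppress-common-lines /tmp/mariadb_104 /tmp/mariadb_106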

      Temporary tables

      So far, we have only found that reducing the amount of implicit temporary-table usage reduces the "leak". This reduction does not remove the leak, but it slows it down.
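
      One way to quantify the implicit temporary-table rate over time (these are standard status counters; the per-second rate is the delta between two consecutive samples divided by the interval):

        # Sample the temporary-table counters once a minute.
        while :; do
          date -u
          mysql -N -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables'"
          sleep 60
        done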

      Things we did not try

      • comparing pmap over time;
      • jemalloc profiling (as RSS is stable);
      • any strace, perf, or eBPF-based tool: without a clear plan on what to track, we skipped these as they can be costly;
      • entirely removing the temp tables used in a test cluster.

      TL;DR: workaround

      To work around this issue quickly, it is enough to add the --temp-pool=1 flag to the mariadbd (or mysqld) command line.
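
      With the official Docker image, the flag can be appended to the container command, which the entrypoint passes through to mariadbd (a sketch; the image tag, container name and other options are illustrative):

        # Pass the workaround flag through the container command (illustrative values).
        docker run -d --name mariadb \
          -e MARIADB_ROOT_PASSWORD=secret \
          mariadb:10.6.18 \
          --temp-pool=1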


      Environment (archived, no longer applicable):

      Kubernetes cluster, managed by GCP (GKE cluster)
      Kubernetes version: 1.28.9-gke.1289000.
      Dedicated node pool with cgroup v1 (switching to cgroup v2 does not resolve the issue), virtual machine type n2d-highmem-32.
      Docker images: from MariaDB, e.g. mariadb:10.6.18 (Docker Hub).
      Other: uses Galera replication. No Kubernetes operators.
      

          Activity

            serg Sergei Golubchik added a comment -

            what do you mean by "it seems to be tied to temporary tables"? tied how? how did you find it?
            Pinimo PNM added a comment - - edited

            Hi @Sergei Golubchik, thank you for looking at the ticket. I apologize for the delay in my answer.

            It's a good question, as it is really our only serious lead so far. Basically, we updated queries on the application side to reduce the number of temporary tables per second (e.g. by replacing UNION with UNION ALL where applicable); an illustration of the rewrite follows this comment. This reduced the rate of temp tables from around 200/sec to 2/sec. See the attached screenshots temporary-tables-optimization.png and effect-on-memleak-of-temp-table-reduction.png, which show that the leak is much less intensive after the drop in the temp-table rate.

            PS. I also updated the bug description to provide more technical data that we gathered before opening this issue.
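
            For illustration, the kind of rewrite involved: UNION de-duplicates its result through an implicit temporary table, while UNION ALL can return the rows without one. The database, table and column names below are hypothetical.

              # Before: UNION creates an implicit temporary table to de-duplicate the result.
              mysql app_db -e "SELECT id, name FROM orders_2023 UNION SELECT id, name FROM orders_2024"
              # After: UNION ALL skips de-duplication, so no temporary table is needed
              # (only valid when duplicate rows are acceptable or cannot occur).
              mysql app_db -e "SELECT id, name FROM orders_2023 UNION ALL SELECT id, name FROM orders_2024"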



            serg Sergei Golubchik added a comment -

            I've checked the changes to temporary tables between 10.4 and 10.6; nothing there seems able to cause this. But I also don't really understand what the server could possibly do to make active_file increase constantly.

            There were optimizer changes between 10.4 and 10.6, of course. They could have caused more queries to prefer an execution plan with temporary tables. Did you see an increase in the rate of temporary table creation? Are UNION queries your main source of temporary tables?

            Pinimo PNM added a comment - - edited

            Thank you Sergei for your answer. Interesting: an overview of the changelogs led us to the same conclusion, and it's nice to hear the same from you! It will take me a few days to answer with full details; more soon!

            Pinimo PNM added a comment - - edited

            Hello, here are a few elements to answer some of your questions.

            Have you seen an increase in the rate of temporary table creation?

            No, there was no increase in the rate of temp tables after the upgrade. Attached is a screenshot of the corresponding graph (same graph as linked before, but with a wider time window): the vertical cursor on Jan 9th at noon is the time of upgrade. The big drop of temp tables comes from query changes done afterwards (see hereafter).

            Are UNION queries your main source of temporary tables?

            Yes, on that cluster, nearly all temp tables came from a UNION query that was changed to a UNION ALL on Jan 11 at 5:35 P.M.. The rate of temp-tables then dropped to around 2/second and the memory leak slowed down, but not entirely as shown by the effect-on-memleak-of-temp-table-reduction.png screenshot above.

In any case, we will activate the slow log with a filter on on-disk temporary tables, to gain more insight into which queries generate those temp tables.
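
Concretely, something along these lines (a minimal sketch; slow_query_log, long_query_time and log_slow_filter are standard MariaDB variables, but the exact values here are only illustrative):

mysql -e "SET GLOBAL slow_query_log = ON"
mysql -e "SET GLOBAL long_query_time = 0"                      # do not filter on duration
mysql -e "SET GLOBAL log_slow_filter = 'tmp_table_on_disk'"    # only log queries that created an on-disk tmp table
mysql -e "SHOW GLOBAL VARIABLES LIKE 'log_slow%'"              # sanity check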

One last thing: we might want to do some kind of git bisect to find the commit that introduced this regression. This will be somewhat difficult, as the execution and validation steps of the bisection are complex (deploying a cluster in a clean environment, then inspecting the monitoring after a while). Do you know if someone has done this before?


serg Sergei Golubchik added a comment -

No, I haven't heard of anyone having this issue related to temporary tables. Maybe nobody has analyzed it that thoroughly.

A few users complained about a "memory leak", but that was always RSS growing due to memory fragmentation, typically solved by 1) disabling transparent huge pages (which the server now does automatically) or 2) switching to jemalloc or tcmalloc.

I'd be happy to do the bisection myself (I do that often, and I have scripts to make it work better), but I'd need a repeatable test case: sample data and a typical query that, when repeated in a loop, causes the working set to grow.

            frivoire Florent R added a comment - - edited

            Hello Sergei,
            I'm Florent, a colleague of PNM, also working on this memory issue.

            Here is a way to reproduce the issue:

            while :; do mysql -e "SELECT table_schema, IFNULL(SUM(data_length+index_length)/1024/1024,0) AS total_mb FROM information_schema.tables GROUP BY table_schema"; done
            

=> dozens of tmp-tables (including some on disk) per query
            => increase of working-set (around +14MB/min during my test)

            And a "control" test (just to be sure it's really specific to the query, and not my test env):

            while :; do mysql -e "SELECT 1"; done
            

            => working-set stays stable

            Could it the "repeatable test case" that you're looking for ?

            NB: the mariadb test instance is 10.6.18 and it has 7 (non-system) databases + 14 tables (total), so nothing crazy.
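
For reference, the per-query tmp-table figure above can be cross-checked with the standard status counters (assuming an otherwise idle server, so the delta is attributable to the single query):

mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables'"
mysql -e "SELECT table_schema, IFNULL(SUM(data_length+index_length)/1024/1024,0) AS total_mb FROM information_schema.tables GROUP BY table_schema" > /dev/null
mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables'"
# the difference between the two snapshots ~= tmp tables (and disk tmp tables) created by the one query above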

            Pinimo PNM added a comment - - edited

Hello, we're making progress. Thanks to Florent's test case I was able to reproduce this in Docker, using the following instructions, and to identify the first affected version: it's 10.5.7. I will rename the ticket accordingly.

            # Shell 1
            MARIADB_VERSION=10.5.7
            docker run --name mariadb-$MARIADB_VERSION \
              --rm --memory 200M \
              --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=true \
              --env MYSQL_ALLOW_EMPTY_PASSWORD=true \
              mariadb:$MARIADB_VERSION
             
            # Shell 2
            MARIADB_VERSION=10.5.7
            CONTAINER_ID="mariadb-${MARIADB_VERSION}"
            docker exec -it $CONTAINER_ID bash -c "while :; do mysql -e 'SELECT table_schema, IFNULL(SUM(data_length+index_length)/1024/1024,0) AS total_mb FROM information_schema.tables GROUP BY table_schema'; echo $MARIADB_VERSION; done"
            # Ctrl-C to exit previous command and stop the container
            docker stop mariadb-$MARIADB_VERSION
             
            # Shell 3
            docker stats
            

            Here's the verdict of the bisect:

            • 10.5.4: no memleak
            • 10.5.6:
            • 10.5.7: very fast leak, ~3MB/s
            • 10.5.22: same, "fast" leak
• 10.5.23: different leak pattern, took ~1 minute to start leaking, then about 1MB/s with small ups-and-downs
• 10.5.26: probably the same, slow memleak, 0.5MB/s measured over 1 minute (started leaking at once)
• 11.6.1: still has a leak but it seems slower; starts immediately, ~0.2MB/s, with ups-and-downs too.
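
For reference, this is roughly how the working-set figure can be watched from inside the container (cgroup v1 paths, matching our environment; "usage minus inactive_file" is our understanding of what cadvisor / docker stats report, so treat the formula as an approximation):

CG=/sys/fs/cgroup/memory
while :; do
  usage=$(cat $CG/memory.usage_in_bytes)
  inactive=$(awk '$1 == "total_inactive_file" {print $2}' $CG/memory.stat)
  active=$(awk '$1 == "total_active_file" {print $2}' $CG/memory.stat)
  echo "usage=$usage active_file=$active workingset~=$((usage - inactive))"   # bytes
  sleep 10
done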
            Pinimo PNM made changes -
            Summary Kubernetes: working-set memory leak after 10.6 upgrade Kubernetes: working-set memory leak starting with release 10.5.7
            Pinimo PNM added a comment -

I forgot to mention something important: no OutOfMemory shutdowns are triggered, although the memory usage nears the "very red zone" (198MB/200MB in this case). There seems to be some kind of memory release when we get too close to the container's limit.

            Roel Roel Van de Paar made changes -
            Assignee Roel Van de Paar [ roel ]
            Roel Roel Van de Paar made changes -
            Status Needs Feedback [ 10501 ] Open [ 1 ]
            Roel Roel Van de Paar made changes -
            Affects Version/s 10.5.7 [ 25019 ]
            Roel Roel Van de Paar made changes -
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Fix Version/s 10.11 [ 27614 ]
            Fix Version/s 11.2 [ 28603 ]
            Fix Version/s 11.4 [ 29301 ]
            Fix Version/s 11.6 [ 29515 ]
            Roel Roel Van de Paar made changes -
            Fix Version/s 11.2 [ 28603 ]
            Fix Version/s 11.4 [ 29301 ]
            Fix Version/s 11.6 [ 29515 ]
            Roel Roel Van de Paar made changes -
            Environment Kubernetes cluster, managed by GCP (GKE cluster)
            Kubernetes version: 1.28.9-gke.1289000.
            Dedicated nodepool with cgroup v1 (switching to cgroup v2 does not resolve), virtual machine type n2d-highmem-32.
            Docker images: from MariaDB, e.g. mariadb:10.6.18 (Docker Hub).
            Other: uses Galera replication. No Kubernetes operators.
            Roel Roel Van de Paar made changes -

Roel Roel Van de Paar added a comment -

frivoire Pinimo Hi! Nice to meet you. Thank you for the detailed report! I am looking at bisecting this down to the exact code commit that caused this.

            With the 10.5.7 release build (available here, revision 90f43d260e407c650aa8a7885d674c717618cc37) the issue does not reproduce thus far, nor does it reproduce with a manual build thereof. I am using

            while true; do ps -eo pid,pmem,rss,vsz,comm | grep ${PID}; sleep 1; done
            

to monitor RSS usage.

            Pinimo Can you please confirm that your docker instance is otherwise unused, has no data, and is not using Kubernetes?

            Thank you!

            Roel Roel Van de Paar added a comment - - edited

            Notes on building older 10.5 releases:

            git reset --hard
            git clean -xfd 
            git checkout --force --recurse-submodules 90f43d260e407c650aa8a7885d674c717618cc37
            sed -i 's|^INCLUDE(cmake/ConnectorName.cmake)|#INCLUDE(cmake/ConnectorName.cmake)|' libmariadb/CMakeLists.txt
            sed -i 's|^#include <libaio.h>|#include <libaio.h>\n#include <cstdio>|' tpool/aio_linux.cc
            git commit -a -m "dummy"
cmake . -DWITH_SSL=bundled -DBUILD_CONFIG=mysql_release -DWITH_UNIT_TESTS=0 -DDEBUG_EXTNAME=OFF -DWITH_EMBEDDED_SERVER=0 -DENABLED_LOCAL_INFILE=1 -DENABLE_DTRACE=0 -DWITH_DBUG_TRACE=OFF -DWITH_ZLIB=bundled -DWITH_MARIABACKUP=0 -DFORCE_INSOURCE_BUILD=1 -DWARNING_AS_ERROR='' -DWITH_PCRE=system -DWITH_MROONGA=0 -DWITH_ROCKSDB=0 -DWITH_TOKUDB=0 -DWITHOUT_MROONGA=1 -DWITHOUT_ROCKSDB=1 -DWITHOUT_TOKUDB=1 -DWITH_JEMALLOC=yes
            make -j80
            

            Patching for old ncurses (for downloaded builds):

            sudo ln -s /lib/x86_64-linux-gnu/libncursesw.so.6.4 /lib/x86_64-linux-gnu/libncurses.so.5  # Or similar
            

            Roel Roel Van de Paar added a comment - - edited

            Also tested 10.5.8 release build @ 7da6353b1558adce73320c803f0413c9bbd81185.
For both versions there is a small increase in RSS each time the queries start, but it remains stable thereafter:

            $ PID=2577901; while true; do ps -eo pid,pmem,rss,vsz,comm | grep ${PID}; sleep 1; done
            2577901  0.0 86960 1308088 mariadbd
            2577901  0.0 86960 1308088 mariadbd
            2577901  0.0 86960 1308088 mariadbd
            2577901  0.0 86960 1308088 mariadbd
            2577901  0.0 89520 1308088 mariadbd    # Queries started
            2577901  0.0 89520 1308088 mariadbd
            2577901  0.0 89520 1308088 mariadbd
            

            frivoire Florent R added a comment -

            Hello Roel

            > For both versions there is a small increase in rss each time the queries start, but it remains stable thereafter:

            I understand here that you don't reproduce the issue when you rebuild from source.
Have you tried to reproduce with our procedure (based on Docker)?

Otherwise, do you think we can provide more info (and if so, which)?


Roel Roel Van de Paar added a comment -

Hi frivoire! Thank you for the comment. I do not believe using docker will make a difference, especially as it looks like only a single SQL statement is being executed:

            SELECT table_schema, IFNULL(SUM(data_length+index_length)/1024/1024,0) AS total_mb FROM information_schema.tables GROUP BY table_schema
            

On this: I asked above whether Pinimo (or you) could please confirm that your Docker instance is otherwise unused, has no data, and is not using Kubernetes.

The reason I ask is that in one case "the mariadb test instance is 10.6.18 and it has 7 (non-system) databases + 14 tables (total), so nothing crazy." was mentioned, yet in Pinimo's Docker setup no underlying data is mentioned, only the SELECT query.

            frivoire Florent R added a comment -

            Hello,

            > The reason I ask the question is that in one case "[...] 7 (non-system) databases + 14 tables [...]" was mentioned

Those databases/tables were the state of my K8S test only, not the Docker test.
And I think we can completely forget the K8S test: it uses a more complex setup (K8S, some custom configs, etc.) than the second, Docker-based test given by PNM, without providing any additional information.

            Sorry for the misunderstanding here.

            > yet in PNM Docker's setup no underlaying data is mentioned, only the SELECT query.

Indeed, in this Docker-based test, no additional action is required (no db/table/data to create or insert before running the test).
NB: this is also guaranteed by the lack of a `--volume` parameter in the `docker run` command given => the whole container (including `/var/lib/mysql`) starts from just the Docker image content every time.

So you should be able to reproduce with literally just the commands given by PNM (on a Linux machine with Docker installed, of course).


Roel Roel Van de Paar added a comment -

frivoire Thank you for the feedback, that clarifies.

            I was able to reproduce the issue with Docker and could furthermore simplify the single query needed to reproduce this to:

            SELECT index_length FROM information_schema.tables;
            

Using data_length instead of index_length works equally well, though it is a little slower to reproduce the issue.
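
A possible further control (not yet tested): name-only columns can typically be answered without opening each table, whereas data_length/index_length force per-table engine statistics to be gathered, so comparing the two might show whether the growth tracks the statistics path:

# untested control idea: name-only I_S query, which should avoid the per-table statistics reads
while :; do mysql -e "SELECT table_name FROM information_schema.tables"; done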

            Roel Roel Van de Paar made changes -
            Summary Kubernetes: working-set memory leak starting with release 10.5.7 SELECT [index_length/data_length] FROM information_schema.tables causes significant memory loss
            Roel Roel Van de Paar made changes -
            Component/s Galera [ 10124 ]
            Roel Roel Van de Paar made changes -
            Status Open [ 1 ] Confirmed [ 10101 ]
            Roel Roel Van de Paar added a comment - - edited

            Confirmed using only SELECT index_length FROM information_schema.tables on an otherwise empty server, using Docker + MariaDB -> versions:

            10.5.5  : -  (somewhat hovering, remains steady under ~85MiB)
            10.5.6  : -  (somewhat hovering, remains steady under ~85MiB)
            10.5.7  : Y, rises medium-to-fast up to the max 200MiB
            10.5.8  : Y, rises fast up to the max 200MiB
            10.16.19: Y, rises slow-to-medium to max 200MiB
            

For 10.5.24, it will rise from about 75MiB to 130MiB, then drop back to 75MiB at some point, then rise slowly towards the max 200MiB. The Docker test pegs a single CPU. I am wondering if some network buffer or similar could be filling up.

            Roel Roel Van de Paar added a comment - - edited

            The issue can be magnified/amplified by doing:

            while true; do timeout 1s docker exec -it $CONTAINER_ID bash -c "while :; do mysql -e 'SELECT index_length FROM information_schema.tables'; echo $MARIADB_VERSION; done"; done
            

            i.e. restarting docker exec. The 200MiB is reached in 10-20 seconds this way (and the CPU usage will reach ~500% rather than around 100%)

            Roel Roel Van de Paar added a comment - - edited

            Docker container versions:

            10.5.6: 5b8ab1934a10966336e66751bc13fc66923b02f6: last known good
            10.5.7: 90f43d260e407c650aa8a7885d674c717618cc37: first known bad
            

            git log --oneline 5b8ab1934a10966336e66751bc13fc66923b02f6..90f43d260e407c650aa8a7885d674c717618cc37 | wc -l
            668
            

            One complexity is that the issue only reproduces in Docker thus far.
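
Should it come to bisecting those 668 commits, the mechanical part would look roughly like this (a sketch only; test.sh is a hypothetical script that builds the commit, runs it in the container with the query loop, and exits 0 if memory stays stable, non-zero otherwise):

git bisect start
git bisect bad  90f43d260e407c650aa8a7885d674c717618cc37   # 10.5.7, first known bad
git bisect good 5b8ab1934a10966336e66751bc13fc66923b02f6   # 10.5.6, last known good
git bisect run ./test.sh                                   # hypothetical build-and-check script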

            Roel Roel Van de Paar added a comment - - edited

            Two available avenues (unless anyone sees any other possibilities):
1. Exclude Docker by reproducing the issue locally (i.e. outside of Docker). I strongly suspect this will be possible, though it is also possible the issue is Docker-related or Docker-caused.
2. Compile and run various commits inside Docker to see if the issue reproduces, assuming it reproduces for fresh builds to start with (i.e. a simple difference in CMake options between the official builds and a test build could make the issue non-reproducible).

For the moment, following path #1, I copied the failing Docker 10.5.7 version out of Docker to run it locally (including its own readline lib) to see if the issue reproduces outside of Docker. It does not.

Comparing various other things which could explain the difference (besides Docker itself), I observed this:

            Local 90f43d260e407c650aa8a7885d674c717618cc37 (Optimized)

            10.5.7>STATUS;
            /test/test_opt2/bin/mariadb  Ver 15.1 Distrib 10.5.7-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
             
            Connection id:		50463
            Current database:	test
            Current user:		root@localhost
            SSL:			Not in use
            Current pager:		stdout
            Using outfile:		''
            Using delimiter:	;
            Server:			MariaDB
            Server version:		10.5.7-MariaDB-1:10.5.7+maria~focal mariadb.org binary distribution
            Protocol version:	10
            Connection:		Localhost via UNIX socket
            Server characterset:	latin1
            Db     characterset:	latin1
            Client characterset:	utf8
            Conn.  characterset:	utf8
            UNIX socket:		/test/test_opt2/socket.sock
            

            Docker 90f43d260e407c650aa8a7885d674c717618cc37 (Optimized)

            MariaDB [(none)]> STATUS;
            mysql  Ver 15.1 Distrib 10.5.7-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2
             
            Connection id:		3
            Current database:	
            Current user:		root@localhost
            SSL:			Not in use
            Current pager:		stdout
            Using outfile:		''
            Using delimiter:	;
            Server:			MariaDB
            Server version:		10.5.7-MariaDB-1:10.5.7+maria~focal mariadb.org binary distribution
            Protocol version:	10
            Connection:		Localhost via UNIX socket
            Server characterset:	utf8mb4
            Db     characterset:	utf8mb4
            Client characterset:	latin1
            Conn.  characterset:	latin1
            UNIX socket:		/run/mysqld/mysqld.sock
            

            Note the charset diffs. I will try to match this next to see if it is related.

            Roel Roel Van de Paar added a comment - - edited

            There actually does seem to be a very slow memory increase for the copied local version 10.5.7 also:

            $ while true; do ps -eo pid,pmem,rss,vsz,comm | grep ${PID}; sleep 1; done
            ...
            4184072  0.0 88816 1313812 mariadbd  # Before queries start
            ...
            4184072  0.0 91888 1313812 mariadbd  # Just after queries start
            ...
            4184072  0.0 92400 1314112 mariadbd  # After ~5-10 minutes
            ...
            4184072  0.0 92656 1314412 mariadbd  # After ~15-20 minutes
            ...
            4184072  0.0 101104 1314712 mariadbd  # After ~30-35 minutes, and various increased reproduction attempts
            

            Roel Roel Van de Paar added a comment - - edited

Further debugging has brought some things to light:
1. The issue is triggered sporadically in Docker. For example, with the original testcase, the memory may stay stable for some time even on 10.5.7 (using --memory 3000M to avoid small-memory-allocated-to-Docker issues), then suddenly start - and continue - to grow quickly.
2. I can reliably reproduce a somewhat significant memory increase when using one of the reproducer queries with multi-threading, outside of Docker; however, the memory use stagnates after some time - it does not keep growing as in Docker, or grows only slowly. Importantly, this can be repeated on MariaDB 10.5.5, MariaDB 10.3.39, MySQL 5.5 and MySQL 9.1 also.
3. Running a Linux ps memory monitor inside a Docker bash client shows the MariaDB server memory as stable/non-increasing. This observation is not sporadic. For example, using the original testcase/query, here is one case showing both 1) the general sporadicity of the issue (i.e. the delay before onset), and 2) the significant difference between the memory monitors:

            docker image mariadb-10.5.7 10.5.7-MariaDB-1:10.5.7+maria~focal 90f43d260e407c650aa8a7885d674c717618cc37 (Optimized)

            root@4ae1606086cc:/# while true; do ps -eo pid,pmem,thcount,rss,vsz,comm | grep mysqld; sleep 1; done
                  1  0.0    13 77728 1287588 mysqld
                  1  0.0    13 77728 1287588 mysqld
                  1  0.0    13 77728 1287588 mysqld
            ... Queries (single thread) are started, memory use slightly increases  ...
                  1  0.0    14 81568 1353424 mysqld
                  1  0.0    14 81568 1353424 mysqld
                  1  0.0    14 81568 1353424 mysqld
            ... Memory use of docker stats remains stable at this point - hovering around 65-70MiB ...
            ... Considerably later (>5 minutes) memory use in docker stats starts growing ...
            ... Memory climbs to >2 GiB: in docker stats ...
            CONTAINER ID   NAME             CPU %     MEM USAGE / LIMIT    MEM %     NET I/O      BLOCK I/O     PIDS
            4ae1606086cc   mariadb-10.5.7   98.96%    2.147GiB / 2.93GiB   73.30%    1.3kB / 0B   0B / 26.8MB   14
            ... However in the continued ps output we see: ...
                  1  0.0    10 81568 1353424 mysqld
                  1  0.0    10 81568 1353424 mysqld
                  1  0.0    10 81568 1353424 mysqld
            ... All the time the queries visually keep running in the 3rd shell session ...
            

            In other words, the number of kernel threads has dropped from 14 to 10 and the memory has remained perfectly stable according to ps.
4. When Docker starts running out of memory, the MariaDB process further reduces its memory footprint according to ps:

            docker image mariadb-10.5.7 10.5.7-MariaDB-1:10.5.7+maria~focal 90f43d260e407c650aa8a7885d674c717618cc37 (Optimized)

            ... Docker stats indicate Docker available memory (--memory 3000M) maxed out at 3GiB ...
            CONTAINER ID   NAME             CPU %     MEM USAGE / LIMIT    MEM %     NET I/O      BLOCK I/O         PIDS
            4ae1606086cc   mariadb-10.5.7   100.34%   2.928GiB / 2.93GiB   99.94%    1.3kB / 0B   3.05MB / 90.9MB   14
... ps output showing reduced memory usage (and an extra kernel thread - a cleanup thread perhaps?) ...
                  1  0.0    10 81568 1353424 mysqld
                  1  0.0    11 81568 1353424 mysqld
                  1  0.0    11 80800 1353424 mysqld
                  1  0.0    11 79776 1353424 mysqld
                  1  0.0    11 79520 1353424 mysqld
                  1  0.0    11 77984 1353424 mysqld
                  1  0.0    11 76704 1353424 mysqld
                  1  0.0    11 74400 1353424 mysqld
                  1  0.0    11 72608 1353424 mysqld
                  1  0.0    11 71072 1353424 mysqld
                  1  0.0    11 68512 1353424 mysqld
                  1  0.0    11 66464 1353424 mysqld
                  1  0.0    11 63648 1353424 mysqld
                  1  0.0    11 61600 1353424 mysqld
                  1  0.0    11 59552 1353424 mysqld
                  1  0.0    11 56992 1353424 mysqld
                  1  0.0    11 55456 1353424 mysqld
                  1  0.0    11 53152 1353424 mysqld
                  1  0.0    11 50848 1353424 mysqld
                  1  0.0    11 49568 1353424 mysqld
                  1  0.0    11 45728 1353424 mysqld
                  1  0.0    11 43680 1353424 mysqld
                  1  0.0    11 41632 1353424 mysqld
                  1  0.0    11 37792 1353424 mysqld
                  1  0.0    11 36512 1353424 mysqld
                  1  0.0    11 34464 1353424 mysqld
                  1  0.0    11 32672 1353424 mysqld
                  1  0.0    11 30624 1353424 mysqld
                  1  0.0    11 29088 1353424 mysqld
                  1  0.0    11 28064 1353424 mysqld
                  1  0.0    11 27808 1353424 mysqld
                  1  0.0    11 27808 1353424 mysqld
                  1  0.0    11 27808 1353424 mysqld
... Some time later the extra kernel thread disappears ...
                  1  0.0    10 26784 1353424 mysqld
                  1  0.0    10 26784 1353424 mysqld
            ... During this the memory in docker stats quickly decreases and ends on a very low number of 9MiB ...
            CONTAINER ID   NAME             CPU %     MEM USAGE / LIMIT    MEM %     NET I/O      BLOCK I/O      PIDS
            4ae1606086cc   mariadb-10.5.7   91.10%    9.383MiB / 2.93GiB   0.31%     1.3kB / 0B   5MB / 92.9MB   15
            

            5. After this, the cycle repeats: docker stats starts growing again.


            sudo bash -c "echo 3 > /proc/sys/vm/drop_caches"
            

Running this on the host drops the docker stats memory down to about 15MiB quickly (0-2 min), even when the docker stats reported memory use is as high as 3GiB. In other words, the reported Docker memory usage can be reduced back to 15MiB total by running a cache-drop command on the host.
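
A minimal way to confirm that the growth is page cache rather than process memory is to compare the container's own cgroup accounting before and after the cache drop (cgroup v1 layout assumed, $CONTAINER_ID as in the earlier repro):

docker exec $CONTAINER_ID grep -E '^(rss|cache|active_file|inactive_file) ' /sys/fs/cgroup/memory/memory.stat
sudo bash -c "echo 3 > /proc/sys/vm/drop_caches"
docker exec $CONTAINER_ID grep -E '^(rss|cache|active_file|inactive_file) ' /sys/fs/cgroup/memory/memory.stat
# expectation: rss stays put, while cache/active_file collapse together with the docker stats figure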

            Furthermore, when docker stats reports 3GiB memory usage:

            CONTAINER ID   NAME             CPU %     MEM USAGE / LIMIT    MEM %     NET I/O       BLOCK I/O       PIDS
            4ae1606086cc   mariadb-10.5.7   98.60%    2.927GiB / 2.93GiB   99.90%    1.51kB / 0B   178MB / 119MB   15
            

We can see that inside the container (using top) the mysqld process is only using 1345228 KB (about 1.3 GB) of VIRT memory, and only about 25MB of resident/actual memory (which also matches the ps memory tracking output) while the query is running:

            top - 03:03:02 up  3:18,  0 users,  load average: 203.41, 203.79, 203.76
            Tasks:   7 total,   1 running,   6 sleeping,   0 stopped,   0 zombie
            %Cpu(s): 26.1 us, 54.0 sy,  0.0 ni, 19.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
            MiB Mem : 128900.9 total,  90599.1 free,  23059.1 used,  15242.7 buff/cache
            MiB Swap:  32000.0 total,  31932.5 free,     67.5 used. 104825.5 avail Mem 
             
                PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                            
                  1 mysql     20   0 1345228  25248  22272 S  42.9   0.0  57:19.44 mysqld                                             
                212 root      20   0    4116   2560   2560 S   2.3   0.0   3:25.13 bash                                               
                184 root      20   0    4248   3072   2816 S   0.0   0.0   0:02.65 bash                                               
             243139 root      20   0    4560   1792   1792 S   0.0   0.0   0:00.01 su                                                 
             243145 root      20   0    4248   3328   2816 S   0.0   0.0   0:01.41 bash                                               
             686830 root      20   0    6096   2816   2560 R   0.0   0.0   0:00.00 top                                                
             688074 root      20   0   14900   7680   6912 S   0.0   0.0   0:00.00 mysql  
            

Given the findings in the last comment and this one, I believe the observed Docker memory increase as seen in docker stats is not attributable to MariaDB, but is due to Docker caching. One oddity is why certain MariaDB versions trigger the Docker memory use increase more readily or more significantly. Another oddity is why it is sporadic, though that may explain why certain versions do not show the issue [immediately].

            Still, the observed MariaDB memory increase (which semi-stagnates after some time) under the given I_S queries warrants further investigation, as it does not look entirely normal. This will be the continued focus of this issue.


            > We found it seems to be related to temporary tables
            It is interesting to note that one of the states of the memory-increasing query is Removing tmp table:

            | 25475 | root                 | localhost | NULL | Query   |    0 | Filling schema table | SELECT index_length FROM information_schema.tables |    0.000 |
            | 25473 | root                 | localhost | NULL | Query   |    0 | Removing tmp table   | SELECT index_length FROM information_schema.tables |    0.000 |
            | 25474 | root                 | localhost | NULL | Query   |    0 | Opening tables       | SELECT index_length FROM information_schema.tables |    0.000 |
            | 25341 | root                 | localhost | NULL | Query   |    0 | closing tables       | SELECT index_length FROM information_schema.tables |    0.000 |
            

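            If it helps, here is a small sketch to sample how often those tmp-table related states occur while the load runs (the socket path and client binary are assumptions, matching the reproduction steps in the next comment):

             # Sketch: sample the processlist once a second and count sessions in
             # tmp-table related states (socket path / client binary are assumptions).
             while true; do
               bin/mariadb -uroot -S${PWD}/socket.sock -BNe "
                 SELECT state, COUNT(*) FROM information_schema.processlist
                 WHERE state IN ('Removing tmp table','Filling schema table','Opening tables','closing tables')
                 GROUP BY state"
               sleep 1
             done
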
            Roel Van de Paar made changes -
            Summary: SELECT [index_length/data_length] FROM information_schema.tables causes significant memory loss → SELECT [index_length/data_length] FROM information_schema.tables causes significant memory use
            Roel Van de Paar added a comment - edited

            To reproduce the quick and substantial memory use increase when using the given I_S query, simply start an opt/release server (any version), obtain the PID, and start a memory monitor:

            PID=your_pid
            echo 'PID  %MEM  THREADS  RSS  VSZ  Command'
            while true; do ps -eo pid,pmem,thcount,rss,vsz,comm | grep ${PID} | grep -vE 'bash|mariadb$|mysql$'; sleep 1; done
            

            Then start the load generator (change the socket location if required):

            QUERY="SELECT index_length FROM information_schema.tables"
            if [ -x bin/mariadb ]; then BIN='bin/mariadb'; else BIN='bin/mysql'; fi
            rm -f ./sml
            printf '%s\n' {1..200} | xargs -P200 -I{} bash -c "while true; do if [ -r ./sml ]; then break; fi; ${BIN} -uroot -e '${QUERY}' -S${PWD}/socket.sock --silent --skip-column-names --unbuffered >/dev/null 2>&1; done &"
            

            If you want to stop the client threads later on, you can run touch ./sml.

            Note this is a static/non-growing load. However, the memory, while mostly stagnating, does slowly increase. This happens for example on MariaDB 10.5.5 also (first thought to be unaffected, likely due to the Docker caching oddities described above).

            Here is how this looks for 11.2 opt:

            CS 11.2.6 12a91b57e27b979819924cf89614e6e51f24b37b (Optimized)

            PID  %MEM  THREADS  RSS  VSZ  Command
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.0    13 98752 1298980 mariadbd
            1527523  0.1    57 184512 4196064 mariadbd  # 200 client threads with ongoing queries commence
            1527523  0.4   213 556480 14466312 mariadbd
            1527523  0.4   213 562880 14466312 mariadbd
            1527523  0.4   213 564928 14466312 mariadbd
            1527523  0.4   213 565696 14466312 mariadbd
            1527523  0.4   213 566208 14466312 mariadbd
            1527523  0.4   213 566976 14466312 mariadbd
            1527523  0.4   213 568000 14466312 mariadbd
            1527523  0.4   213 571072 14466312 mariadbd
            1527523  0.4   213 571328 14466312 mariadbd
            1527523  0.4   213 571840 14466312 mariadbd
            1527523  0.4   213 573376 14466312 mariadbd
            1527523  0.4   213 574144 14466312 mariadbd
            1527523  0.4   213 574912 14466312 mariadbd
            1527523  0.4   213 575424 14466312 mariadbd
            1527523  0.4   213 576192 14466312 mariadbd
            1527523  0.4   213 576704 14466312 mariadbd
            1527523  0.4   213 577472 14466312 mariadbd
            1527523  0.4   213 578496 14466312 mariadbd
            1527523  0.4   213 578496 14466312 mariadbd
            1527523  0.4   213 578752 14466312 mariadbd
            1527523  0.4   213 579008 14466312 mariadbd
            1527523  0.4   213 579264 14466312 mariadbd
            1527523  0.4   213 579776 14466312 mariadbd
            1527523  0.4   213 580800 14466312 mariadbd
            1527523  0.4   213 580800 14466312 mariadbd
            1527523  0.4   213 581056 14466312 mariadbd
            1527523  0.4   214 583104 14532148 mariadbd
            1527523  0.4   214 583616 14532148 mariadbd
            1527523  0.4   214 584384 14532148 mariadbd
            1527523  0.4   214 584640 14532148 mariadbd
            1527523  0.4   214 584896 14532148 mariadbd
            1527523  0.4   214 584896 14532148 mariadbd
            1527523  0.4   214 585152 14532148 mariadbd
            1527523  0.4   214 585920 14532148 mariadbd
            1527523  0.4   214 586176 14532148 mariadbd
            1527523  0.4   214 586176 14532148 mariadbd
            1527523  0.4   214 586432 14532148 mariadbd
            1527523  0.4   214 586432 14532148 mariadbd
            1527523  0.4   214 586432 14532148 mariadbd
            1527523  0.4   214 586688 14532148 mariadbd
            1527523  0.4   214 586944 14532148 mariadbd
            1527523  0.4   214 587456 14532148 mariadbd
            1527523  0.4   214 587712 14532148 mariadbd
            1527523  0.4   214 587968 14532148 mariadbd
            1527523  0.4   214 588736 14532148 mariadbd
            1527523  0.4   214 588736 14532148 mariadbd
            1527523  0.4   214 588992 14532148 mariadbd
            1527523  0.4   214 589248 14532148 mariadbd
            1527523  0.4   214 589248 14532148 mariadbd
            1527523  0.4   210 590016 14532148 mariadbd
            1527523  0.4   210 590272 14532148 mariadbd
            1527523  0.4   210 590272 14532148 mariadbd
            1527523  0.4   210 590528 14532148 mariadbd
            1527523  0.4   210 590784 14532148 mariadbd
            1527523  0.4   210 591552 14532148 mariadbd
            1527523  0.4   210 591552 14532148 mariadbd
            1527523  0.4   210 591552 14532148 mariadbd
            1527523  0.4   210 591808 14532148 mariadbd
            1527523  0.4   210 592064 14532148 mariadbd
            1527523  0.4   210 592320 14532148 mariadbd
            1527523  0.4   210 592320 14532148 mariadbd
            1527523  0.4   210 592576 14532148 mariadbd
            1527523  0.4   210 592832 14532148 mariadbd
            1527523  0.4   210 592832 14532148 mariadbd
            1527523  0.4   210 592832 14532148 mariadbd
            1527523  0.4   210 593088 14532148 mariadbd
            1527523  0.4   210 593344 14532148 mariadbd
            1527523  0.4   210 593344 14532148 mariadbd
            1527523  0.4   210 593344 14532148 mariadbd
            1527523  0.4   210 593344 14532148 mariadbd
            1527523  0.4   210 593344 14532148 mariadbd
            1527523  0.4   210 593600 14532148 mariadbd
            1527523  0.4   210 593856 14532148 mariadbd
            1527523  0.4   210 593856 14532148 mariadbd
            1527523  0.4   210 594112 14532148 mariadbd
            1527523  0.4   210 594368 14532148 mariadbd
            1527523  0.4   210 594368 14532148 mariadbd
            1527523  0.4   210 594368 14532148 mariadbd
            1527523  0.4   210 594368 14532148 mariadbd
            1527523  0.4   210 594368 14532148 mariadbd
            1527523  0.4   210 594624 14532148 mariadbd
            1527523  0.4   210 594880 14532148 mariadbd
            1527523  0.4   210 594880 14532148 mariadbd
            1527523  0.4   210 594880 14532148 mariadbd
            1527523  0.4   210 594880 14532148 mariadbd
            1527523  0.4   210 595136 14532148 mariadbd
            1527523  0.4   210 595136 14532148 mariadbd
            1527523  0.4   210 595392 14532148 mariadbd
            1527523  0.4   210 595392 14532148 mariadbd
            1527523  0.4   210 595648 14532148 mariadbd
            1527523  0.4   210 595648 14532148 mariadbd
            1527523  0.4   210 596416 14532148 mariadbd
            1527523  0.4   210 596672 14532148 mariadbd
            1527523  0.4   210 596928 14532148 mariadbd
            1527523  0.4   210 596928 14532148 mariadbd
            1527523  0.4   210 597184 14532148 mariadbd
            1527523  0.4   210 597440 14532148 mariadbd
            1527523  0.4   210 597696 14532148 mariadbd
            1527523  0.4   210 597952 14532148 mariadbd
            1527523  0.4   210 597952 14532148 mariadbd
            1527523  0.4   210 597952 14532148 mariadbd
            1527523  0.4   210 598208 14532148 mariadbd
            1527523  0.4   210 598208 14532148 mariadbd
            1527523  0.4   210 598208 14532148 mariadbd
            1527523  0.4   210 598464 14532148 mariadbd
            1527523  0.4   210 598720 14532148 mariadbd
            1527523  0.4   210 598720 14532148 mariadbd
            1527523  0.4   210 598720 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 598976 14532148 mariadbd
            1527523  0.4   210 599232 14532148 mariadbd
            1527523  0.4   210 599488 14532148 mariadbd
            1527523  0.4   210 599488 14532148 mariadbd
            1527523  0.4   210 599488 14532148 mariadbd
            1527523  0.4   210 599488 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 599744 14532148 mariadbd
            1527523  0.4   210 600000 14532148 mariadbd
            1527523  0.4   210 600000 14532148 mariadbd
            1527523  0.4   210 600256 14532148 mariadbd
            1527523  0.4   210 600256 14532148 mariadbd
            1527523  0.4   210 600256 14532148 mariadbd
            1527523  0.4   210 600256 14532148 mariadbd
            1527523  0.4   210 600768 14532148 mariadbd
            1527523  0.4   210 601024 14532148 mariadbd
            1527523  0.4   210 601024 14532148 mariadbd
            1527523  0.4   210 601280 14532148 mariadbd
            1527523  0.4   210 601280 14532148 mariadbd
            1527523  0.4   210 601280 14532148 mariadbd
            1527523  0.4   210 601280 14532148 mariadbd
            1527523  0.4   210 601280 14532148 mariadbd
            1527523  0.4   210 601536 14532148 mariadbd
            1527523  0.4   210 601536 14532148 mariadbd
            1527523  0.4   210 601536 14532148 mariadbd
            1527523  0.4   210 601536 14532148 mariadbd
            1527523  0.4   210 601536 14532148 mariadbd
            1527523  0.4   210 601536 14532148 mariadbd
            1527523  0.4   210 601792 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602048 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602304 14532148 mariadbd
            1527523  0.4   210 602560 14532148 mariadbd
            1527523  0.4   210 602560 14532148 mariadbd
            1527523  0.4   210 602560 14532148 mariadbd
            1527523  0.4   210 602560 14532148 mariadbd
            1527523  0.4   210 602560 14532148 mariadbd
            1527523  0.4   210 602816 14532148 mariadbd
            1527523  0.4   210 602816 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603072 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603328 14532148 mariadbd
            1527523  0.4   210 603584 14532148 mariadbd
            1527523  0.4   210 603584 14532148 mariadbd
            1527523  0.4   210 603584 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 603840 14532148 mariadbd
            1527523  0.4   210 604096 14532148 mariadbd
            1527523  0.4   210 604096 14532148 mariadbd
            1527523  0.4   210 604352 14532148 mariadbd
            1527523  0.4   210 604352 14532148 mariadbd
            1527523  0.4   210 604352 14532148 mariadbd
            1527523  0.4   210 604608 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 604864 14532148 mariadbd
            1527523  0.4   210 605120 14532148 mariadbd
            1527523  0.4   210 605376 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 605632 14532148 mariadbd
            1527523  0.4   210 611008 14532148 mariadbd
            1527523  0.4   210 613056 14532148 mariadbd
            1527523  0.4   210 613312 14532148 mariadbd
            1527523  0.4   210 613568 14532148 mariadbd
            1527523  0.4   210 613824 14532148 mariadbd
            1527523  0.4   210 613824 14532148 mariadbd
            1527523  0.4   210 614080 14532148 mariadbd
            1527523  0.4   210 614080 14532148 mariadbd
            1527523  0.4   210 614080 14532148 mariadbd
            1527523  0.4   210 614336 14532148 mariadbd
            1527523  0.4   210 614336 14532148 mariadbd
            1527523  0.4   210 614336 14532148 mariadbd
            1527523  0.4   210 614336 14532148 mariadbd
            1527523  0.4   210 614592 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 614848 14532148 mariadbd
            1527523  0.4   210 615104 14532148 mariadbd
            1527523  0.4   210 615104 14532148 mariadbd
            1527523  0.4   210 615104 14532148 mariadbd
            1527523  0.4   210 615104 14532148 mariadbd
            1527523  0.4   210 615360 14532148 mariadbd
            1527523  0.4   210 615616 14532148 mariadbd
            1527523  0.4   210 615872 14532148 mariadbd
            1527523  0.4   210 615872 14532148 mariadbd
            1527523  0.4   210 615872 14532148 mariadbd
            1527523  0.4   210 615872 14532148 mariadbd
            1527523  0.4   210 615872 14532148 mariadbd
            1527523  0.4   210 616128 14532148 mariadbd
            1527523  0.4   210 616128 14532148 mariadbd
            1527523  0.4   210 616128 14532148 mariadbd
            1527523  0.4   210 616384 14532148 mariadbd
            1527523  0.4   210 616384 14532148 mariadbd
            1527523  0.4   210 616384 14532148 mariadbd
            1527523  0.4   210 616640 14532148 mariadbd
            1527523  0.4   210 616640 14532148 mariadbd
            1527523  0.4   210 616640 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 616896 14532148 mariadbd
            1527523  0.4   210 617408 14532148 mariadbd
            1527523  0.4   210 617664 14532148 mariadbd
            1527523  0.4   210 617664 14532148 mariadbd
            1527523  0.4   210 617920 14532148 mariadbd
            1527523  0.4   210 617920 14532148 mariadbd
            1527523  0.4   210 617920 14532148 mariadbd
            1527523  0.4   210 617920 14532148 mariadbd
            1527523  0.4   210 617920 14532148 mariadbd
            1527523  0.4   210 618432 14532148 mariadbd
            1527523  0.4   210 618432 14532148 mariadbd
            1527523  0.4   210 618432 14532148 mariadbd
            1527523  0.4   210 618432 14532148 mariadbd
            1527523  0.4   210 618432 14532148 mariadbd
            1527523  0.4   210 618432 14532148 mariadbd
            1527523  0.4   210 618688 14532148 mariadbd
            1527523  0.4   210 618688 14532148 mariadbd
            1527523  0.4   210 618688 14532148 mariadbd
            1527523  0.4   210 618688 14532148 mariadbd
            1527523  0.4   210 618944 14532148 mariadbd
            1527523  0.4   210 618944 14532148 mariadbd
            1527523  0.4   210 618944 14532148 mariadbd
            1527523  0.4   210 618944 14532148 mariadbd
            1527523  0.4   210 618944 14532148 mariadbd
            1527523  0.4   210 619200 14532148 mariadbd
            1527523  0.4   210 619456 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619712 14532148 mariadbd
            1527523  0.4   210 619968 14532148 mariadbd
            1527523  0.4   210 619968 14532148 mariadbd
            1527523  0.4   210 619968 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620224 14532148 mariadbd
            1527523  0.4   210 620480 14532148 mariadbd
            1527523  0.4   210 620480 14532148 mariadbd
            1527523  0.4   210 620480 14532148 mariadbd
            1527523  0.4   210 620480 14532148 mariadbd
            1527523  0.4   210 620736 14532148 mariadbd
            1527523  0.4   210 620736 14532148 mariadbd
            1527523  0.4   210 620992 14532148 mariadbd
            1527523  0.4   210 621504 14532148 mariadbd
            1527523  0.4   210 621504 14532148 mariadbd
            1527523  0.4   210 621504 14532148 mariadbd
            1527523  0.4   210 621504 14532148 mariadbd
            1527523  0.4   210 621760 14532148 mariadbd
            1527523  0.4   210 621760 14532148 mariadbd
            1527523  0.4   210 621760 14532148 mariadbd
            1527523  0.4   210 621760 14532148 mariadbd
            1527523  0.4   210 622016 14532148 mariadbd
            1527523  0.4   210 622016 14532148 mariadbd
            1527523  0.4   210 622016 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622272 14532148 mariadbd
            1527523  0.4   210 622528 14532148 mariadbd
            1527523  0.4   210 622528 14532148 mariadbd
            1527523  0.4   210 622528 14532148 mariadbd
            1527523  0.4   210 622528 14532148 mariadbd
            1527523  0.4   210 622528 14532148 mariadbd
            1527523  0.4   210 622528 14532148 mariadbd
            1527523  0.4   210 622784 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623296 14532148 mariadbd
            1527523  0.4   210 623808 14532148 mariadbd
            1527523  0.4   210 623808 14532148 mariadbd
            1527523  0.4   210 623808 14532148 mariadbd
            1527523  0.4   210 623808 14532148 mariadbd
            1527523  0.4   210 623808 14532148 mariadbd
            1527523  0.4   210 623808 14532148 mariadbd
            1527523  0.4   210 624064 14532148 mariadbd
            1527523  0.4   210 624064 14532148 mariadbd
            1527523  0.4   211 626368 14532448 mariadbd
            1527523  0.4   211 626368 14532448 mariadbd
            1527523  0.4   211 626368 14532448 mariadbd
            1527523  0.4   211 626368 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626624 14532448 mariadbd
            1527523  0.4   211 626880 14532448 mariadbd
            1527523  0.4   211 626880 14532448 mariadbd
            1527523  0.4   211 626880 14532448 mariadbd
            1527523  0.4   211 626880 14532448 mariadbd
            1527523  0.4   211 626880 14532448 mariadbd
            1527523  0.4   211 627136 14532448 mariadbd
            1527523  0.4   211 627136 14532448 mariadbd
            1527523  0.4   211 627136 14532448 mariadbd
            1527523  0.4   211 627392 14532448 mariadbd
            1527523  0.4   211 627648 14532448 mariadbd
            1527523  0.4   211 627904 14532448 mariadbd
            1527523  0.4   211 627904 14532448 mariadbd
            1527523  0.4   211 627904 14532448 mariadbd
            1527523  0.4   211 627904 14532448 mariadbd
            1527523  0.4   211 627904 14532448 mariadbd
            1527523  0.4   211 628160 14532448 mariadbd
            1527523  0.4   211 628416 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 628672 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629184 14532448 mariadbd
            1527523  0.4   211 629440 14532448 mariadbd
            1527523  0.4   211 629440 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629696 14532448 mariadbd
            1527523  0.4   211 629952 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630208 14532448 mariadbd
            1527523  0.4   211 630720 14532448 mariadbd
            1527523  0.4   211 630720 14532448 mariadbd
            1527523  0.4   211 630720 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 630976 14532448 mariadbd
            1527523  0.4   211 631232 14532448 mariadbd
            1527523  0.4   211 631232 14532448 mariadbd
            1527523  0.4   211 631232 14532448 mariadbd
            1527523  0.4   211 631232 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631488 14532448 mariadbd
            1527523  0.4   211 631744 14532448 mariadbd
            1527523  0.4   211 631744 14532448 mariadbd
            1527523  0.4   211 631744 14532448 mariadbd
            1527523  0.4   211 631744 14532448 mariadbd
            1527523  0.4   211 631744 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632000 14532448 mariadbd
            1527523  0.4   211 632256 14532448 mariadbd
            1527523  0.4   211 632256 14532448 mariadbd
            1527523  0.4   211 632256 14532448 mariadbd
            1527523  0.4   211 632256 14532448 mariadbd
            1527523  0.4   211 632256 14532448 mariadbd
            1527523  0.4   211 632256 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632512 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 632768 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633024 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            1527523  0.4   211 633280 14532448 mariadbd
            ...
            

            Notice the extra thread also appearing (automatically), and the memory very slowly growing.
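
            To put a rough number on that slow growth, a minimal sketch that samples VmRSS once a minute and prints the delta (PID placeholder as in the monitor above):

             # Minimal sketch: sample VmRSS (KiB) once a minute and print the delta,
             # to quantify the slow growth under the static load (PID as above).
             PID=your_pid
             prev=0
             while true; do
               rss=$(awk '/^VmRSS:/{print $2}' /proc/${PID}/status)
               [ "${prev}" -gt 0 ] && echo "$(date +%T) RSS=${rss}KiB delta=$((rss-prev))KiB"
               prev=${rss}
               sleep 60
             done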

            What is also interesting is that the OS cache-clear command mentioned in the last comment has no effect here. There is actually a small memory increase when it is executed (with no subsequent decline):

            1527523  0.4   211 647360 14532448 mariadbd
            1527523  0.4   211 647616 14532448 mariadbd
            1527523  0.4   211 647616 14532448 mariadbd
            ... repeats the same ...
            

1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633024 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd 1527523 0.4 211 633280 14532448 mariadbd ... Notice the extra thread also appearing (automatically), and the memory very slowly growing. What is also interesting is that the OS cache clear command mentioned in the last comment has no effect here. There is actually a small memory increase when it is executed (with no subequent decline): 1527523 0.4 211 647360 14532448 mariadbd 1527523 0.4 211 647616 14532448 mariadbd 1527523 0.4 211 647616 14532448 mariadbd ... repeats the same ...
            Roel Roel Van de Paar made changes -
            Attachment MDEV-34577_Memory_Load_Graph.png [ 74181 ]
            Roel Roel Van de Paar added a comment - - edited

            An overview of the quick memory growth and then semi-stagnation (graphed result of the output in the last comment; see the attached MDEV-34577_Memory_Load_Graph.png):

            This may also explain the originally mentioned observation "on some larger clusters (>100GB buffer pool size) the leak is fortunately not very apparent."


            Control test: validate that the 200 client connections by themselves are not causing the issue, and that the specific query is what causes the growth:

            QUERY="SELECT 1 FROM information_schema.tables"
            if [ -x bin/mariadb ]; then BIN='bin/mariadb'; else BIN='bin/mysql'; fi
            rm -f ./sml   # the 200 loops below stop once ./sml exists (touch ./sml to end the load)
            printf '%s\n' {1..200} | xargs -P200 -I{} bash -c "while true; do if [ -r ./sml ]; then break; fi; ${BIN} -uroot -e '${QUERY}' -S${PWD}/socket.sock --silent --skip-column-names --unbuffered >/dev/null 2>&1; done &"
            

            The only change in the above is that SELECT index_length FROM information_schema.tables became SELECT 1 FROM information_schema.tables.
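            The outcome below was captured with a simple process monitor. The exact command used is not quoted in this ticket, so the following is only a sketch of one way to produce the same PID / %MEM / THREADS / RSS / VSZ columns:

            # Sketch only: poll the running mariadbd once per second (assumes a single mariadbd instance).
            # nlwp is the thread count; rss and vsz are reported in KB.
            PID="$(pidof mariadbd)"
            echo 'PID  %MEM  THREADS  RSS  VSZ  Command'
            while true; do
              ps -o pid=,pmem=,nlwp=,rss=,vsz=,comm= -p "${PID}"
              sleep 1
            done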
            Outcome:

            CS 11.2.6 12a91b57e27b979819924cf89614e6e51f24b37b (Optimized)

            PID  %MEM  THREADS  RSS  VSZ  Command
            4183094  0.0    14 97728 1366568 mariadbd
            4183094  0.0    14 97728 1366568 mariadbd
            4183094  0.0    14 97728 1366568 mariadbd
            4183094  0.0    14 97728 1366568 mariadbd
            4183094  0.0    14 97728 1366568 mariadbd
            4183094  0.0    14 97728 1366568 mariadbd
            4183094  0.0    75 110784 5382564 mariadbd    # 200 client threads with ongoing control queries commence
            4183094  0.0    77 112576 5514236 mariadbd
            4183094  0.0    77 112576 5514236 mariadbd
            4183094  0.0    77 112576 5514236 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    90 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            4183094  0.0    85 116672 6370104 mariadbd
            ... Remains steady at same numbers repeated for quite some time ...
            ... After a long time memory has only increased a little ...
            4183094  0.0    90 117696 6371736 mariadbd
            4183094  0.0    90 117696 6371736 mariadbd
            4183094  0.0    90 117696 6371736 mariadbd
            4183094  0.0    90 117696 6371736 mariadbd
            

            After a long time, memory has only increased to about 115 MB (117696 KB), compared with our case above, which reaches >600 MB over a similar duration.


            Summary:
            1) There is a sharp memory increase followed by semi-stagnation when using certain I_S queries such as SELECT index_length FROM information_schema.tables.
            2) This issue exists in all MariaDB versions tested, including older ones originally not thought to be affected (like 10.5.5), as well as in MySQL Server 5.5 to 9.1.
            3) The originally described WS increase with stagnant RSS, which matches the docker stats output (increasing memory use) in all subsequent comments/testcases, is apparently a Docker [caching] matter and is otherwise unrelated to MariaDB, as shown in several ways: cache clearing clears the docker stats memory use, and ps and top report a much smaller actual memory use by MariaDB (a sketch of one way to verify this follows the summary).
            4) The Docker issue is confirmed to be sporadic and at times only appears after considerable time has passed, which may explain why certain MariaDB versions were deemed unaffected. That said, it is possible that certain MariaDB versions (like 10.5.7) cause Docker to cache more aggressively, and sooner.
            5) A repeatable testcase for the sharp memory increase was given in the comment above.
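            One way to verify point 3 on a running container (a sketch only; the exact cache-clear command used earlier in this ticket is not quoted here, so the drop_caches call below is an assumption, and the container name is hypothetical):

            # Sketch: compare the container-level reading with the process-level one, then
            # drop the host page cache and re-check. Root on the host is required.
            docker stats --no-stream mariadb_container        # container-reported memory use
            ps -o pid=,rss=,vsz= -p "$(pidof mariadbd)"       # actual process memory (KB)
            sync && echo 3 > /proc/sys/vm/drop_caches         # assumed cache-clear command
            docker stats --no-stream mariadb_container        # reported memory use drops back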

            Actionable:
            1) Find out why there is such a large memory increase for the simple I_S query SELECT index_length FROM information_schema.tables and similar ones, an issue not repeatable with a control query in the same setting.
            2) Find out why the query seems to use temporary tables (the state Removing tmp table is observed regularly; a sketch of a quick check follows this list), and why memory continues to increase very slowly afterwards, without additional load being added.
            3) If possible, fix the issue, which looks to be a bug based on the comparison with the control.
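            For actionable item 2, a quick check of the temporary-table use of the query could look as follows (a sketch only; this was not run as part of this ticket). All statements execute in one client session, so the counters reflect just this query:

            # Sketch only: Created_tmp_tables / Created_tmp_disk_tables / Created_tmp_files show
            # whether internal temporary tables were created and whether they went to disk.
            if [ -x bin/mariadb ]; then BIN='bin/mariadb'; else BIN='bin/mysql'; fi
            ${BIN} -uroot -S${PWD}/socket.sock -e "FLUSH STATUS; SELECT index_length FROM information_schema.tables; SHOW SESSION STATUS LIKE 'Created_tmp%'"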

            serg, returning this to you for developer assignment.

            I am happy to assist with reproducing, if need be, though it is easy to do so as explained in the comment above.

            Roel Roel Van de Paar made changes -
            Assignee Roel Van de Paar [ roel ] Sergei Golubchik [ serg ]
            Roel Roel Van de Paar made changes -

            One item was still missing: proof that the issue is present in MariaDB 10.5.5. Using the same testcase:

            CS 10.5.5 3535b1637fed2df3bcbe9bbebb40e2a3cde081e9 (Optimized)

            PID  %MEM  THREADS  RSS  VSZ  Command
             144620  0.0    14 87048 1310644 mariadbd
             144620  0.0    14 87048 1310644 mariadbd
             144620  0.0    14 87048 1310644 mariadbd
             144620  0.0    28 96520 2232348 mariadbd
             144620  0.1   212 209672 14346436 mariadbd
             144620  0.1   212 217096 14346436 mariadbd
             144620  0.1   212 217096 14346436 mariadbd
             144620  0.1   212 217608 14346436 mariadbd
             144620  0.1   212 218376 14346436 mariadbd
             144620  0.1   212 218376 14346436 mariadbd
             144620  0.1   212 218376 14346436 mariadbd
             144620  0.1   212 218376 14346436 mariadbd
             144620  0.1   212 218376 14346436 mariadbd
             144620  0.1   212 218376 14346436 mariadbd
             144620  0.1   212 218632 14346436 mariadbd
             144620  0.1   212 218632 14346436 mariadbd
             144620  0.1   212 218632 14346436 mariadbd
             144620  0.1   212 218632 14346436 mariadbd
             144620  0.1   212 218632 14346436 mariadbd
             144620  0.1   212 218888 14346436 mariadbd
             144620  0.1   212 218888 14346436 mariadbd
             144620  0.1   212 218888 14346436 mariadbd
             144620  0.1   212 219144 14346436 mariadbd
             144620  0.1   212 219400 14346436 mariadbd
             144620  0.1   212 219400 14346436 mariadbd
             144620  0.1   212 219400 14346436 mariadbd
             144620  0.1   212 219912 14346436 mariadbd
             144620  0.1   212 219912 14346436 mariadbd
             144620  0.1   212 219912 14346436 mariadbd
             144620  0.1   212 219912 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   212 220168 14346436 mariadbd
             144620  0.1   208 220168 14346436 mariadbd
             144620  0.1   208 220168 14346436 mariadbd
             144620  0.1   207 220168 14346436 mariadbd
             144620  0.1   207 220168 14346436 mariadbd
             144620  0.1   207 220424 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 220680 14346436 mariadbd
             144620  0.1   207 221448 14346436 mariadbd
             144620  0.1   207 221448 14346436 mariadbd
             144620  0.1   207 221448 14346436 mariadbd
             144620  0.1   207 221448 14346436 mariadbd
             144620  0.1   207 221704 14346436 mariadbd
             144620  0.1   207 221704 14346436 mariadbd
             144620  0.1   207 221704 14346436 mariadbd
             144620  0.1   207 221704 14346436 mariadbd
             144620  0.1   207 221704 14346436 mariadbd
             144620  0.1   207 221704 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 221960 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222216 14346436 mariadbd
             144620  0.1   207 222472 14346436 mariadbd
             144620  0.1   207 222472 14346436 mariadbd
             144620  0.1   207 222472 14346436 mariadbd
             144620  0.1   207 222472 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223240 14346436 mariadbd
             144620  0.1   207 223496 14346436 mariadbd
             144620  0.1   207 223496 14346436 mariadbd
             144620  0.1   207 223496 14346436 mariadbd
             144620  0.1   207 223752 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   207 225288 14346436 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225544 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 225800 14346736 mariadbd
             144620  0.1   208 226056 14346736 mariadbd
             144620  0.1   208 226312 14346736 mariadbd
             144620  0.1   208 226312 14346736 mariadbd
             144620  0.1   208 226312 14346736 mariadbd
             144620  0.1   208 226312 14346736 mariadbd
             144620  0.1   208 226312 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226568 14346736 mariadbd
             144620  0.1   208 226824 14346736 mariadbd
             144620  0.1   208 226824 14346736 mariadbd
             144620  0.1   208 226824 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227080 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227336 14346736 mariadbd
             144620  0.1   208 227592 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 227848 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228104 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228360 14346736 mariadbd
             144620  0.1   208 228616 14346736 mariadbd
             144620  0.1   208 228616 14346736 mariadbd
             144620  0.1   208 228616 14346736 mariadbd
             144620  0.1   208 228616 14346736 mariadbd
             144620  0.1   208 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228616 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   209 228872 14346736 mariadbd
             144620  0.1   208 228872 14346736 mariadbd
             144620  0.1   208 228872 14346736 mariadbd
             144620  0.1   208 228872 14346736 mariadbd
             144620  0.1   208 228872 14346736 mariadbd
             144620  0.1   208 228872 14346736 mariadbd
             144620  0.1   208 228872 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229640 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 229896 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 230152 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231176 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231432 14346736 mariadbd
             144620  0.1   208 231688 14346736 mariadbd
             144620  0.1   208 231688 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 231944 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232200 14346736 mariadbd
             144620  0.1   208 232456 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   208 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232712 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   209 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 232968 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233224 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
             144620  0.1   208 233480 14346736 mariadbd
            

The issue is less severe (>200 MiB) but similar. Graph: (see attached image)
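
The long sample run above repeats the same RSS value many times, which makes the slow step-wise growth hard to read. Below is a minimal sketch (not the reporter's actual tooling) that condenses such ps-style samples into their distinct RSS transitions. The file name mem_samples.log and the column order PID %MEM THREADS RSS VSZ COMMAND are assumptions based on the output shown above.

import sys

def summarize(path):
    """Print each distinct RSS value and how many consecutive samples held it."""
    last_rss, run_length = None, 0
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Expect: PID %MEM THREADS RSS VSZ COMMAND; skip headers/blank lines.
            if len(fields) < 6 or not fields[3].isdigit():
                continue
            rss_kib = int(fields[3])
            if rss_kib == last_rss:
                run_length += 1
            else:
                if last_rss is not None:
                    print(f"RSS {last_rss} KiB for {run_length} consecutive samples")
                last_rss, run_length = rss_kib, 1
    if last_rss is not None:
        print(f"RSS {last_rss} KiB for {run_length} consecutive samples")

if __name__ == "__main__":
    # File name is hypothetical; pass the actual capture file as an argument.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "mem_samples.log")

Run against a capture like the one above, this would print lines such as "RSS 229640 KiB for N consecutive samples", making the gradual climb from roughly 229 MiB to 233 MiB visible at a glance.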

mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233224 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 
14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd 144620 0.1 208 233480 14346736 mariadbd The issue is less severe (>200MiB) but similar. Graph:

            This is very confusing.

            The original complaint was Kubernetes: working-set memory leak starting with release 10.5.7, and now it's neither a docker issue nor 10.5.7+.

            It kind of looks like this went on a tangent and the I_S query memory growth is not what the original report was about.

            Maybe this docker caching issue was the original problem that was reported?

            Pinimo or frivoire, could you confirm that clearing cache on the host, as Roel has shown above, shrinks working set memory back? Not because I suggest it's an acceptable workaround, but simply to verify we're talking about the same issue.

            serg Sergei Golubchik added a comment -
            serg Sergei Golubchik made changes -
            Status Confirmed [ 10101 ] Open [ 1 ]
            serg Sergei Golubchik made changes -
            Status Open [ 1 ] Needs Feedback [ 10501 ]
            Roel Roel Van de Paar added a comment - - edited

            Some further research:

            • The docker history (i.e. setup commands) for both 10.5.6 and 10.5.7 is basically identical.
            • Comparing SHOW GLOBAL VARIABLES between the two versions, there are a few InnoDB changes:
              Different settings for innodb_lru_scan_depth and innodb_max_dirty_pages_pct, however setting these equal makes no difference.
              There are also two new InnoDB variables (innodb_lru_flush_size, innodb_max_purge_lag_wait) which would seem unlikely to affect things.
            • For the cache drops, values of 1, 2 or 3 can be written to /proc/sys/vm/drop_caches (the exact commands are shown after this list).
              Writing 1 clears the PageCache only, and writing 2 clears dentries and inodes caches only. Writing 3 clears both.
              Just writing 1 does not drop the docker stats memory usage value back to ~10MiB. Writing 2 however does.
              The issue, observed when using Docker in combination with 10.5.7, is thus likely related to dentries and inodes caches.
              Directory entries (dentries) map file names to their corresponding inodes. inodes hold metadata about files like permissions, timestamps etc.
            • This at first glance corresponds with the shared findings that there seems to be some connection to temporary tables usage, and as we saw before temp tables are used.
              However, both 10.5.6 and 10.5.7 show increasing Created_tmp_disk_tables (and other relevant tmp) counters in an equally rising trend.
              In other words, the values match the number of queries/questions on both versions and there is no real difference between the versions.
              I also compared all GLOBAL STATUS counter ratios - both within the same server version and across versions - and they show that everything lines up/matches perfectly.
              There are some very small differences for InnoDB, but in summary there is no perceivable difference in operation between 10.5.6 and 10.5.7.
            • Observed again that sometimes it takes time before the Docker memory stats issue starts happening. It can look "normal" for 5-10 minutes, then start increasing.
            • There is no difference whether the client is started as docker exec -it $CONTAINER_ID bash -c "while :; do mysql... or directly started inside the container as mysql ...
            • Interestingly, the issue is not reproducible with a 10.5.7 build compiled directly inside the 10.5.6 container. The memory used remains very stable when queries run (including overnight).
              I also verified here that all relevant SHOW GLOBAL VARIABLES match, and that all relevant SHOW GLOBAL STATUS counter ratios are about equal; again everything matches.
              Here too the summary is that in all observable 10.5.6 vs 10.5.7 areas (config/status counters), there are no differences between the two. This also matches the I_S query findings.
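
            For reference, the cache-drop variants referred to above (standard Linux /proc interface; run as root on the host):

            sync
            echo 1 > /proc/sys/vm/drop_caches   # page cache only
            echo 2 > /proc/sys/vm/drop_caches   # dentries and inodes only
            echo 3 > /proc/sys/vm/drop_caches   # both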

            Pinimo, frivoire, there was no reply to my question for a month. Let's give it another month and then close the issue as incomplete. But, please, don't hesitate to comment anytime, now or later, whenever you want. If the issue has been closed by then, we'll reopen it.

            serg Sergei Golubchik added a comment -
            julien.fritsch Julien Fritsch made changes -
            Fix Version/s N/A [ 14700 ]
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Fix Version/s 10.11 [ 27614 ]
            Resolution Incomplete [ 4 ]
            Status Needs Feedback [ 10501 ] Closed [ 6 ]
            serg Sergei Golubchik made changes -
            Resolution Incomplete [ 4 ]
            Status Closed [ 6 ] Stalled [ 10000 ]
            serg Sergei Golubchik made changes -
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Fix Version/s 10.11 [ 27614 ]
            Fix Version/s N/A [ 14700 ]
            serg Sergei Golubchik made changes -
            Status Stalled [ 10000 ] Needs Feedback [ 10501 ]
            Pinimo PNM made changes -
            Pinimo PNM made changes -
            Pinimo PNM added a comment - - edited

            Hello Sergei and Roel,

            Thank you for the deep investigation and your time on this ticket. We have not had time recently to dive into this problem again, sorry about that. Here are some answers to your questions, which we prepared with @frivoire:

            could you confirm that clearing cache on the host, as Roel has shown above, shrinks working set memory back?

            We just tested it on our Kubernetes env, and yes, clearing the cache on the host (Kubernetes node) significantly shrinks the working-set size there.

            More precisely, with several tests (drop caches 1 then 2 then 3, or just 2 directly), we have observed that the working-set memory shrinks when we perform "drop caches 2" (and also 3), but not 1, and that the resident-set is never affected.

            See screenshot below:

            The original complaint was Kubernetes: working-set memory leak starting with release 10.5.7 and now it's neither a docker issue nor 10.5.7+.

            What we know for sure (from managing our dozen or so instances in K8S) is that we have observed a big change in memory usage pattern when we upgraded from MariaDB 10.4 to 10.6.

            The precise version (10.5.7) was found during the investigation using the "docker-run + select from I_S" reproduction tests, but maybe we are not reproducing our issue very precisely (as Roel found out, the increase might take longer to be visible sometimes).

            Otherwise, we haven't performed tests outside of container envs (K8s or Docker), so we don't really know, except that we have the issue in our stack (Kubernetes).

            and the I_S query memory growth is not what the original report was about.

            Our initial issue is memory usage (WS) increase, so global on the MariaDB instance.

            And we quickly observed that one simple way to reproduce an increase of WS memory was the I_S query. But the increase of WS is also triggered by other queries, like one "UNION" query on a "normal" table. To show this, we just changed the “I_S in Docker” test and created a new “UNION in Docker” test that might be more useful for you.

            NB: this query also generates an on-disk temp-table (a single one), which we checked through the slow-log beforehand.
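
            As an extra cross-check (a sketch, not part of the original test: it only relies on the standard Created_tmp_disk_tables status counter and on the container/table created by the commands below), the counter can be compared around a single run of the query:

            # Run once the container and test_db.test from the test below exist
            docker exec $CONTAINER_ID mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'"
            docker exec $CONTAINER_ID mysql -e "(SELECT * FROM test_db.test) UNION (SELECT * FROM test_db.test)" > /dev/null
            docker exec $CONTAINER_ID mysql -e "SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables'"   # expected to be +1 on affected versions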

            # Shell 1
             
            MARIADB_VERSION=10.X.XX
            docker run --name mariadb-$MARIADB_VERSION \
              --rm --memory 200M \
              --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=true \
              --env MYSQL_ALLOW_EMPTY_PASSWORD=true \
              mariadb:$MARIADB_VERSION
             
            # Shell 2
             
            MARIADB_VERSION=10.X.XX
            CONTAINER_ID="mariadb-${MARIADB_VERSION}"   
            docker exec -it $CONTAINER_ID bash -c "echo $MARIADB_VERSION; mysql -e \"
            CREATE DATABASE test_db;
            CREATE TABLE test_db.test (
                comment text
            );\"
            while :; do
            mysql -e \"(SELECT * FROM test_db.test) UNION (SELECT * FROM test_db.test);\"; done"
             
            # Shell 3
             
            docker stats
            

            Test results:

            • 10.4.31: OK, no memleak (or a very slow and invisible one)
            • 10.5.7: memleak (at least +15MiB/minute)
            • 10.6.18: memleak (around +2.3MiB/minute)

            On K8S (not shown on graph here), the UNION test-case above also triggers an increase of WS.

            NB: it's around +2.4MiB/minute on 10.6.18 during the same test with the while true loop, which does ~80 qps. So, a basic "rule of 3" says that each on-disk tmp-table generates a ~500 bytes increase of WS.
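
            For reference, the arithmetic behind that estimate (assuming ~80 queries/s, one on-disk tmp-table per query, and the observed ~2.4 MiB/min of WS growth):

            echo "2.4 * 1024 * 1024 / 60 / 80" | bc -l   # ~524 bytes per on-disk tmp-table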

            We hope this helps clarify the issue. I guess the ticket could perhaps be renamed to something like "Queries with on-disk temp-tables cause excessive memory usage", or equivalent.

            Last but not least, have a happy 2025.


            where is your tmpdir located?

            serg Sergei Golubchik added a comment -
            Pinimo PNM added a comment -

            where is your tmpdir located?

            The tmpdir should be the default of the Docker image. I can see it's /tmp in the container from the following:

            MariaDB [(none)]> show variables like 'tmpdir';
            +---------------+-------+
            | Variable_name | Value |
            +---------------+-------+
            | tmpdir        | /tmp  |
            +---------------+-------+
            1 row in set (0.001 sec)
            

            Roel Roel Van de Paar made changes -
            Assignee Sergei Golubchik [ serg ] Roel Van de Paar [ roel ]
            Roel Roel Van de Paar made changes -
            Status Needs Feedback [ 10501 ] Open [ 1 ]
            Roel Roel Van de Paar made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            Roel Roel Van de Paar made changes -
            Status In Progress [ 3 ] In Testing [ 10301 ]
            Roel Roel Van de Paar added a comment - - edited

            Issue confirmed as described above, and as follows:

            # SHELL 1
            MARIADB_VERSION=10.5.7; docker run --name mariadb-$MARIADB_VERSION --rm --memory 200M --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=true --env MYSQL_ALLOW_EMPTY_PASSWORD=true mariadb:$MARIADB_VERSION
            # SHELL 2
            docker stats
            # SHELL 3
            MARIADB_VERSION=10.5.7; CONTAINER_ID="mariadb-${MARIADB_VERSION}"; docker exec -it $CONTAINER_ID bash -c "echo $MARIADB_VERSION; mysql -e \"CREATE DATABASE test_db; CREATE TABLE test_db.test (comment text);\"; while :; do mysql -e \"(SELECT * FROM test_db.test) UNION (SELECT * FROM test_db.test);\"; done"
            # SHELL 4
            MARIADB_VERSION=10.5.7; CONTAINER_ID="mariadb-${MARIADB_VERSION}"; docker exec -it $CONTAINER_ID bash
            # SHELL 4: once inside container do:
            cd /tmp; while true; do ls -l | grep -v 'total'; done
            

            (swap 10.5.7 to 10.4.31 for the 2nd/compare test)

            Leads to:
            10.4.31: memory started at ±37% and stayed stable there within 2%
            10.5.7: memory started at ±43% then increased steadily to 99%, at which point a repeating cycle starts of it dropping (for example to 75%) and growing again. Using sudo bash -c "echo 2 > /proc/sys/vm/drop_caches" immediately returns it to about 30% use (and at times it will start at ±30% and only increase later).
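
            As a side note, one way to watch the dentry/inode slab caches directly on the host while the query loop runs (a rough sketch; /proc/sys/fs/dentry-state is a standard interface, /proc/slabinfo needs root):

            # Fields of dentry-state: nr_dentry, nr_unused, age_limit, want_pages, ...
            watch -n 5 cat /proc/sys/fs/dentry-state
            sudo grep -E '^(dentry|inode_cache) ' /proc/slabinfo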

            Then, importantly, the final command (checking ongoing temp tables in the Docker-specific /tmp dir) shows:
            10.4.31: no action at all (i.e. no temp tables created)
            10.5.7: continual creation of temp tables. Snapshot example:

            # cd /tmp; while true; do ls -l | grep -v 'total'; done
            -rw-rw---- 1 mysql mysql 0 Jan  4 04:29 #sql-temptable-1-8ef8-8ef4.MAD
            -????????? ? ?     ?     ?            ? #sql-temptable-1-8ef8-8ef4.MAI
            ls: '#sql-temptable-1-8efb-8ef7.MAI': No such file or directory
            ls: cannot access '#sql-temptable-1-8efb-8ef7.MAD': No such file or directory
            -????????? ? ?     ?        ?            ? #sql-temptable-1-8efb-8ef7.MAD
            -rw-rw---- 0 mysql mysql 8192 Jan  4 04:29 #sql-temptable-1-8efb-8ef7.MAI
            ls: '#sql-temptable-1-8efc-8ef8.MAI': No such file or directory
            ls: cannot access '#sql-temptable-1-8efc-8ef8.MAD': No such file or directory
            -????????? ? ?     ?        ?            ? #sql-temptable-1-8efc-8ef8.MAD
            -rw-rw---- 0 mysql mysql 8192 Jan  4 04:29 #sql-temptable-1-8efc-8ef8.MAI
            ls: cannot access '#sql-temptable-1-8f05-8f01.MAI': No such file or directory
            -rw-rw---- 1 mysql mysql 0 Jan  4 04:29 #sql-temptable-1-8f05-8f01.MAD
            -????????? ? ?     ?     ?            ? #sql-temptable-1-8f05-8f01.MAI
            ls: '#sql-temptable-1-8f11-8f0d.MAD': No such file or directory
            -rw-rw---- 0 mysql mysql 0 Jan  4 04:29 #sql-temptable-1-8f11-8f0d.MAD
            ls: cannot access '#sql-temptable-1-8f15-8f11.MAI': No such file or directory
            -rw-rw---- 1 mysql mysql 0 Jan  4 04:29 #sql-temptable-1-8f15-8f11.MAD
            -????????? ? ?     ?     ?            ? #sql-temptable-1-8f15-8f11.MAI
            -rw-rw---- 1 mysql mysql 510 Jan  4 04:29 #sql-temptable-1-8f18-8f14.MAI
            ls: '#sql-temptable-1-8f21-8f1d.MAI': No such file or directory
            ls: cannot access '#sql-temptable-1-8f21-8f1d.MAD': No such file or directory
            -????????? ? ?     ?        ?            ? #sql-temptable-1-8f21-8f1d.MAD
            -rw-rw---- 0 mysql mysql 8192 Jan  4 04:29 #sql-temptable-1-8f21-8f1d.MAI
            ls: cannot access '#sql-temptable-1-8f33-8f2f.MAD': No such file or directory
            ...
            

            Note that Docker's /tmp is not the host's /tmp: the /tmp directory inside a container is stored within the container's writable layer, which is managed by Docker's storage driver (e.g. overlay2). The location on the host depends on the configuration and storage driver in use. To get the directory used on the host OS, use:

            docker inspect 945af150caba | grep MergedDir
            

            Where '945af150caba' is the container ID (swap it with yours as per docker stats output). This will give the actual location, for example:

               "MergedDir": "/var/lib/docker/overlay2/5ba81d0b2392def9e7ff0121cd0c7a64678bf9ddbc6740168468ad99d1acc5d3/merged",
            

            Which can then also be used to confirm the temporary files being created directly on the host (add /tmp suffix):

            $ while true; do sudo ls -l /var/lib/docker/overlay2/5ba81d0b2392def9e7ff0121cd0c7a64678bf9ddbc6740168468ad99d1acc5d3/merged/tmp | grep -v 'total'; done
            -rw-rw---- 0 systemd-timesync systemd-timesync 0 Jan  4 15:35 #sql-temptable-1-202ad-202a9.MAD
            -rw-rw---- 1 systemd-timesync systemd-timesync 510 Jan  4 15:35 #sql-temptable-1-202e8-202e4.MAI
            ls: cannot access '/var/lib/docker/overlay2/5ba81d0b2392def9e7ff0121cd0c7a64678bf9ddbc6740168468ad99d1acc5d3/merged/tmp/#sql-temptable-1-202ec-202e8.MAI': No such file or directory
            -rw-rw---- 1 systemd-timesync systemd-timesync 0 Jan  4 15:35 #sql-temptable-1-202ec-202e8.MAD
            -????????? ? ?                ?                ?            ? #sql-temptable-1-202ec-202e8.MAI
            -rw-rw---- 1 systemd-timesync systemd-timesync    0 Jan  4 15:35 #sql-temptable-1-203e4-203e0.MAD
            -rw-rw---- 1 systemd-timesync systemd-timesync 8192 Jan  4 15:35 #sql-temptable-1-203e4-203e0.MAI
            -rw-rw---- 1 systemd-timesync systemd-timesync 510 Jan  4 15:35 #sql-temptable-1-20471-2046d.MAI
            -rw-rw---- 1 systemd-timesync systemd-timesync 510 Jan  4 15:35 #sql-temptable-1-20498-20494.MAI
            ...
            

            I also checked this for 10.4.31 and the above in-container finding was confirmed: 10.4.31 does not create such temp tables:

            $ while true; do sudo ls -l /var/lib/docker/overlay2/aed713298515be1180f59ea34c0af6f9fe891366ed694f54237b32743f8290cf/merged/tmp | grep -v 'total'; done
            

            Remains permanently empty while looping.
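
            As an aside, the docker inspect | grep step above can be combined into a single command through the overlay2 GraphDriver data that docker inspect exposes (a sketch; the variable name is just for illustration):

            TMP_ON_HOST="$(docker inspect -f '{{ .GraphDriver.Data.MergedDir }}' $CONTAINER_ID)/tmp"
            sudo ls -l "$TMP_ON_HOST"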

            Note: As found before/mentioned above, it may take 5 or more minutes before memory growth on 10.5.7 commences, even if temporary tables are being created as soon as the query loop starts.

            Agreed on preliminary summary update proposal with some small changes: Queries with on-disk tmp-tables cause significant additional memory use in Docker

            Next steps:
            1. Confirm whether the issue is Docker only. It could be a docker-only caching issue or even a feature, i.e. Docker may be using additional memory to aid/avoid disk usage (which in itself seems somewhat confirmed by the fact that clearing the cache drops the memory usage).
            2. Determine why a change between 10.4.31 and 10.5.7 now causes on-disk temporary tables to be used for a given set of queries (while they may fit in memory).

            Roel Roel Van de Paar made changes -
            Summary SELECT [index_length/data_length] FROM information_schema.tables causes significant memory use Queries with on-disk tmp-tables cause significant additional memory use in Docker

            Roel, thanks, this is great!

            What if you do something in 10.4.31 that constantly creates tables in /tmp? Like a query with GROUP BY or UNION, or selecting from an information schema table with blobs. If the docker working set grows the same way in 10.4.31, then the reason is some change in the optimizer, and the docker behavior is independent of the MariaDB version and simply indicates that many temp tables are created. Then we can take it out of the equation and look at the change in plans only.

            serg Sergei Golubchik added a comment -
            Pinimo PNM made changes -
            Pinimo PNM added a comment - - edited

            Hi Roel and Sergei, good news here, we think we just identified the problem's direct cause.

            We ran a git bisect using this Docker UNION test and isolated the following buggy commit: be974e56203c723b021a1a5e7719065298d7ceda.
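
            For context, the bisect was roughly of this shape (a sketch rather than our exact script; the tags are the release tags of the MariaDB server repository, and repro-union-docker.sh stands for a hypothetical helper that builds the tree, runs the UNION loop against it in Docker, and exits non-zero when the working-set growth appears):

            git clone https://github.com/MariaDB/server.git && cd server
            git bisect start mariadb-10.5.7 mariadb-10.5.6   # known-bad first, then known-good
            git bisect run ../repro-union-docker.sh          # exit 0 = good, non-zero = bad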

            NB. We ran the test with a slightly different SQL query, to make sure the optimizer always uses temp tables on disk, even in older versions. The updated SQL query was (SELECT * FROM test_db.test) UNION (SELECT * FROM test_db.test WHERE comment LIKE '%');.

            This commit changes the default value of --temp-pool to 0, which means, as far as I understand, that temporary files will no longer be "pooled" and reused once freed, but will be continuously created (and destroyed I suppose).
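
            For anyone who wants to re-test, re-enabling the pool in the Docker reproduction only requires passing the flag through to the server (the official image forwards extra command-line arguments to mariadbd; a sketch based on the Shell 1 command above):

            docker run --name mariadb-$MARIADB_VERSION --rm --memory 200M \
              --env MARIADB_ALLOW_EMPTY_ROOT_PASSWORD=true \
              mariadb:$MARIADB_VERSION --temp-pool=1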

            When we test again, on Kubernetes and Docker, with the --temp-pool=1 flag on mysqld/mariadbd, we do not see the memory leak anymore, no matter what the version, as visible on the following screenshot:

            To sum up the investigations, I think we know the following things:

            1. The memory leak is related to temp-tables on disk: the more frequent they are, the faster the leak, and conversely;
            2. The memory leak happens only when temporary files are not pooled.

            I think it could be interesting to test (bisect) a larger portion of the git history, forcing --temp-pool=0, to see if this "bug" has been present for a long time, and maybe see how it started.

            Have a nice day

            Roel Roel Van de Paar added a comment - - edited

            Great find, thank you Pinimo and frivoire.
            serg this is MDEV-22278.

            Roel Roel Van de Paar made changes -
            Roel Roel Van de Paar added a comment - - edited

            https://dev.mysql.com/worklog/task/?id=8396
            http://dev.mysql.com/doc/refman/5.7/en/server-options.html#option_mysqld_temp-pool
            Introduced in 3.23, became default in 4.0.3 for Linux, deprecated in 5.7.18 and removed in 8.0.

            From the manpage:
            "On Linux, it causes most temporary files created by the server to use a small set of names, rather than a unique name for each new file. This works around a problem in the Linux kernel dealing with creating many new files with different names. With the old behavior, Linux seems to “leak” memory, because it is being allocated to the directory entry cache rather than to the disk cache."

            The relevant MariaDB commits are listed in MDEV-22278


            It's not deprecated and is not going away in MariaDB; you can safely keep it enabled. I've also clarified temp-pool in the documentation.

            serg Sergei Golubchik added a comment -
            Pinimo PNM added a comment -

            Hello serg and Roel, thank you for those inputs.

            I am not quite aware of the QA setup at MariaDB, but I had a question: do you already run some regression tests in the official Docker images, or do you plan to do so in the future? (I was thinking of this especially as the --temp-pool flag was disabled by default following some benchmarking tests, but apparently only non-Docker tests.)

            Indeed, I suspect the Docker packaging is widely used to distribute the software, as it has become an industry standard. It would also be consistent with the use of the mariadb-operator in Kubernetes, and would increase the product's quality coverage across upgrades.

            Also, out of curiosity, what do you see as the next step? (Although I have to admit I will probably be out of it, for bandwidth reasons.) Do you think it is better:

            1. to investigate the root memory issue under Docker (e.g. with a new git bisect)?
            2. or to find another resolution for the non-Docker mutex-locking issue than deactivating temp-pool?
            3. or something else?

            Thank you, have a nice day.

            Pinimo PNM made changes -
            Description h3. Summary

            On many MariaDB Galera clusters deployed in Kubernetes, after migrating from 10.4 to 10.6, we observed a brutal and consistent change in the pattern of working-set memory (WS). The resident state size memory (RSS), that used to be very correlated to the WS, is stable; only the WS is affected by the leak. This behaviour consistently reproduces all the time: after the usual warmup phase, a slow leak starts, with WS slowly diverging from the RSS.

            Working-set memory is what is used by Kubernetes to trigger out-of-memory pod restarts. That is why this leak is potentially impactful for us.

            We have investigated on Kubernetes side too (and are open to suggestions of course), however, so far we could not identify why it happened after the upgrade from 10.4.31 to >=10.6.17. The situation reproduced on every cluster upgraded so far. However, on some larger clusters (>100GB buffer pool size) the leak is fortunately not very apparent.

            The leak also takes place on one 10.11 cluster. That cluster was never upgraded but was created directly in 10.11.

            Our main expectation is the following: gaining insights about any low-level changes that have been introduced between the latest 10.4 and 10.6, and that would be likely to trigger this behavior.

            We found it seems to be related to temporary tables, but we could not identify any specific new usage or major changes between the versions.

            It could be interesting if you know if there were significant changes in how temporary tables are managed. For instance, you might know if the pattern of {{fsync}}s changed compared to 10.4, or not at all.

            I'm attaching a screenshot of our memory monitoring right after the upgrade.


            h3. Technical investigation

            h4. Stable system monitoring variables

            By monitoring /sys/fs/cgroup/memory/memory.stat (cgroupv1), here's what we see:

            * RSS remain stable. When taking continuous traces, it grows while the buffer pool is warming, after that it remains stable as expected. We do not expect any leak there;
            * {{anon}} allocations do not show any correlation as well;
            * {{mapped_files}} are strictly stable, no variations over from day to day;
            * the cache takes longer to stabilize but its increase does not seem to match working-set memory;
            * {{lsof}} outputs are stable over time, we do not see any increase of lines returned;
            * performance schemas memory table are stable over time, we do not see any increase in current memory used.

            h4. Increasing system variable: active files

            The only significant change we noticed was a steep and constant increase of {{active_file}}.

            Starting from a warm MariaDB with an uptime of 346868 seconds (4 days), over the next 4 days {{active_file}} grows quickly

            {code:text}
            DATE: Mon Apr 8 16:32:38 UTC 2024
            | Uptime | 346868 |
            active_file 864256

            DATE: Tue Apr 9 10:00:53 UTC 2024
            | Uptime | 409763 |
            active_file 2609152

            DATE: Thu Apr 11 12:45:30 UTC 2024
            | Uptime | 592440 |
            active_file 36868096
            {code}

            {{active_file}} counts toward the workingset memory calculation (https://github.com/kubernetes/kubernetes/issues/43916).

            h3. MariaDB 10.4 vs 10.6 comparison

            When we compared running 10.4 and 10.6 clusters, here's what we found:

            * In both images, only {{innodb_flush_method = O_direct}} is used. It's by default with mariadb docker images. Method {{fsync}} would have explained a different memory usage.
            * {{innodb_flush_log_at_trx_commit = 2}}, both before and after the upgrade; we did not try setting it to {{1}}, to avoid impact
            * both use {{jemalloc}} as {{malloc}} lib (note: using {{tcmalloc}} with 10.6 was tested and does not solve the leak).
            * {{galera.cache}} has not been changed (and {{mmap}} files are stable); we don't see usage of additional {{gcache}} pages
            * there are no usages of explicit temporary tables, no DDLs
            * {{innodb_adaptive_hash_index}} was tried both disabled and enabled, it did not seem to improve the issue. (It was disabled by default in 10.6, so we tried to match the 10.4 tuning.)
            * both 10.4 and 10.6 workload have a high buffer pool miss rate: {{Buffer pool hit rate 936 / 1000, young-making rate 36 / 1000 not 126 / 1000}}.
             
            h4. Differences in raw parameters

            {code:text}
            Variable                     /tmp/mariadb_104   /tmp/mariadb_106
            =========================    ================   ================
            back_log                     70                 80
            bulk_insert_buffer_size      16777216           8388608
            concurrent_insert            ALWAYS             AUTO
            connect_timeout              5                  10
            innodb_adaptive_hash_i...    ON                 OFF
            innodb_change_buffering      all                none
            innodb_checksum_algorithm    crc32              full_crc32
            innodb_lru_scan_depth        1024               1536
            innodb_max_dirty_pages...    75.000000          90.000000
            innodb_purge_batch_size      300                1000
            max_recursive_iterations     4294967295         1000
            max_relay_log_size           104857600          1073741824
            pseudo_thread_id             45                 29
            slave_parallel_mode          conservative       optimistic
            sort_buffer_size             4194304            2097152
            table_open_cache             400                2000
            thread_cache_size            100                151
            wait_timeout                 600                28800
            {code}

            Some of those variables had new default values in 10.6, but they were already tuned explicitly in the custom {{my.cnf}}.
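
            A comparison like the table above can be produced by diffing two {{SHOW GLOBAL VARIABLES}} dumps; a minimal sketch, assuming {{/tmp/mariadb_104}} and {{/tmp/mariadb_106}} each contain one tab-separated "variable<TAB>value" pair per line:

            {code:python}
            # Minimal sketch: print the variables whose values differ between two
            # SHOW GLOBAL VARIABLES dumps. Assumption: each dump file contains one
            # tab-separated "variable<TAB>value" pair per line.
            def load_vars(path):
                with open(path) as f:
                    return dict(line.rstrip("\n").split("\t", 1) for line in f if "\t" in line)

            v104 = load_vars("/tmp/mariadb_104")
            v106 = load_vars("/tmp/mariadb_106")

            print(f"{'Variable':<28}{'10.4':<20}{'10.6':<20}")
            for name in sorted(v104.keys() & v106.keys()):
                if v104[name] != v106[name]:
                    print(f"{name:<28}{v104[name]:<20}{v106[name]:<20}")
            {code}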

            Both 10.4 and 10.6 are running in the same Kubernetes cluster.

            h4. Temporary tables

            So far, we have only found that reducing the amount of implicit temporary table usage reduces the "leak". This does not remove the leak, but it makes it progress more slowly.
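
            One way to quantify how much of the workload spills to implicit on-disk temporary tables is to watch the {{Created_tmp_disk_tables}} status counter over time. A minimal sketch (assuming the {{mariadb}} client is on the PATH and credentials come from an option file):

            {code:python}
            # Minimal sketch: poll Created_tmp_tables / Created_tmp_disk_tables and print
            # per-interval deltas, to see how many implicit tmp tables are created and
            # how many of them go to disk. Assumptions: the `mariadb` client is on the
            # PATH and credentials come from an option file (e.g. ~/.my.cnf).
            import subprocess
            import time

            def tmp_table_counters():
                out = subprocess.run(
                    ["mariadb", "-N", "-B", "-e",
                     "SHOW GLOBAL STATUS LIKE 'Created_tmp%tables'"],
                    capture_output=True, text=True, check=True,
                ).stdout
                return {name: int(value)
                        for name, value in (line.split("\t") for line in out.splitlines() if line)}

            prev = tmp_table_counters()
            while True:
                time.sleep(60)
                cur = tmp_table_counters()
                print({name: cur[name] - prev[name] for name in cur})
                prev = cur
            {code}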

            h3. Things we did not try

            * comparing {{pmap}} over time;
            * {{jemalloc}} profiling (as RSS is stable);
            * {{strace}}, {{perf}}, or {{eBPF}}-based tools: without a clear plan of what to track, we skipped these as they can be costly;
            * entirely removing the temporary tables used in a test cluster.


            h2. {color:red}TL;DR: workaround{color}

            To work around this issue quickly, it is enough to add the {{--temp-pool=1}} option to the {{mariadbd}} (formerly {{mysqld}}) server command line, for example as an extra argument on the container command, which the official Docker images pass through to the server.


            ----


            _Archived environment (no longer applicable) label:_
            {noformat}
            Kubernetes cluster, managed by GCP (GKE cluster)
            Kubernetes version: 1.28.9-gke.1289000.
            Dedicated nodepool with cgroup v1 (switching to cgroup v2 does not resolve), virtual machine type n2d-highmem-32.
            Docker images: from MariaDB, e.g. mariadb:10.6.18 (Docker Hub).
            Other: uses Galera replication. No Kubernetes operators.
            {noformat}

            Pinimo Thank you!

            We run sanity-check-level testing for Docker (ref https://buildbot.mariadb.org/#/builders/amd64-rhel8-dockerlibrary). Running regression testing inside Docker (similar to how we run regression testing elsewhere) is indeed a worthwhile addition; this has been scoped but not implemented yet. danblack FYI.

            As for the issue itself, it looks to be restricted to Docker and/or the Linux kernel. MariaDB (and previously MySQL) allows changing the {{--temp-pool}} option to cater for/work around this issue. As Sergei mentioned, we will also keep the {{--temp-pool}} option in MariaDB, providing a long-term solution.

            Thank you for your assistance, and I will now go ahead and close the issue. Let us know if you have any further questions.

            danblack Daniel Black added a comment -

            > 2. or to find another resolution to the non-Docker mutex locking issue than the temp-pool deactivation?

            MDEV-15584 added {{O_TMPFILE}} support to the internal temporary file creation function {{create_temp_file}}; however, Aria does not use it.

            Based on the theory that it is all filename-driven mutexes within overlayfs2 (more likely than Docker itself, but not really relevant), {{open(O_TMPFILE)}} does not actually use filenames.

            Container overlayfs2 support for {{O_TMPFILE}} is there:

            {code:text}
            [pid   363] openat(AT_FDCWD, "/tmp", O_RDWR|O_TRUNC|O_CLOEXEC|O_TMPFILE, 0660) = 8
            {code}

            It is supported on overlayfs2, so let's see what an O_TMPFILE-based Aria implementation looks like.
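
            To illustrate the difference (a minimal sketch, not MariaDB code): with {{O_TMPFILE}}, the kernel creates an anonymous file directly in the target directory, so no directory entry is created, no per-filename lookups occur, and no {{unlink()}} is needed afterwards.

            {code:python}
            # Minimal sketch (not MariaDB code): create an unnamed temporary file with
            # O_TMPFILE, so no filename lookups or unlink() are involved.
            # Assumptions: Linux and a filesystem that supports O_TMPFILE (overlayfs does).
            import os

            fd = os.open("/tmp", os.O_TMPFILE | os.O_RDWR, 0o660)   # no name is ever created
            try:
                os.write(fd, b"scratch data")                        # use it like any file
                os.lseek(fd, 0, os.SEEK_SET)
                print(os.read(fd, 12))
            finally:
                os.close(fd)                                         # storage reclaimed on close
            {code}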

            danblack Daniel Black added a comment -

            An implicit temporary table currently requires 11 filename-based lookups to achieve the result. Because of some deep call stacks it was not easy to construct and test the performance of just using O_TMPFILE, so a new task was created: MDEV-35860.

            {code:text}
            843157 openat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAI", O_RDWR|O_CREAT|O_TRUNC|O_NOFOLLOW|O_CLOEXEC, 0660) = 49
            843157 openat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAD", O_RDWR|O_CREAT|O_TRUNC|O_NOFOLLOW|O_CLOEXEC, 0660) = 50
            843157 readlink("/tmp/#sql-temptable-cdd6a-3-0.MAI", 0x7fd25c0b7400, 1023) = -1 EINVAL (Invalid argument)
            843157 newfstatat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAI", {st_mode=S_IFREG|0660, st_size=8192, ...}, AT_SYMLINK_NOFOLLOW) = 0
            843157 openat(49, "#sql-temptable-cdd6a-3-0.MAI", O_RDWR|O_NOFOLLOW|O_CLOEXEC) = 50
            843157 newfstatat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAD", {st_mode=S_IFREG|0660, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
            843157 openat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAD", O_RDWR|O_CLOEXEC) = 49
            843157 newfstatat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAI", {st_mode=S_IFREG|0660, st_size=8192, ...}, AT_SYMLINK_NOFOLLOW) = 0
            843157 unlink("/tmp/#sql-temptable-cdd6a-3-0.MAI") = 0
            843157 newfstatat(AT_FDCWD, "/tmp/#sql-temptable-cdd6a-3-0.MAD", {st_mode=S_IFREG|0660, st_size=0, ...}, AT_SYMLINK_NOFOLLOW) = 0
            843157 unlink("/tmp/#sql-temptable-cdd6a-3-0.MAD") = 0
            {code}

            serg asked me to add {{--temp-pool=1}} as the default in the Docker Official Images of MariaDB. I'm OK with this, preferably as a temporary mechanism (if something can be fixed in the server).

            > The memory leak happens only when temporary files are not pooled.

            Isn't the 10.6.18 / 10.6.19 test above showing the default is still a problem?


            Well, I only asked to consider it, I don't actually know what's better in this case.

            For me it's more important to understand why this happens so that it could be fixed for good.
            So, overlayfs2, huh? Is /tmp also on it? Feels unnatural, /tmp is totally local within a container, not on a host.

            Pinimo PNM added a comment -

            Thanks to all, this looks like promising research. I will now have to step down from participating in the issue investigation, but I want to heartily thank serg and Roel on behalf of the BlaBlaCar team for your support.

            Take care,
            PNM
