MariaDB Server: MDEV-31772

Innodb_rows_* missing on mariadb-server 10.11.3

Details

    • Type: Bug
    • Status: Stalled
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 10.10 (EOL), 10.11, 11.0 (EOL)
    • Fix Version/s: 10.11
    • Component/s: None
    • Labels: None
    • Environment: OS: Debian 12 (Bookworm)
      MariaDB Server: 10.11.3

    Description

      On MariaDB Server 10.11.3 (the default version on Debian 12), the Innodb_rows_* status variables are missing from the output of SHOW GLOBAL STATUS:

      $ sudo apt install mariadb-server
      ...
      $ mysql --version
      mysql  Ver 15.1 Distrib 10.11.3-MariaDB, for debian-linux-gnu (x86_64) using  EditLine wrapper
      $ sudo mysql -Bse 'SHOW GLOBAL STATUS;' | grep Innodb_rows
      $
      

      This doesn't occur on 10.5.19 (the default version on Debian 11):

      $ sudo apt install mariadb-server
      ...
      $ mysql --version
      mysql  Ver 15.1 Distrib 10.5.19-MariaDB, for debian-linux-gnu (x86_64) using  EditLine wrapper
      $ sudo mysql -Bse 'SHOW GLOBAL STATUS;' | grep Innodb_rows
      Innodb_rows_deleted     0
      Innodb_rows_inserted    0
      Innodb_rows_read        0
      Innodb_rows_updated     0
      $
      

      I tried creating a dummy database with a dummy table and dummy rows, but I still don't see the variables.

      Likewise, SHOW ENGINE INNODB STATUS doesn't show row operations on 10.11.3:

      $ sudo mysql -Bse 'SHOW ENGINE INNODB STATUS;' | sed 's/\\n/\n/g'
      ...
      --------------
      ROW OPERATIONS
      --------------
      0 read views open inside InnoDB
      state: sleeping
      ----------------------------
      END OF INNODB MONITOR OUTPUT
      ============================
      

      But it does on 10.5.19:

      $ sudo mysql -Bse 'SHOW ENGINE INNODB STATUS;' | sed 's/\\n/\n/g'
      ...
      --------------
      ROW OPERATIONS
      --------------
      0 read views open inside InnoDB
      Process ID=0, Main thread ID=0, state: sleeping
      Number of rows inserted 0, updated 0, deleted 0, read 0
      0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
      Number of system rows inserted 0, updated 0, deleted 0, read 0
      0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
      ----------------------------
      END OF INNODB MONITOR OUTPUT
      ============================
      

      Did something relevant change between those two versions? How do I get those values on 10.11.3? I don't see any callouts in the docs about those values being removed or renamed after 10.5.19.

    Activity

            In https://github.com/MariaDB/server/pull/2959 I tried a few ideas to reduce the performance impact of this, including two user-visible ones:

            1. not updating the counters at the end of every statement, and
            2. not updating (or reinstating) Innodb_rows_read.

            Would such tweaks be acceptable?

            marko Marko Mäkelä added a comment

            1. Acceptable for me, as long as it is more or less accurate.
            2. I do not understand this one. Does it mean that Innodb_rows_read would NOT be more or less accurate? Then I do not like it.

            oli Oli Sennhauser added a comment

            A performance regression of roughly 15% was observed with larger numbers of concurrent connections for a variant that would first update the counters in the trx_t object and, on commit, update the global counters in trx_sys (in a race-condition-prone way, without acquiring a mutex). I think that this was on a 2-way NUMA system where the number of connections exceeded the number of hardware threads on one NUMA node.

            My last resort is a variant that would only add the INSERT, DELETE and UPDATE counters but not Innodb_rows_read. In this case, we can avoid updating trx_sys on the commit of read-only or non-locking transactions. If there is a similar performance regression also for that, I don’t think we can bring back the counters in this form. What could be done is to make INFORMATION_SCHEMA.INNODB_MONITOR report the Handler_ counters, to be somewhat compatible with existing monitoring solutions. This would not be exactly the same, because those counters also cover tables of other storage engines, possibly including some internal buffers that are used for computing a JOIN or for sorting when no index is applicable to ORDER BY or GROUP BY.
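The commit-time optimization described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the server's actual code; all names (GlobalRowStats, Trx, etc.) are hypothetical stand-ins for trx_sys and trx_t:

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical global counters, analogous to what trx_sys would hold.
struct GlobalRowStats {
  std::atomic<uint64_t> rows_inserted{0};
  std::atomic<uint64_t> rows_updated{0};
  std::atomic<uint64_t> rows_deleted{0};
};

GlobalRowStats global_stats;

// Per-transaction counters, accumulated locally with no synchronization.
struct Trx {
  uint64_t inserted = 0, updated = 0, deleted = 0;

  // On commit, touch the shared counters only if the transaction
  // actually modified rows: read-only or non-locking transactions
  // skip the global update entirely, avoiding cache-line contention.
  void commit() {
    if (inserted | updated | deleted) {
      global_stats.rows_inserted.fetch_add(inserted, std::memory_order_relaxed);
      global_stats.rows_updated.fetch_add(updated, std::memory_order_relaxed);
      global_stats.rows_deleted.fetch_add(deleted, std::memory_order_relaxed);
    }
    inserted = updated = deleted = 0;
  }
};
```

Under this scheme a pure SELECT workload never writes to the shared cache lines at commit, which is exactly the case the last-resort variant tries to keep cheap.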

            marko Marko Mäkelä added a comment

            I think that we need a product manager decision on this. While adding everything other than Innodb_rows_read might be acceptable from the performance point of view, it could be better to modify the SQL-layer counter code so that there are separate Handler_ counters for each storage engine (and the current Handler_ counters would be reported as the sum of them).

            marko Marko Mäkelä added a comment

            Specifically, we could refactor handler::update_global_table_stats() and handler::increment_statistics() so that per-storage-engine statistics would be maintained instead of global statistics. Currently, handler::update_global_table_stats() updates some per-table statistics but no per-engine statistics. It also apparently lumps inserts, updates and deletes together into a single per-table rows_changed count.

            I think that we should update the per-engine statistics in a critical section of some mutex (such as LOCK_global_table_stats). Instead of invoking operations like increment_statistics(&SSV::ha_write_count) (incrementing global variables in a race-condition-prone way), we could do what handler::update_rows_read() does and update a per-handler counter that would later (at the end of the statement or transaction) be aggregated into a global counter.

            Once this is in place, the per-engine data could be exposed via PERFORMANCE_SCHEMA. For backward compatibility, a legacy interface INFORMATION_SCHEMA.INNODB_MONITOR could be reinstated.

            As far as I can tell, such a solution should incur minimal overhead. Possibly it could result in a minor performance improvement, because the global Handler_delete, Handler_update, and Handler_write counters would be updated less frequently, with less cache line contention.
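The refactoring sketched in this comment could look roughly like the following. All names here are hypothetical stand-ins, not the server's actual code; only LOCK_global_table_stats and the Handler_ counter semantics come from the discussion above:

```cpp
#include <cstdint>
#include <map>
#include <mutex>
#include <string>

// Hypothetical per-engine statistics, protected by a single mutex
// in the spirit of LOCK_global_table_stats.
struct EngineStats {
  uint64_t ha_write_count = 0;
  uint64_t ha_update_count = 0;
  uint64_t ha_delete_count = 0;
};

std::mutex engine_stats_mutex;                    // stand-in for LOCK_global_table_stats
std::map<std::string, EngineStats> engine_stats;  // keyed by engine name

// Stand-in for a handler: counters are bumped locally (no shared-memory
// writes per row), then flushed once at the end of the statement
// inside the critical section.
struct Handler {
  std::string engine;
  EngineStats local;  // per-handler counters, no synchronization needed

  explicit Handler(std::string e) : engine(std::move(e)) {}

  void write_row()  { ++local.ha_write_count; }
  void update_row() { ++local.ha_update_count; }
  void delete_row() { ++local.ha_delete_count; }

  void end_of_statement() {
    std::lock_guard<std::mutex> guard(engine_stats_mutex);
    EngineStats &s = engine_stats[engine];
    s.ha_write_count  += local.ha_write_count;
    s.ha_update_count += local.ha_update_count;
    s.ha_delete_count += local.ha_delete_count;
    local = EngineStats{};
  }
};

// The legacy global Handler_write counter would be reported as the sum
// over all engines, preserving its existing external meaning.
uint64_t global_handler_write() {
  std::lock_guard<std::mutex> guard(engine_stats_mutex);
  uint64_t sum = 0;
  for (const auto &kv : engine_stats) sum += kv.second.ha_write_count;
  return sum;
}
```

Because each row operation only touches a per-handler counter, the shared cache lines are written once per statement rather than once per row, which is where the suggested minimal overhead (or minor improvement) would come from.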

            marko Marko Mäkelä added a comment

            People

              Assignee: Unassigned
              Reporter: jefferbrecht Jeff Erbrecht
              Votes: 1
              Watchers: 9

              Dates

                Created:
                Updated:
