MariaDB Server / MDEV-29082

InnoDB: Failing assertion: !cursor->index->is_committed()

Details

    Description

      Version: '10.5.16-MariaDB-log'  socket: '/run/user/1208/akonadi/mysql.socket'  port: 0  MariaDB package
      2022-07-11 19:46:16 0x7fd614e88700  InnoDB: Assertion failure in file /home/abuild/rpmbuild/BUILD/mariadb-10.5.16/storage/innobase/row/row0ins.cc line 218
      InnoDB: Failing assertion: !cursor->index->is_committed()
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
      InnoDB: about forcing recovery.
      220711 19:46:16 [ERROR] mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
       
      To report this bug, see https://mariadb.com/kb/en/reporting-bugs
       
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed, 
      something is definitely wrong and this may fail.
       
      Server version: 10.5.16-MariaDB-log
      key_buffer_size=16384
      read_buffer_size=131072
      max_used_connections=34
      max_threads=258
      thread_count=35
      It is possible that mysqld could use up to 
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 567973 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
       
      Thread pointer: 0x7fd554000c58
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7fd614e87c58 thread_stack 0x49000
      ??:0(my_print_stacktrace)[0x562e8b0ca73d]
      ??:0(handle_fatal_signal)[0x562e8ab42495]
      ??:0(__restore_rt)[0x7fd612e168c0]
      ??:0(__GI_raise)[0x7fd61184acdb]
      ??:0(__GI_abort)[0x7fd61184c375]
      /usr/sbin/mysqld(+0x6464d8)[0x562e8a8464d8]
      ??:0(std::unique_lock<std::mutex>::unlock())[0x562e8aebe7d0]
      ??:0(std::unique_lock<std::mutex>::unlock())[0x562e8aec3b5b]
      ??:0(std::unique_lock<std::mutex>::unlock())[0x562e8aec4058]
      ??:0(std::unique_lock<std::mutex>::unlock())[0x562e8aed6121]
      ??:0(wsrep_notify_status(wsrep::server_state::state, wsrep::view const*))[0x562e8ae16f29]
      ??:0(handler::ha_write_row(unsigned char const*))[0x562e8ab4ed8f]
      ??:0(write_record(THD*, TABLE*, st_copy_info*, select_result*))[0x562e8a90e77d]
      ??:0(mysql_insert(THD*, TABLE_LIST*, List<Item>&, List<List<Item> >&, List<Item>&, List<Item>&, enum_duplicates, bool, select_result*))[0x562e8a918f53]
      ??:0(mysql_execute_command(THD*))[0x562e8a944dad]
      ??:0(Prepared_statement::execute(String*, bool))[0x562e8a960ae5]
      ??:0(Prepared_statement::execute_loop(String*, bool, unsigned char*, unsigned char*))[0x562e8a960c5c]
      ??:0(Prepared_statement::execute_bulk_loop(String*, bool, unsigned char*, unsigned char*))[0x562e8a961aaa]
      ??:0(mysqld_stmt_execute(THD*, char*, unsigned int))[0x562e8a961bca]
      ??:0(dispatch_command(enum_server_command, THD*, char*, unsigned int, bool, bool))[0x562e8a940175]
      ??:0(do_command(THD*))[0x562e8a93ebcf]
      ??:0(do_handle_one_connection(CONNECT*, bool))[0x562e8aa365c0]
      ??:0(handle_one_connection)[0x562e8aa36994]
      ??:0(MyCTX_nopad::finish(unsigned char*, unsigned int*))[0x562e8ad68ab3]
      ??:0(start_thread)[0x7fd612e0a6ea]
      ??:0(__GI___clone)[0x7fd611917a8f]
       
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort.
      Query (0x7fd554010280): INSERT INTO PimItemTable (rev, remoteId, remoteRevision, gid, collectionId, mimeTypeId, datetime, atime, dirty, size) VALUES (0, '3496', NULL, NULL, 14, 5, TIMESTAMP'2022-07-11 17:46:16.327000', TIMESTAMP'2022-07-11 17:46:16.327000', 0, 766)
      

      | PimItemTable | CREATE TABLE `pimitemtable` (
        `id` bigint(20) NOT NULL AUTO_INCREMENT,
        `rev` int(11) NOT NULL DEFAULT 0,
        `remoteId` varbinary(1024) DEFAULT NULL,
        `remoteRevision` varbinary(255) DEFAULT NULL,
        `gid` varbinary(255) DEFAULT NULL,
        `collectionId` bigint(20) DEFAULT NULL,
        `mimeTypeId` bigint(20) DEFAULT NULL,
        `datetime` timestamp NOT NULL DEFAULT current_timestamp(),
        `atime` timestamp NOT NULL DEFAULT current_timestamp(),
        `dirty` tinyint(1) DEFAULT NULL,
        `size` bigint(20) NOT NULL DEFAULT 0,
        PRIMARY KEY (`id`),
        KEY `PimItemTable_collectionIndex` (`collectionId`),
        KEY `PimItemTable_mimeTypeIndex` (`mimeTypeId`),
        KEY `PimItemTable_gidIndex` (`gid`),
        KEY `PimItemTable_ridIndex` (`remoteId`),
        KEY `PimItemTable_idSortIndex` (`id`),
        CONSTRAINT `pimitemtable_ibfk_1` FOREIGN KEY (`collectionId`) REFERENCES `collectiontable` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
        CONSTRAINT `pimitemtable_ibfk_2` FOREIGN KEY (`mimeTypeId`) REFERENCES `mimetypetable` (`id`) ON UPDATE CASCADE
      ) ENGINE=InnoDB AUTO_INCREMENT=92621 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC |
      


          Activity

            Thank you for the report. Such errors often affect indexed virtual columns. I do not see any obvious use of indexed virtual columns in the table definition. If the PimItemTable_gidIndex were a UNIQUE key, it could use a virtual column (MDEV-371).

            Thus, the most likely explanation would be an elusive bug in the InnoDB change buffer that we have been unable to reproduce, and thus analyze and fix. The change buffer was disabled by default some time ago in MDEV-27734, but that will only prevent further writes via the change buffer. Rebuilding the secondary indexes (or the entire table) should help:

            SET GLOBAL innodb_change_buffering=none;
            ALTER TABLE pimitemtable FORCE;
            
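Since the same latent corruption can affect more than one table, the per-table rebuild can be extended to every InnoDB table in the schema. A sketch (the schema name `akonadi` is an assumption for this Akonadi setup; adjust it, and feed the generated statements back to the server, e.g. via the mysql client):

```sql
-- Sketch: generate an ALTER TABLE ... FORCE rebuild statement for every
-- InnoDB table in one schema ('akonadi' is an assumed schema name).
-- Execute the emitted statements afterwards to rebuild each table.
SELECT CONCAT('ALTER TABLE `', TABLE_SCHEMA, '`.`', TABLE_NAME, '` FORCE;')
       AS rebuild_stmt
FROM information_schema.TABLES
WHERE ENGINE = 'InnoDB'
  AND TABLE_SCHEMA = 'akonadi';
```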

            I see that this server-crashing assertion was not replaced with some less intrusive error handling in MDEV-13542. That could be done rather easily.

marko Marko Mäkelä added a comment

Thanks, this has helped. After running your commands it crashed on a second table, but I was able to fix that with the same commands. The other affected table was this:

            | PartTable | CREATE TABLE `parttable` (
              `id` bigint(20) NOT NULL AUTO_INCREMENT,
              `pimItemId` bigint(20) NOT NULL,
              `partTypeId` bigint(20) NOT NULL,
              `data` longblob DEFAULT NULL,
              `datasize` bigint(20) NOT NULL,
              `version` int(11) DEFAULT 0,
              `storage` tinyint(4) DEFAULT 0,
              PRIMARY KEY (`id`),
              UNIQUE KEY `PartTable_pimItemIdTypeIndex` (`pimItemId`,`partTypeId`),
              KEY `PartTable_pimItemIdSortIndex` (`pimItemId`),
              KEY `PartTable_partTypeIndex` (`partTypeId`),
              CONSTRAINT `parttable_ibfk_1` FOREIGN KEY (`pimItemId`) REFERENCES `pimitemtable` (`id`) ON DELETE CASCADE ON UPDATE CASCADE,
              CONSTRAINT `parttable_ibfk_2` FOREIGN KEY (`partTypeId`) REFERENCES `parttypetable` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
            ) ENGINE=InnoDB AUTO_INCREMENT=295447 DEFAULT CHARSET=utf8 |
            
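After a rebuild, one way to gain some confidence that the secondary indexes again agree with the clustered index is a CHECK TABLE pass. A sketch (EXTENDED performs a full scan and can be slow on large tables):

```sql
-- Sketch: verify the rebuilt tables; EXTENDED checks every row against
-- every index, so expect it to take a while on big tables.
CHECK TABLE pimitemtable, parttable EXTENDED;
```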

Dakon Rolf Eike Beer added a comment

            While we could dismiss this bug as something that was caused by the known but irreproducible bug in the change buffer, the bug MDEV-29130 that you filed later may or may not be due to something else. Depending on the update and backup/restore history of the database, it might be explained by MDEV-24449.

            Dakon, can you please try to clarify the history of the database?

            1. What was the first version of MySQL or MariaDB with which the database was created?
            2. Was Galera snapshot transfer ever used?
            3. Were data files ever restored from a physical backup (not logical, as in SQL dump)? If yes, how were the backups created? Backup tool name and version? File system snapshot?
marko Marko Mäkelä added a comment

I can't tell exactly when this database was created, but the oldest entry I found in mysql.err.old is this:

            2020-07-09  8:14:03 139992265905024 [Note] InnoDB: 5.7.30 started; log sequence number 222174628
            

I don't think Akonadi uses Galera for anything. The files were never restored from a backup or anything similar; at most, the LVM volume was moved from a different host.

Dakon Rolf Eike Beer added a comment

            Dakon, thank you. If crash recovery or xtrabackup or file system snapshot based copying was ever used on MySQL 5.7, this corruption could have been caused by the race condition that was fixed in MariaDB in MDEV-24449.

            I think that MDEV-24449 could be the true cause of MySQL Bug #61104. That bug was never really fixed in MySQL, but only worked around.

marko Marko Mäkelä added a comment

People

  marko Marko Mäkelä
  Dakon Rolf Eike Beer
  Votes: 0
  Watchers: 2