MariaDB Server / MDEV-19081

DB crashes periodically with InnoDB: Assertion failure



    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Incomplete
    • Affects Version/s: 10.2.22, 10.2.23
    • Fix Version/s: N/A


      The DB service stops from time to time, at irregular intervals (mostly between 11pm and 9am by our estimate), with the following error:

      2019-03-28 23:31:08 0x7ff4dc62d700  InnoDB: Assertion failure in file /home/buildbot/buildbot/build/mariadb-10.2.23/storage/innobase/rem/rem0rec.cc line 574
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
      InnoDB: about forcing recovery.
      190328 23:31:08 [ERROR] mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
      To report this bug, see https://mariadb.com/kb/en/reporting-bugs
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed, 
      something is definitely wrong and this may fail.
      Server version: 10.2.23-MariaDB-10.2.23+maria~stretch
      It is possible that mysqld could use up to 
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1783435 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      Thread pointer: 0x7ff4d8001f28
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7ff4dc62ccc8 thread_stack 0x49000
      *** buffer overflow detected ***: /usr/sbin/mysqld terminated
      ======= Backtrace: =========
      ======= Memory map: ========
      55ca416a9000-55ca4278f000 r-xp 00000000 fe:02 1320586                    /usr/sbin/mysqld
      55ca4298e000-55ca42a60000 r--p 010e5000 fe:02 1320586                    /usr/sbin/mysqld
      55ca42a60000-55ca42b17000 rw-p 011b7000 fe:02 1320586                    /usr/sbin/mysqld
      55ca42b17000-55ca433a9000 rw-p 00000000 00:00 0 
      55ca43458000-55ca508b9000 rw-p 00000000 00:00 0                          [heap]
      7ff3d9e04000-7ff3d9e05000 ---p 00000000 00:00 0 
      7ff3d9e05000-7ff3d9e4f000 rw-p 00000000 00:00 0 
      7ff3d9e4f000-7ff3d9e50000 ---p 00000000 00:00 0 
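The log above refers to the InnoDB forced-recovery modes. If tablespace corruption is suspected, the server can be started in a degraded read-mostly mode to salvage data with a dump. A generic sketch of that setting, not a fix confirmed for this particular bug (the file path is an assumption; it varies by distribution):

```ini
# e.g. /etc/mysql/my.cnf — temporary setting for salvaging data only
[mysqld]
# Valid values are 0 (default) to 6. Start at 1 and raise cautiously;
# levels 4 and above can permanently corrupt data. Remove the setting
# after dumping and restoring.
innodb_force_recovery = 1
```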

      1. The DB node that crashed first tonight (db1b) was set up yesterday (running 10.2.23).
      2. The second server (db1c) crashed 70 seconds later, after becoming the primary DB node and while db1b was reconnecting/resyncing (SST). db1c was last updated three weeks ago, from 10.2.14 to 10.2.22.
      3. The third DB node (db1a) had not been running since the weekend and was therefore not involved in tonight's crash.

      It seems that the processes keep running in some form, but the service status (systemctl status mariadb.service) reports a failed state, nobody can connect to or use the DB services, and the process has to be killed manually.
      After that I was able to rebuild the cluster by bootstrapping a new Galera Cluster (galera_new_cluster) from the most advanced node.
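The rebuild step described above, bootstrapping with galera_new_cluster from the most advanced node, hinges on comparing each node's seqno in grastate.dat (normally found in the datadir, e.g. /var/lib/mysql/grastate.dat). A minimal sketch using made-up files and seqno values for the three nodes mentioned in this report:

```shell
#!/bin/sh
# Simulate grastate.dat files from three Galera nodes (contents and
# seqno values are hypothetical, for illustration only).
mkdir -p /tmp/grastate-demo
printf 'version: 2.1\nuuid: aaaa\nseqno: 4505\nsafe_to_bootstrap: 0\n' > /tmp/grastate-demo/db1a
printf 'version: 2.1\nuuid: aaaa\nseqno: 4512\nsafe_to_bootstrap: 0\n' > /tmp/grastate-demo/db1b
printf 'version: 2.1\nuuid: aaaa\nseqno: 4510\nsafe_to_bootstrap: 0\n' > /tmp/grastate-demo/db1c

# Pick the node with the highest seqno; that is the one to bootstrap
# with galera_new_cluster. A seqno of -1 means an unclean shutdown,
# and the real position must first be recovered (mysqld --wsrep-recover).
best=$(grep -H 'seqno:' /tmp/grastate-demo/db1* | sort -t: -k3 -n | tail -1 | cut -d: -f1)
echo "bootstrap from: ${best##*/}"
```

On the chosen node one would then run galera_new_cluster and start MariaDB normally on the others so they rejoin via IST/SST.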



              Assignee: marko Marko Mäkelä
              Reporter: stefman_87 Stefan B.


