MariaDB Server / MDEV-13325

InnoDB assert dict_sys->size > 0 during ALTER TABLE




      During repeated runs of a migration script in a single connection on an otherwise idle server, InnoDB crashes with an assertion failure:

      2017-07-14 10:17:09 0x700004690000  InnoDB: Assertion failure in file /tmp/mariadb-20170712-4418-z03ns4/mariadb-10.2.7/storage/innobase/dict/dict0dict.cc line 1760
      InnoDB: Failing assertion: dict_sys->size > 0
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
      InnoDB: about forcing recovery.
      170714 10:17:09 [ERROR] mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
      To report this bug, see https://mariadb.com/kb/en/reporting-bugs
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed,
      something is definitely wrong and this may fail.
      Server version: 10.2.7-MariaDB
      It is possible that mysqld could use up to
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467207 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      Thread pointer: 0x7fe90f10c408
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      0   mysqld                              0x000000010e450c33 _Z11mysql_parseP3THDPcjP12Parser_statebb + 649
      0   mysqld                              0x000000010e44e862 _Z16dispatch_command19enum_server_commandP3THDPcjbb + 5485
      0   mysqld                              0x000000010e44fd64 _Z10do_commandP3THD + 892
      0   mysqld                              0x000000010e51f6de _Z24do_handle_one_connectionP7CONNECT + 547
      0   mysqld                              0x000000010e51f4ae handle_one_connection + 56
      0   libsystem_pthread.dylib             0x00007fffbd98393b _pthread_body + 180
      0   libsystem_pthread.dylib             0x00007fffbd983887 _pthread_body + 0
      0   libsystem_pthread.dylib             0x00007fffbd98308d thread_start + 13
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort.
      Query (0x7fe910064e20): is an invalid pointer
      Connection ID (thread ID): 35
      Status: NOT_KILLED
      Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.
      We think the query pointer is invalid, but we will try to print it anyway.
      Query: ALTER TABLE `instances` ADD COLUMN `resurrection_paused` tinyint(1)

      This was brought to my attention due to a similar crash on a MySQL 5.7.17 instance on RDS. AWS support pointed to this MySQL bug:


      That bug was apparently not reproducible and was hand-waved away as hardware trouble.

      The same crash also occurs on Percona Server 5.7. It has been encountered in multiple environments; the sample above is from my local workstation.

      To reproduce, I use the attached SQL script and run:

      set -e
      while true; do
        mysql < migration_crasher.sql
      done

      This fails after approximately 15 seconds on my local environment.

      While the loop was running, I concurrently monitored data dictionary memory and saw strange output:

      while true;do mysql -sse 'SHOW ENGINE INNODB STATUS\G' | egrep '^Dictionary memory allocated';sleep 0.1;done
      Dictionary memory allocated 14342
      Dictionary memory allocated 9811
      Dictionary memory allocated 211
      Dictionary memory allocated 20302
      Dictionary memory allocated 18446744073709551123
      Dictionary memory allocated 3955
      Dictionary memory allocated 3334
      Dictionary memory allocated 9997
      Dictionary memory allocated 18446744073709540819
      ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (61)
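      The huge values above look like small negative numbers stored in an unsigned 64-bit counter, which would be consistent with dict_sys->size being decremented below zero before the dict_sys->size > 0 assertion fires. A quick standalone sanity check (a sketch, not MariaDB code):

```python
# Reinterpret the two implausible "Dictionary memory allocated" values
# as two's-complement signed 64-bit integers. If they wrapped around
# from a small negative number, the signed view has a small magnitude.
def as_signed64(raw):
    """Reinterpret an unsigned 64-bit value as signed two's complement."""
    return raw - 2**64 if raw >= 2**63 else raw

for raw in (18446744073709551123, 18446744073709540819):
    print(raw, "->", as_signed64(raw))
# 18446744073709551123 -> -493
# 18446744073709540819 -> -10797
```

      Both garbage values correspond to small negative byte counts, i.e. the dictionary size accounting underflowed rather than the allocator returning nonsense.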

      I was not able to reproduce this under MariaDB 10.1. Additionally, I could not reproduce the problem after enabling old_alter_table.


      Assignee: Jan Lindström (jplindst)
      Reporter: Andrew Garner (andrew.garner)


