MariaDB Server
MDEV-21054

Crash on shutdown due to btr_search_latches=NULL


Details

    Description

      I got this crash only once, on my local system, and was unable to repeat it:

      The build was at commit ad376d653cb47756c6a329009e4b9fa7070fba46, with a merge from 10.5 (ae90f8431ba80383f75810248ddaaa9d2c6fd09f).

      ***Warnings generated in error logs during shutdown after running tests: encryption.innochecksum
       
      191114 14:53:01 [ERROR] mysqld got signal 11 ;
      Attempting backtrace. You can use the following information to find out
      Version: '10.5.0-MariaDB-debug-log'  socket: '/dev/shm/10.5/mysql-test/var/tmp/2/mysqld.1.sock'  port: 16020  Source distribution
      2019-11-14 14:53:00 0 [Note] /dev/shm/10.5/sql/mysqld (initiated by: root[root] @ localhost [127.0.0.1]): Normal shutdown
      2019-11-14 14:53:00 0 [Note] Event Scheduler: Purging the queue. 0 events
      2019-11-14 14:53:01 0 [Note] InnoDB: FTS optimize thread exiting.
      2019-11-14 14:53:01 0 [Note] InnoDB: Starting shutdown...
      2019-11-14 14:53:01 0 [Note] InnoDB: Dumping buffer pool(s) to /dev/shm/10.5/mysql-test/var/2/mysqld.1/data/ib_buffer_pool
      2019-11-14 14:53:01 0 [Note] InnoDB: Instance 0, restricted to 237 pages due to innodb_buf_pool_dump_pct=25
      2019-11-14 14:53:01 0 [Note] InnoDB: Buffer pool(s) dump completed at 191114 14:53:01
      191114 14:53:01 [ERROR] mysqld got signal 11 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
       
      To report this bug, see https://mariadb.com/kb/en/reporting-bugs
       
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed, 
      something is definitely wrong and this may fail.
       
      Server version: 10.5.0-MariaDB-debug-log
      key_buffer_size=1048576
      read_buffer_size=131072
      max_used_connections=1
      max_threads=153
      thread_count=0
      It is possible that mysqld could use up to 
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 63648 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
       
      Thread pointer: 0x0
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x0 thread_stack 0x49000
      /dev/shm/10.5/sql/mysqld(my_print_stacktrace+0x29)[0x5611bb4c97e9]
      /dev/shm/10.5/sql/mysqld(handle_fatal_signal+0x2ae)[0x5611bad4815e]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x13510)[0x7fa143cb7510]
      /dev/shm/10.5/sql/mysqld(+0xf38fb7)[0x5611bb30afb7]
      /dev/shm/10.5/sql/mysqld(+0xf84b75)[0x5611bb356b75]
      /dev/shm/10.5/sql/mysqld(+0xf5d88c)[0x5611bb32f88c]
      /dev/shm/10.5/sql/mysqld(+0xffa234)[0x5611bb3cc234]
      /dev/shm/10.5/sql/mysqld(+0xd67219)[0x5611bb139219]
      /dev/shm/10.5/sql/mysqld(_ZN5tpool10task_group7executeEPNS_4taskE+0xdd)[0x5611bb456a2d]
      /dev/shm/10.5/sql/mysqld(_ZN5tpool19thread_pool_generic11worker_mainEPNS_11worker_dataE+0x46)[0x5611bb454916]
      

      There was no core dump. I resolved the stack trace manually:

      0xf38fb7 is in btr_search_drop_page_hash_index(buf_block_t*) (/mariadb/10.5/storage/innobase/include/btr0sea.ic:141).
      0xf84b75 is in buf_LRU_free_page(buf_page_t*, bool) (/mariadb/10.5/storage/innobase/buf/buf0lru.cc:1644).
      0xf5d88c is in buf_page_io_complete(buf_page_t*, bool, bool) (/mariadb/10.5/storage/innobase/buf/buf0buf.cc:6147).
      0xffa234 is in fil_aio_callback(tpool::aiocb const*) (/mariadb/10.5/storage/innobase/fil/fil0fil.cc:4382).
      0xd67219 is in io_callback(tpool::aiocb*) (/mariadb/10.5/storage/innobase/os/os0file.cc:3908).
      /dev/shm/10.5/sql/mysqld(tpool::task_group::execute(tpool::task*)+0xdd)[0x5611bb456a2d]
      /dev/shm/10.5/sql/mysqld(tpool::thread_pool_generic::worker_main(tpool::worker_data*)+0x46)[0x5611bb454916]
      

      The crash apparently occurs because btr_search_latches is NULL. (Note that ever since MDEV-20487 the adaptive hash index has been disabled by default, including in this test, but that does not seem to prevent this code from executing.)

         0x0000000000f38f9c <+92>:	cmpq   $0x0,0x140bbb4(%rip)        # 0x2344b58 <btr_ahi_parts>
         0x0000000000f38fa4 <+100>:	je     0xf39019 <btr_search_drop_page_hash_index(buf_block_t*)+217>
         0x0000000000f38fa6 <+102>:	xor    %ebx,%ebx
         0x0000000000f38fa8 <+104>:	nopl   0x0(%rax,%rax,1)
         0x0000000000f38fb0 <+112>:	mov    0x140bbe9(%rip),%rax        # 0x2344ba0 <btr_search_latches>
         0x0000000000f38fb7 <+119>:	mov    (%rax,%rbx,8),%rdi
         0x0000000000f38fbb <+123>:	mov    $0x4,%esi
         0x0000000000f38fc0 <+128>:	callq  0xe8aab0 <rw_lock_own(rw_lock_t*, unsigned long)>
      

      The machine code is for part of the following C++ code:

      static inline bool btr_search_own_any(ulint mode)
      {
      	for (ulint i = 0; i < btr_ahi_parts; ++i) {
      		if (rw_lock_own(btr_search_latches[i], mode)) {
      			return(true);
      		}
      	}
      	return(false);
      }
      

      I wonder whether it would help to remove one level of pointer indirection from btr_search_latches, and to statically allocate the array of pointers at the maximum number of elements.


People

  Assignee: Marko Mäkelä
  Reporter: Marko Mäkelä
