  MariaDB Server / MDEV-20218

galera: crash on restart after failed state transfer


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Incomplete
    • Affects Version/s: 10.4.6
    • Fix Version/s: N/A
    • Component/s: Galera, Galera SST
    • Environment: MariaDB image 10.4.6-bionic from Docker Hub in a 3-node Galera cluster on Debian Buster

    Description

      When a Docker container restarts and tries to re-join the cluster, the state transfer fails and MariaDB crashes with signal 11:

      2019-07-30 8:17:52 0 [ERROR] WSREP: Process completed with error: wsrep_sst_mariabackup --role 'joiner' --address 'dbcluster03.je-server.local:5454' --datadir '/var/lib/mysql/' --parent '1' '' '': 32 (Broken pipe)
      2019-07-30 8:17:52 0 [ERROR] WSREP: Failed to read uuid:seqno and wsrep_gtid_domain_id from joiner script.
      2019-07-30 8:17:52 4 [Note] WSREP: SST received
      2019-07-30 8:17:52 4 [Note] WSREP: SST received: 00000000-0000-0000-0000-000000000000:-1
      2019-07-30 8:17:52 3 [ERROR] WSREP: Application received wrong state:
      Received: 00000000-0000-0000-0000-000000000000
      Required: 59d93b0e-b04c-11e9-89e1-37592b577569
      2019-07-30 8:17:52 3 [ERROR] WSREP: Application state transfer failed. This is unrecoverable condition, restart required.
      [...]
      2019-07-30 8:17:52 3 [Note] WSREP: mysqld: Terminated.
      190730 8:17:52 [ERROR] mysqld got signal 11 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.

      To report this bug, see https://mariadb.com/kb/en/reporting-bugs

      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed,
      something is definitely wrong and this may fail.

      Server version: 10.4.6-MariaDB-1:10.4.6+maria~bionic
      key_buffer_size=0
      read_buffer_size=67108864
      max_used_connections=0
      max_threads=1502
      thread_count=3
      It is possible that mysqld could use up to
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 196907000 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.

      Thread pointer: 0x7f7f34000c08
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7f7fcc07f9f8 thread_stack 0x49000
      mysqld(my_print_stacktrace+0x2e)[0x56323c277e0e]
      mysqld(handle_fatal_signal+0x515)[0x56323bcf0f95]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7f7fcf101890]
      /lib/x86_64-linux-gnu/libc.so.6(abort+0x230)[0x7f7fce4138f0]
      /usr/lib/galera/libgalera_smm.so(+0x71233)[0x7f7fccc06233]
      /usr/lib/galera/libgalera_smm.so(_ZN6galera13ReplicatorSMM5abortEv+0x91)[0x7f7fccdcbdf1]
      /usr/lib/galera/libgalera_smm.so(_ZN6galera13ReplicatorSMM22request_state_transferEPvRK10wsrep_uuidlPKvl+0x366)[0x7f7fccde9206]
      /usr/lib/galera/libgalera_smm.so(_ZN6galera13ReplicatorSMM19process_conf_changeEPvRK10gcs_action+0xcaf)[0x7f7fccdd705f]
      /usr/lib/galera/libgalera_smm.so(_ZN6galera15GcsActionSource8dispatchEPvRK10gcs_actionRb+0x118)[0x7f7fccda95f8]
      /usr/lib/galera/libgalera_smm.so(_ZN6galera15GcsActionSource7processEPvRb+0xb8)[0x7f7fccda9858]
      /usr/lib/galera/libgalera_smm.so(_ZN6galera13ReplicatorSMM10async_recvEPv+0x120)[0x7f7fccdd1800]
      /usr/lib/galera/libgalera_smm.so(galera_recv+0x2b)[0x7f7fccdef4db]
      mysqld(_ZN5wsrep18wsrep_provider_v2611run_applierEPNS_21high_priority_serviceE+0xe)[0x56323c2fce8e]
      mysqld(+0x7cc422)[0x56323bc68422]
      mysqld(_Z15start_wsrep_THDPv+0x33c)[0x56323bc56c2c]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x76db)[0x7f7fcf0f66db]
      /lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7f7fce4f488f]

      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort.
      Query (0x0): is an invalid pointer
      Connection ID (thread ID): 3
      Status: NOT_KILLED

      Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on

      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.

      We think the query pointer is invalid, but we will try to print it anyway.
      Query:

      Writing a core file...
      Working directory at /var/lib/mysql
      Resource Limits:
      Limit Soft Limit Hard Limit Units
      Max cpu time unlimited unlimited seconds
      Max file size unlimited unlimited bytes
      Max data size unlimited unlimited bytes
      Max stack size 8388608 unlimited bytes
      Max core file size 0 0 bytes
      Max resident set unlimited unlimited bytes
      Max processes unlimited unlimited processes
      Max open files 1048576 1048576 files
      Max locked memory 65536 65536 bytes
      Max address space unlimited unlimited bytes
      Max file locks unlimited unlimited locks
      Max pending signals 1028962 1028962 signals
      Max msgqueue size 819200 819200 bytes
      Max nice priority 0 0
      Max realtime priority 0 0
      Max realtime timeout unlimited unlimited us
      Core pattern: core
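
      For reference, the cluster state the joiner had stored locally can be compared with the "Required" UUID from the error above. A rough check (container names, datadir and credentials handling are assumptions based on this report):

      # Galera state the joiner saved locally before the restart
      # (datadir as shown in the wsrep_sst_mariabackup command above)
      docker exec dbcluster03 cat /var/lib/mysql/grastate.dat
      # Cluster state UUID as seen by the donor, for comparison with the "Required" value
      # (assumes MySQL client credentials are available inside the container, e.g. via /root/.my.cnf)
      docker exec dbcluster01 mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_state_uuid';"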

      This crash (or the wsrep_sst_mariabackup failure) also leaves the donor stuck in the DONOR/DESYNC state, filling up gcache files and causing serious problems in our production cluster, since 2 of the 3 nodes are then non-functional.
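
      On the donor side, a sketch of how the stuck DONOR/DESYNC state can be checked and, once the failed transfer is definitely dead, cleared manually (container name from this report; assumes client credentials are available inside the container):

      # Check whether the donor is still stuck in Donor/Desynced after the failed SST
      docker exec dbcluster01 mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
      # If the node does not return to Synced on its own, toggling wsrep_desync is one
      # thing that can be tried (only once no SST helper process is running any more)
      docker exec dbcluster01 mysql -e "SET GLOBAL wsrep_desync = ON; SET GLOBAL wsrep_desync = OFF;"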

      adding "docker logs" of dbcluster03 (the joiner) and dbcluster01 (the donor, log ends in "access denied" due to SST failure)

      Please let me know if you need more information.

            People

              Assignee: Jan Lindström (janlindstrom)
              Reporter: Matthias Merz (mmerz)
              Votes: 0
              Watchers: 3
