MariaDB Server / MDEV-25136

Mariabackup crash with InnoDB: Assertion failure in file /.../extra/mariabackup/xtrabackup.cc line 875

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Won't Fix
    • Affects Version/s: 10.2.32, 10.3.24, 10.4.14, 10.5.5
    • Fix Version/s: N/A
    • Component/s: Backup, Galera
    • Labels: None
    • Environment: RHEL 7.3

    Description

      [00] 2020-09-04 08:01:16 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
      [00] 2020-09-04 08:01:16 The latest check point (for incremental): '5888524992656'
      mariabackup: Stopping log copying thread.[00] 2020-09-04 08:01:16 DDL tracking : rename 456015 "./reporting/mv_pl_opportunities.ibd","./reporting/#sql-ib456988.ibd"
      2020-09-04 08:01:16 0x7f062f7b0700  InnoDB: Assertion failure in file /home/jenkins/workspace/MariaDBE-Custom-RPM/label/rhel-7/padding_for_CPACK_RPM_BUILD_SOURCE_DIRS_PREFIX_ON_ES_BACKUP_DEBUGSOURCE/extra/mariabackup/xtrabackup.cc line 875
      InnoDB: Failing assertion: opt_no_lock
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: https://mariadb.com/kb/en/library/innodb-recovery-modes/
      InnoDB: about forcing recovery.
      200904  8:01:16 [ERROR] mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
       
      To report this bug, see https://mariadb.com/kb/en/reporting-bugs
       
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed, 
      something is definitely wrong and this may fail.
       
      Server version: 10.2.32-7-MariaDB-enterprise
      key_buffer_size=0
      read_buffer_size=131072
      max_used_connections=0
      max_threads=1
      thread_count=0
      It is possible that mysqld could use up to 
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 5432 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
       
      Thread pointer: 0x0
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x0 thread_stack 0x49000
      addr2line: 'mariabackup': No such file
      mariabackup(my_print_stacktrace+0x2e)[0x55e7a193412e]
      mariabackup(handle_fatal_signal+0x30d)[0x55e7a143504d]
      sigaction.c:0(__restore_rt)[0x7f0635549630]
      :0(__GI_raise)[0x7f063381b377]
      :0(__GI_abort)[0x7f063381ca68]
      addr2line: 'mariabackup': No such file
      mariabackup(+0x458ba4)[0x55e7a1131ba4]
      mariabackup(+0x47a1f2)[0x55e7a11531f2]
      mariabackup(+0xa2ac88)[0x55e7a1703c88]
      mariabackup(+0xa2f5cf)[0x55e7a17085cf]
      mariabackup(+0xa2f98c)[0x55e7a170898c]
      mariabackup(+0x478725)[0x55e7a1151725]
      mariabackup(+0x4789fe)[0x55e7a11519fe]
      pthread_create.c:0(start_thread)[0x7f0635541ea5]
      /lib64/libc.so.6(clone+0x6d)[0x7f06338e38cd]
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.
      Writing a core file...
      Working directory at /var/lib/mysql
      Resource Limits:
      Limit                     Soft Limit           Hard Limit           Units     
      Max cpu time              unlimited            unlimited            seconds   
      Max file size             unlimited            unlimited            bytes     
      Max data size             unlimited            unlimited            bytes     
      Max stack size            8388608              unlimited            bytes     
      Max core file size        0                    unlimited            bytes     
      Max resident set          unlimited            unlimited            bytes     
      Max processes             63450                63450                processes 
      Max open files            262144               262144               files     
      Max locked memory         65536                65536                bytes     
      Max address space         unlimited            unlimited            bytes     
      Max file locks            unlimited            unlimited            locks     
      Max pending signals       63450                63450                signals   
      Max msgqueue size         819200               819200               bytes     
      Max nice priority         0                    0                    
      Max realtime priority     0                    0                    
      Max realtime timeout      unlimited            unlimited            us        
      Core pattern: core
      

      Attachments

        1. backup.log
          71 kB
        2. populate.sh
          0.5 kB


          Activity

            According to the following code in 10.4/10.5:

            bool                                                                            
            MDL_lock::can_grant_lock(enum_mdl_type type_arg,                                
                                     MDL_context *requestor_ctx,                            
                                     bool ignore_lock_priority) const                       
            {                                                                               
            ...                                                                                
            #ifdef WITH_WSREP                                                               
              /*                                                                            
                Approve lock request in BACKUP namespace for BF threads.                    
                We should get rid of this code and forbid FTWRL/BACKUP statements           
                when wsrep is active.                                                       
              */                                                                            
              if ((wsrep_thd_is_toi(requestor_ctx->get_thd()) ||                            
                   wsrep_thd_is_applying(requestor_ctx->get_thd())) &&                      
                  key.mdl_namespace() == MDL_key::BACKUP)                                   
              {                                                                             
                bool waiting_incompatible= m_waiting.bitmap() & waiting_incompat_map;       
                bool granted_incompatible= m_granted.bitmap() & granted_incompat_map;       
                if (waiting_incompatible || granted_incompatible)                           
                {                                                                           
                  WSREP_DEBUG("global lock granted for BF%s: %lu %s",                       
                              waiting_incompatible ? " (waiting queue)" : "",               
                              thd_get_thread_id(requestor_ctx->get_thd()),                  
                              wsrep_thd_query(requestor_ctx->get_thd()));                   
                }                                                                           
                return true;                                                                
              }                                                                             
            #endif /* WITH_WSREP */
            ...
            }                 
            

            For a WSREP applier thread, the global (BACKUP) lock will be granted even if an incompatible lock type has already been granted to some other thread. This means that even when the BLOCK_COMMIT lock is held, DDL is not blocked in the WSREP applier thread.

            And, according to the comment in the code, this was done intentionally.

            vlad.lesin Vladislav Lesin added a comment
            vlad.lesin Vladislav Lesin added a comment - edited

            Here is a simplified test that can be used for debugging and to make sure the TRUNCATE on node 2 is not blocked even though BLOCK_COMMIT is held.

            --source include/galera_cluster.inc                                             
            --source include/have_innodb.inc                                                
                                                                                            
            --connection node_1                                                             
            CREATE TABLE t1 ( a int(11) DEFAULT NULL) ENGINE=InnoDB;                        
            INSERT INTO t1 (a) values (1),(2),(3),(4),(5);                                  
                                                                                            
            --connection node_2                                                             
            BACKUP STAGE START;                                                             
            BACKUP STAGE BLOCK_COMMIT;                                                      
                                                                                            
            --connection node_1                                                             
            TRUNCATE t1;                                                                    
                                                                                            
            --connection node_2                                                             
            BACKUP STAGE END;                                                               
                                                                                            
            --connection node_1                                                             
                                                                                            
            DROP TABLE t1;                                                                  
            --source include/galera_end.inc                                                 
            
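            To run this under mtr, the test file has to go into the galera suite of a source tree. A sketch, assuming a hypothetical test name (galera_mdev25136) and that the matching .result file is recorded on the first run:

```shell
# Hypothetical test name; place the file into the galera suite.
TEST_NAME=galera_mdev25136
mkdir -p mysql-test/suite/galera/t
cat > "mysql-test/suite/galera/t/$TEST_NAME.test" <<'EOF'
--source include/galera_cluster.inc
--source include/have_innodb.inc

--connection node_1
CREATE TABLE t1 (a int(11) DEFAULT NULL) ENGINE=InnoDB;
INSERT INTO t1 (a) VALUES (1),(2),(3),(4),(5);

--connection node_2
BACKUP STAGE START;
BACKUP STAGE BLOCK_COMMIT;

--connection node_1
TRUNCATE t1;

--connection node_2
BACKUP STAGE END;

--connection node_1
DROP TABLE t1;
--source include/galera_end.inc
EOF
echo "wrote mysql-test/suite/galera/t/$TEST_NAME.test"
# Then, from the mysql-test directory of a build with Galera support:
#   ./mtr --suite=galera --record galera_mdev25136   # first run records .result
#   ./mtr --suite=galera galera_mdev25136
```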


            Useful comment from sysprg:
            "Now I see that the logic for handling MDL locks in the Galera (wsrep) case has been changed so that backup locks will be granted if the request comes from applier threads or runs in TOI mode (usually DDL), while without wsrep we get different behavior here. I realize this was done to eliminate problems with applying operations during replication, to prevent conflicts when transactions are executed under the control of Galera. However, in this case this special logic for Galera leads to a failure. I have not yet figured out what kind of change in the lock handling logic is actually needed here. As an experiment, I tried to remove the special handling for Galera at this point, to avoid a difference from the MDL lock processing logic of standard replication, but I got failures in mtr tests from the galera suite. That means the special handling for wsrep at this point is not outdated and really is necessary in some cases; otherwise we get conflicts in transaction processing and test failures. I'm trying to figure out (from the code) what needs to be changed here; for that I need to understand the MDL logic and all the wsrep-specific insertions regarding MDL locks."

            vlad.lesin Vladislav Lesin added a comment
            vlad.lesin Vladislav Lesin added a comment - - edited

            I am assigning the issue to sysprg, as there is nothing to do from the mariabackup side at the moment (see my comments above with the code that causes the issue and a simplified test case to reproduce it). The workaround for the customer at the moment is to avoid DDL during backup of a Galera node.

            See also this thread in slack: https://mariadb.slack.com/archives/C03ER9QC3/p1607381382335100.


            Perhaps a duplicate of MDEV-23080; let's re-test after MDEV-23080 is fixed.

            serg Sergei Golubchik added a comment

            People

              Assignee: jplindst Jan Lindström (Inactive)
              Reporter: Coker Todd Coker
              Votes: 5
              Watchers: 10

