MariaDB Server / MDEV-12637

main.range_vs_index_merge_innodb fails in buildbot with timeout

Details

    • 10.1.32

    Description

      http://buildbot.askmonty.org/buildbot/builders/work-amd64-valgrind/builds/10024/steps/test/logs/stdio

      main.range_vs_index_merge_innodb 'xtradb' w4 [ fail ]  timeout after 9000 seconds
              Test ended at 2017-04-28 01:02:38
       
      Test case timeout after 9000 seconds
       
      == /mnt/data/buildot/maria-slave/work-opensuse-amd64/build/mysql-test/var/4/log/range_vs_index_merge_innodb.log == 
      create table t1 (
      id int unsigned auto_increment primary key,
      c1 char(12),
      c2 char(15),
      c3 char(1)
      );
      insert into t1 (c3) values ('1'), ('2');
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      insert into t1 (c3) select c3 from t1;
      update t1 set c1=lpad(id+1000, 12, ' '), c2=lpad(id+10000, 15, ' ');
       
      The server [mysqld.1 - pid: 5073, winpid: 5073, exit: 256] crashed while running 'analyze-timeout'
      Server log from this test:
      ----------SERVER LOG START-----------
      2017-04-27 22:32:00 68532992 [Note] InnoDB: Created tablespace for space 416 name world/Country key_id 0 encryption 0.
      2017-04-27 22:32:00 68532992 [Note] InnoDB: Created tablespace for space 417 name world/City key_id 0 encryption 0.
      2017-04-27 22:32:01 68532992 [Note] InnoDB: Created tablespace for space 418 name world/CountryLanguage key_id 0 encryption 0.
      2017-04-27 22:53:02 68532992 [Note] InnoDB: Created tablespace for space 419 name test/t1 key_id 0 encryption 0.
      2017-04-28  0:14:41 68532992 [Note] InnoDB: Created tablespace for space 420 name test/t1 key_id 0 encryption 0.
      2017-04-28  0:33:10 68532992 [Note] InnoDB: Created tablespace for space 421 name test/t1 key_id 0 encryption 0.
      2017-04-28  0:46:39 68532992 [Note] InnoDB: Created tablespace for space 422 name test/t1 key_id 0 encryption 0.
      170428  1:01:57 [ERROR] mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
       
      To report this bug, see https://mariadb.com/kb/en/reporting-bugs
       
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed, 
      something is definitely wrong and this may fail.
       
      Server version: 10.1.23-MariaDB-debug
      key_buffer_size=1048576
      read_buffer_size=131072
      max_used_connections=1
      max_threads=153
      thread_count=1
      It is possible that mysqld could use up to 
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 63027 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
       
      Thread pointer: 0x0
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x0 thread_stack 0x48400
      /mnt/data/buildot/maria-slave/work-opensuse-amd64/build/sql/mysqld(my_print_stacktrace+0x38)[0xf20d95]
      mysys/stacktrace.c:267(my_print_stacktrace)[0x868233]
      /lib64/libpthread.so.0(+0xf9f0)[0x4e439f0]
      /lib64/libc.so.6(__poll+0x2d)[0x6b2006d]
      sql/mysqld.cc:6484(handle_connections_sockets())[0x552e78]
      sql/mysqld.cc:5914(mysqld_main(int, char**))[0x551e44]
      sql/main.cc:26(main)[0x546d58]
      /lib64/libc.so.6(__libc_start_main+0xf5)[0x6a63be5]
      x86_64/start.S:125(_start)[0x546c49]
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
      information that should help you find out what is causing the crash.
      Writing a core file
      ----------SERVER LOG END-------------
      

          Activity

            marko Marko Mäkelä added a comment:

            I cannot repeat the hang, and there is no stack trace. The failure was reported for this commit, which claimed to fix MDEV-12253. I remember this commit and the development of MariaDB 10.1.23 very well. It was part of a much-needed series of fixes for race conditions and hangs in MariaDB 10.1. This commit appeared in MariaDB 10.1.23. I seem to remember that it caused some regressions, the last of which was fixed in MDEV-12602, which was included in the MariaDB 10.1.23 release.

            Based on the client log excerpt, the hang occurred during the execution of the following statement:

            alter table t1 add unique index (c1), add unique index (c2), add index (c3);
            

            Based on the server log excerpt, the server had completed four CREATE TABLE t1 statements, which agrees with the test case contents. The ALTER TABLE statement would be executed in-place (adding secondary indexes), because there is already a PRIMARY KEY defined on the table.
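
            As a side note (not something the test itself does), one way to confirm that such an ALTER takes the in-place path is to request it explicitly: with ALGORITHM=INPLACE the server returns an error instead of silently falling back to a table-copying rebuild. A minimal sketch:

            -- Illustrative only; the original test does not specify an ALGORITHM clause.
            alter table t1 add unique index (c1), add unique index (c2), add index (c3),
              algorithm=inplace;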

            I do not remember the details very clearly after such a long time, but I am sure that encryption used to cause hangs, especially during DDL operations. I also remember that MDEV-12602 fixed an issue that was caused by some earlier commit during the 10.1.23 development. That issue would affect all data files, not only encrypted ones.

            What exactly caused this hang? It is difficult to say. But as far as I remember, MDEV-12602 was the last major fix for hangs caused by the encryption changes in MariaDB 10.1. On innodb_plugin the test took quite a bit of time, but nowhere near the 9,000,000 ms limit:

            main.range_vs_index_merge_innodb 'innodb_plugin' w1 [ pass ]  1304214
            

            Before the failure, no test appears to have used encryption since the latest restart of the instance:

            main.innodb_mysql_lock 'xtradb'          w4 [ pass ]  16683
            main.join_outer_innodb 'xtradb'          w4 [ pass ]  32136
            main.loaddata_autocom_innodb 'xtradb'    w4 [ pass ]   1968
            main.log_tables_upgrade 'xtradb'         w4 [ pass ]   8970
            main.lowercase_table2 'xtradb'           w4 [ skipped ]  Test requires: 'lowercase2'
            main.merge_innodb 'xtradb'               w4 [ pass ]   2357
            main.mrr_derived_crash_4610 'xtradb'     w4 [ pass ]   3190
            main.multi_update_innodb 'xtradb'        w4 [ pass ]   3129
            main.mysql_upgrade 'xtradb'              w4 [ pass ]  76296
            main.mysqlcheck 'xtradb'                 w4 [ fail ]  Found warnings/errors in server log file!
            main.order_by_optimizer_innodb 'xtradb'  w4 [ pass ]  38832
            main.parser_bug21114_innodb 'xtradb'     w4 [ pass ]  439186
            main.plugin_innodb 'xtradb'              w4 [ pass ]  34063
            main.range_innodb 'xtradb'               w4 [ pass ]  606284
            main.range_mrr_icp 'xtradb'              w4 [ pass ]  985291
            main.range_vs_index_merge_innodb 'xtradb' w4 [ fail ]  timeout after 9000 seconds
            


            marko Marko Mäkelä added a comment:

            I started tests locally.

            ./mtr --mem --valgrind --repeat=10 --parallel=auto main.range_vs_index_merge_innodb{,,,}
            

            10.0 1bec0c45954689580e27e826dee7d1afefa82471

            worker[1] Using MTR_BUILD_THREAD 300, with reserved ports 16000..16019
            worker[2] Using MTR_BUILD_THREAD 301, with reserved ports 16020..16039
            worker[4] Using MTR_BUILD_THREAD 302, with reserved ports 16040..16059
            worker[3] Using MTR_BUILD_THREAD 303, with reserved ports 16060..16079
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'innodb_plugin' w3 [ pass ]  2377699
            main.range_vs_index_merge_innodb 'innodb_plugin' w2 [ pass ]  2380451
            main.range_vs_index_merge_innodb 'innodb_plugin' w4 [ pass ]  2382399
            main.range_vs_index_merge_innodb 'innodb_plugin' w1 [ pass ]  2388440
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'innodb_plugin' w3 [ pass ]  2220613
            main.range_vs_index_merge_innodb 'innodb_plugin' w4 [ pass ]  2220209
            main.range_vs_index_merge_innodb 'innodb_plugin' w2 [ pass ]  2228767
            main.range_vs_index_merge_innodb 'innodb_plugin' w1 [ pass ]  2219457
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'innodb_plugin' w3 [ pass ]  2235379
            main.range_vs_index_merge_innodb 'innodb_plugin' w1 [ pass ]  2231888
            main.range_vs_index_merge_innodb 'innodb_plugin' w2 [ pass ]  2234251
            main.range_vs_index_merge_innodb 'innodb_plugin' w4 [ pass ]  2243300
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'innodb_plugin' w1 [ pass ]  2195471
            main.range_vs_index_merge_innodb 'innodb_plugin' w3 [ pass ]  2208165
            main.range_vs_index_merge_innodb 'innodb_plugin' w2 [ pass ]  2211816
            main.range_vs_index_merge_innodb 'innodb_plugin' w4 [ pass ]  2227083
            mysql-test-run: *** ERROR: Got ^C signal
            

            At this point, I interrupted the test and restarted with xtradb, because that is where the hang was reported:

            ./mtr --mem --valgrind --repeat=10 --parallel=auto main.range_vs_index_merge_innodb,xtradb{,,,}
            


            marko Marko Mäkelä added a comment:

            I was not able to repeat any hang in this way:

            worker[1] Using MTR_BUILD_THREAD 300, with reserved ports 16000..16019
            worker[3] Using MTR_BUILD_THREAD 301, with reserved ports 16020..16039
            worker[2] Using MTR_BUILD_THREAD 302, with reserved ports 16040..16059
            worker[4] Using MTR_BUILD_THREAD 303, with reserved ports 16060..16079
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2268198
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2272657
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2283271
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2321484
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2379181
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2374475
            worker[2] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2409713
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2409291
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2439826
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2477677
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2473013
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2521295
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2323561
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2314867
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2342171
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2406380
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2599272
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2604337
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2620441
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2575754
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2268703
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2295956
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2280279
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2299700
            worker[3] Test still running: main.range_vs_index_merge_innodb
            worker[4] Test still running: main.range_vs_index_merge_innodb
            worker[2] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            worker[3] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w3 [ pass ]  2411915
            worker[4] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' w4 [ pass ]  2408764
            main.range_vs_index_merge_innodb 'xtradb' w2 [ pass ]  2394262
            main.range_vs_index_merge_innodb 'xtradb' w1 [ pass ]  2390223
            mysql-test-run: *** ERROR: Got ^C signal
            

            I started one final attempt:

            ./mtr --mem --valgrind --no-reorder --force --retry=0 --suite=main innodb_mysql_lock,xtradb join_outer_innodb,xtradb loaddata_autocom_innodb,xtradb log_tables_upgrade,xtradb lowercase_table2,xtradb merge_innodb,xtradb mrr_derived_crash_4610,xtradb multi_update_innodb,xtradb mysql_upgrade,xtradb mysqlcheck,xtradb order_by_optimizer_innodb,xtradb parser_bug21114_innodb,xtradb plugin_innodb,xtradb range_innodb,xtradb range_mrr_icp,xtradb range_vs_index_merge_innodb,xtradb
            

            main.innodb_mysql_lock 'xtradb'          [ pass ]  21006
            main.join_outer_innodb 'xtradb'          [ pass ]  25752
            main.loaddata_autocom_innodb 'xtradb'    [ pass ]   4386
            main.log_tables_upgrade 'xtradb'         [ pass ]   6085
            main.lowercase_table2 'xtradb'           [ skipped ]  Test requires: 'lowercase2'
            main.merge_innodb 'xtradb'               [ pass ]   1822
            main.mrr_derived_crash_4610 'xtradb'     [ pass ]   2449
            main.multi_update_innodb 'xtradb'        [ pass ]   6482
            main.mysql_upgrade 'xtradb'              [ pass ]  74682
            main.mysqlcheck 'xtradb'                 [ pass ]  20717
            main.order_by_optimizer_innodb 'xtradb'  [ pass ]  19643
            main.parser_bug21114_innodb 'xtradb'     [ pass ]  408096
            main.plugin_innodb 'xtradb'              [ pass ]  24973
            main.range_innodb 'xtradb'               [ pass ]  257916
            main.range_mrr_icp 'xtradb'              [ pass ]  786391
            worker[1] Test still running: main.range_vs_index_merge_innodb
            worker[1] Test still running: main.range_vs_index_merge_innodb
            main.range_vs_index_merge_innodb 'xtradb' [ pass ]  3362975
            valgrind_report                          [ pass ]       
            

            I think that this bug may have been fixed in 10.1.23. Looking at the diff from the failing commit to the tag mariadb-10.1.23, MDEV-12602 looks most promising; it fixed a regression that was introduced by MDEV-12428, which in turn fixed a regression that was introduced by MDEV-11738.


            People

              marko Marko Mäkelä
              elenst Elena Stepanova
