MariaDB Server / MDEV-7864

[PATCH] Slave SQL: stopping on non-last RBR event with annotations results in SEGV (signal 11)

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version: 10.0.17
    • Fix Version: 10.0.18
    • Component: Replication
    • Environment: x86_64, RHEL6, 5.5.39 master to 10.0.17 slave on same machine

    Description

      While replicating from a 5.5.39 master to a 10.0.17 slave, the following repeatable segfault resulted.

      150327 22:39:10 [Note] Master '5.5': Slave SQL thread initialized, starting replication in log 'tulwaurt005-mysql-bin.000687' at position 649581827, relay log './tulwaurt005-relay-bin-5@002e5.000004' position: 649582126
      150327 22:39:10 [ERROR] Master '5.5': Slave SQL: Could not execute Delete_rows_v1 event on table X_20150210.feed_lookup; Can't find record in 'feed_lookup', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log tulwaurt005-mysql-bin.000687, end_log_pos 649583202, Internal MariaDB error code: 1032
      150327 22:39:10 [Warning] Master '5.5': Slave: Can't find record in 'feed_lookup' Error_code: 1032
      150327 22:39:10 [ERROR] Master '5.5': Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'tulwaurt005-mysql-bin.000687' position 649581827
      150327 22:39:10 [ERROR] mysqld got signal 11 ;
      This could be because you hit a bug. It is also possible that this binary
      or one of the libraries it was linked against is corrupt, improperly built,
      or misconfigured. This error can also be caused by malfunctioning hardware.
       
      To report this bug, see http://kb.askmonty.org/en/reporting-bugs
       
      We will try our best to scrape up some info that will hopefully help
      diagnose the problem, but since we have already crashed,
      something is definitely wrong and this may fail.
       
      Server version: 10.0.17-MariaDB-log
      key_buffer_size=209715200
      read_buffer_size=262144
      max_used_connections=1
      max_threads=3202
      thread_count=1
      It is possible that mysqld could use up to
      key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 7648478 K  bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
       
      Thread pointer: 0x0x7fc97002d008
      Attempting backtrace. You can use the following information to find out
      where mysqld died. If you see no messages after this, something went
      terribly wrong...
      stack_bottom = 0x7fc9523ef870 thread_stack 0x48000
      /opt/mariadb-10/sbin/mysqld(my_print_stacktrace+0x2b)[0xb743db]
      /opt/mariadb-10/sbin/mysqld(handle_fatal_signal+0x398)[0x726a38]
      /lib64/libpthread.so.0[0x3353a0f500]
      150327 22:39:10 [Note] Master '5.5': Slave I/O thread: connected to master 'repl@localhost:3306',replication started in log 'tulwaurt005-mysql-bin.000702' at position 957219213
      /opt/mariadb-10/sbin/mysqld(_ZN14rpl_group_infoD2Ev+0x18)[0x68e148]
      /opt/mariadb-10/sbin/mysqld(handle_slave_sql+0x21c)[0x56576c]
      /lib64/libpthread.so.0[0x3353a07851]
      /lib64/libc.so.6(clone+0x6d)[0x33536e890d]
       
      Trying to get some variables.
      Some pointers may be invalid and cause the dump to abort.
      Query (0x0): is an invalid pointer
      Connection ID (thread ID): 25

      mysqlbinlog --start-position 649582126  tulwaurt005-relay-bin-5@002e5.000004 | more
      # at 649582126
      #150324  4:45:21 server id 76  end_log_pos 649581906    Query   thread_id=10229406      exec_time=3     error_code=0
      SET TIMESTAMP=1427138121/*!*/;
      SET @@session.pseudo_thread_id=10229406/*!*/;
      SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
      SET @@session.sql_mode=0/*!*/;
      SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
      /*!\C utf8 *//*!*/;
      SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
      SET @@session.lc_time_names=0/*!*/;
      SET @@session.collation_database=DEFAULT/*!*/;
      BEGIN
      /*!*/;
      # at 649582205
      # at 649582454
      # at 649582526
      # at 649583501
      # at 649584476
      # at 649585451
      # at 649586426
      # at 649587401
      # at 649588376
      # at 649589351
      # at 649590326
      # at 649591301
      # at 649592276
      # at 649593251
      # at 649594226
      # at 649595201
      # at 649596176
      # at 649597151
      # at 649598126
      # at 649599101
      # at 649600076
      # at 649601051
      # at 649602026
      # at 649603001
      # at 649603976
      # at 649604951
      # at 649605926
      # at 649606901
      # at 649607876
      # at 649608851
      # at 649609826
      # at 649610801
      # at 649611776
      # at 649612751
      # at 649613726
      # at 649614701
      # at 649615676
      # at 649616651
      # at 649617626
      # at 649618601
      # at 649619576
      # at 649620551
      # at 649621526
      # at 649622501
      # at 649623476
      # at 649624451
      # at 649625426
      # at 649626401
      # at 649627376
      # at 649628351
      # at 649629326
      # at 649630301
      # at 649631276
      # at 649632251
      # at 649633226
      # at 649634201
      # at 649635176
      # at 649636151
      # at 649637126
      # at 649638101
      # at 649639076
      # at 649640051
      # at 649641026
      # at 649642001
      # at 649642976
      # at 649643951
      # at 649644926
      # at 649645901
      # at 649646876
      # at 649647851
      # at 649648826
      # at 649649801
      # at 649650776
      # at 649651751
      # at 649652726
      #150324  4:45:21 server id 76  end_log_pos 649582155    Annotate_rows:
      #Q> /*wp_r4 (MASTER)*/DELETE feed_lookup
      #Q>                     FROM feed_lookup
      #Q>                             LEFT JOIN market_prices ON (feed_lookup.id=market_prices.market_prices_id)
      #Q>                     WHERE feed_lookup.entity_type="market_price"
      #Q>                             AND market_prices.market_prices_id IS 
      #150324  4:45:21 server id 76  end_log_pos 649582227    Table_map: `X_20150210`.`feed_lookup` mapped to number 418
      #150324  4:45:21 server id 76  end_log_pos 649583202    Delete_rows: table id 418
      #150324  4:45:21 server id 76  end_log_pos 649584177    Delete_rows: table id 418
      #150324  4:45:21 server id 76  end_log_pos 649585152    Delete_rows: table id 418
      ..
      #150324  4:45:21 server id 76  end_log_pos 649653144    Delete_rows: table id 418 flags: STMT_END_F
      ### DELETE FROM `X_20150210`.`feed_lookup`
      ### WHERE
      ###   @1=2065190
      ###   @2=245216596
      ###   @3='FcOKws8fYwPEPs55vjI-uWAyyRA'
      ###   @4=-117 (4294967179)
      ###   @5='market_price'
      ###   @6='YYbb8jzsfDm9MU25_bdbSQmbYLA'
      ### DELETE FROM `X_20150210`.`feed_lookup`
      ### WHERE
      ###   @1=2065191
      ###   @2=245216597
      ###   @3='g2XXXXXXX'
      ###   @4=-117 (4294967179)
      ###   @5='XXXXXXX'
      ###   @6='jeXXXXXXX0'
      ### DELETE FROM `X_20150210`.`feed_lookup`
      ### WHERE
      ###   @1=2065192
      ###   @2=245216598
      ###   @3='X3XXXXXXXXXX'
      ###   @4=-117 (4294967179)
      ###   @5='XXXXXXXX'
      ###   @6='DCXXXXXXX'
       

      As the error quite rightly points out, the rows are missing on the slave:

       select * from feed_lookup where lookup_id in ( 2065191, 2065192);
      Empty set (0.02 sec)

      Raw binlog in private uploads

      Attachments

        Issue Links

          Activity

            danblack Daniel Black created issue -

            elenst Elena Stepanova added a comment -

            I haven't got it reproduced the easy way.
            danblack, could you please add SHOW CREATE TABLE and your cnf files?

            Thanks.
            elenst Elena Stepanova made changes -
            Due Date 2015-04-30
            danblack Daniel Black added a comment -

             /opt/mariadb-10/bin/my_print_defaults  --defaults-file=/opt/mariadb-10/my.cnf --mysqld
            --plugin_dir=/opt/mariadb-10/lib64/mysql/plugin/
            --port=3310
            --socket=/var/lib/mariadb/mysql-10.sock
            --slow_query_log_file=/var/lib/mariadb/slow_query_log-10.log
            --pid-file=/var/run/mysqld/mysqld-10.pid
            --datadir=/u01/mariadb-10
            --tmpdir=/u01tmp/mariadb-10/tmp
            --skip-external-locking
            --skip-name-resolve
            --key_buffer_size=200M
            --max_allowed_packet=100M
            --table_open_cache=1000
            --sort_buffer_size=2M
            --net_buffer_length=8K
            --read_buffer_size=256K
            --read_rnd_buffer_size=512K
            --myisam_sort_buffer_size=8M
            --query_cache_size=100M
            --query_cache_type=1
            --query_cache_limit=1M
            --thread_cache_size=20
            --max_connections=3200
            --wait_timeout=300
            --long_query_time=1
            --slow_query_log=ON
            --min-examined-row-limit=3000
            --log-slow-slave-statements=TRUE
            --log-slow-admin-statements=TRUE
            --log_queries_not_using_indexes=ON
            --optimizer_switch=mrr=on,mrr_sort_keys=on,index_merge=off,index_merge_intersection=off,index_merge_union=off,index_merge_sort_union=off
            --join_cache_level=8
            --default_storage_engine=InnoDB
            --innodb_file_per_table=1
            --innodb_file_format=barracuda
            --innodb_fast_shutdown=0
            --innodb_read_io_threads=16
            --innodb_write_io_threads=16
            --innodb_io_capacity=20000
            --innodb_flush_method=O_DIRECT
            --innodb_open_files=600
            --innodb_buffer_pool_size=24G
            --innodb_buffer_pool_instances=24
            --innodb_log_files_in_group=8
            --innodb_log_file_size=50M
            --slave_compressed_protocol=1
            --expire_logs_days=10
            --binlog_format=MIXED
            --binlog_annotate_row_events=ON
            --replicate_annotate_row_events=ON
            --query_cache_strip_comments=ON
            --innodb-buffer-pool-dump-at-shutdown=1
            --innodb-buffer-pool-load-at-startup=1
            --innodb_flush_neighbors=0
            --skip-slave-start
            --relay-log=tulwaurt005-relay-bin
            --log-bin=tulwaurt005-mysql-bin
            --report-host=tulwaurt005
            --server-id=20076
            --log_slow_verbosity=Query_plan,Innodb,explain
            --log-slave-updates
            --slave-parallel-threads=20
            --slave_domain_parallel_threads=10
            --binlog_commit_wait_count=20
            --binlog_commit_wait_usec=5000
            --use_stat_tables=complementary
            --histogram_size=255
            --optimizer_use_condition_selectivity=4
            --innodb_stats_traditional=false
            --userstat=1
            --query_response_time_stats=1
            --skip-slave-start=1
            --slave-skip-errors=1062,1146
            --replicate-do-db=X_20150210

            I had set binlog_commit_wait_count to 0 at runtime when this first occurred (to let replication catch up); however, the same segv occurred on restart after doing a start slave '5.5'.

            | feed_lookup | CREATE TABLE `feed_lookup` (
              `lookup_id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
              `id` int(11) NOT NULL,
              `external_id` varchar(27) NOT NULL,
              `feed_id` int(11) NOT NULL,
              `entity_type` varchar(25) NOT NULL,
              `external_group_id` varchar(36) DEFAULT NULL,
              PRIMARY KEY (`lookup_id`),
              UNIQUE KEY `LOOKUP` (`feed_id`,`entity_type`,`external_id`),
              KEY `real_id` (`id`),
              KEY `external_group_id` (`external_group_id`)
            ) ENGINE=InnoDB AUTO_INCREMENT=2365127 DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC |

            elenst Elena Stepanova made changes -
            Due Date 2015-04-30

            elenst Elena Stepanova added a comment -

            More trying, still no crash.

            As I understand it, you only start one slave, and it crashes on the very first event it attempts to execute, so I don't see much room for concurrency there. And at least one crash happened when your slave had only one connection in total (apparently the one that started the slave), hence no concurrency from direct slave updates either. So I've run out of ideas for tonight.

            Would you be able to do either of the following:

            • provide the slave datadir and the whole tulwaurt005-mysql-bin.000687?
            • run a debug server?
            elenst Elena Stepanova made changes -
            Due Date 2015-04-22
            elenst Elena Stepanova made changes -
            Due Date 2015-04-22
            Labels replication need_feedback replication
            danblack Daniel Black added a comment -

            I've got the binlog, however I don't have the slave dir.

            A bit more info, because I didn't have a backtrace:

            c++filt _ZN14rpl_group_infoD2Ev
            rpl_group_info::~rpl_group_info()

            rpl_group_info::~rpl_group_info()
            {   
              free_annotate_event();
              delete deferred_events;
              mysql_mutex_destroy(&sleep_lock);
              mysql_cond_destroy(&sleep_cond);
            }   

            As serial_rgi is the only rpl_group_info in handle_slave_sql, it must be this one. The destructor is also called from handle_slave_sql, so this is the delete at the bottom of the handle_slave_sql function.

            Which variable?...

            using objdump -C --disassemble mysqld > mysqld.objdump

            000000000068e130 <rpl_group_info::~rpl_group_info()>:
              68e130: 55                    push   %rbp
              68e131: 48 89 e5              mov    %rsp,%rbp
              68e134: 41 54                 push   %r12
              68e136: 53                    push   %rbx
              68e137: 48 83 bf f8 00 00 00  cmpq   $0x0,0xf8(%rdi)
              68e13e: 00
              68e13f: 48 89 fb              mov    %rdi,%rbx
              68e142: 48 8b 47 10           mov    0x10(%rdi),%rax
              68e146: 74 24                 je     68e16c <rpl_group_info::~rpl_group_info()+0x3c>
              68e148: c6 80 38 08 00 00 00  movb   $0x0,0x838(%rax)
              68e14f: 48 8b bf f8 00 00 00  mov    0xf8(%rdi),%rdi
              68e156: 48 85 ff              test   %rdi,%rdi
              68e159: 74 06                 je     68e161 <rpl_group_info::~rpl_group_info()+0x31>
              68e15b: 48 8b 07              mov    (%rdi),%rax
              68e15e: ff 50 48              callq  *0x48(%rax)
              68e161: 48 c7 83 f8 00 00 00  movq   $0x0,0xf8(%rbx)
              68e168: 00 00 00 00
              68e16c: 4c 8b a3 e8 00 00 00  mov    0xe8(%rbx),%r12
              68e173: 4d 85 e4              test   %r12,%r12
              68e176: 74 10                 je     68e188 <rpl_group_info::~rpl_group_info()+0x58>
              68e178: 4c 89 e7              mov    %r12,%rdi
              68e17b: e8 50 cb ff ff        callq  68acd0 <Deferred_log_events::~Deferred_log_events()>

            The backtrace shows this occurred at 0x68e148, setting some address to 0. As this is before the call to Deferred_log_events::~Deferred_log_events(), the SEGV is in free_annotate_event:

              inline void free_annotate_event()
              {
                if (m_annotate_event)
                {
                  this->thd->variables.binlog_annotate_row_events= 0;
                  delete m_annotate_event;
                  m_annotate_event= 0;
                }
              }

            The SEGV occurs on the assignment this->thd->variables.binlog_annotate_row_events= 0.

            this->thd (aka serial_rgi->thd) is an invalid address. And we see at the bottom of handle_slave_sql that serial_rgi->thd is set to 0 before the destructor is called. That's a really bad move, since free_annotate_event needs to set this->thd->variables.binlog_annotate_row_events to 0.

            bottom of handle_slave_sql

              serial_rgi->thd= rli->sql_driver_thd= 0;
              mysql_mutex_lock(&LOCK_thread_count);
              THD_CHECK_SENTRY(thd);
              thd->rgi_fake= thd->rgi_slave= NULL;
              delete serial_rgi;

            So any row-based event that fails while an annotate event is still pending should trigger this crash.

            elenst Elena Stepanova made changes -
            Labels need_feedback replication replication

            elenst Elena Stepanova added a comment -

            In case the event fails, wouldn't m_annotate_event be 0, so this->thd->variables.binlog_annotate_row_events= 0 doesn't really get executed?
            elenst Elena Stepanova made changes -
            Labels replication need_feedback replication
            danblack Daniel Black added a comment - - edited

            In case the event fails, wouldn't m_annotate_event be 0

            Obviously m_annotate_event is not 0, because of the position in the backtrace. The only place it can be set is delete_or_keep_event_post_apply, which is the only place with a call to set_annotate_event.

            It isn't set back to 0 in delete_or_keep_event_post_apply by the call to rgi->free_annotate_event(), because that only occurs if get_flags(Rows_log_event::STMT_END_F) is set.

            There are a couple of other places in rpl_parallel.cc where free_annotate_event gets called; however, since m_annotate_event is not 0 at the time of the segfault, they're not relevant.

            So you'd need an annotate event, followed by more than one binlog event in the group, where the event that fails isn't the last one. This matches the contents of the binlog I have, where there are over 10 delete-rows events in the group.

            Another possibility, under parallel replication (I'm not 100% sure this is possible), is that the exec_relay_log_event line int res= rli->parallel.do_event(serial_rgi, ev, event_size); was called. Is it possible that the annotate event and the failing delete-rows event were executed in different threads, and the delete-rows event was executed first, calling delete_or_keep_event_post_apply and clearing m_annotate_event, followed by the annotate event being run, setting m_annotate_event back to non-0?

            Either way, the fix is the same:

              serial_rgi->thd= rli->sql_driver_thd= 0;
              mysql_mutex_lock(&LOCK_thread_count);
              THD_CHECK_SENTRY(thd);
              thd->rgi_fake= thd->rgi_slave= NULL;
              delete serial_rgi;

            serial_rgi isn't used between that assignment and its deletion, so there is no need to set serial_rgi->thd to 0. Alternatively, call serial_rgi->free_annotate_event() before it, though that breaks the encapsulation of serial_rgi a bit more.

            danblack Daniel Black added a comment - working test case: https://github.com/openquery/mariadb-server/tree/MDEV-7864_binlog_annotations_segv
            elenst Elena Stepanova made changes -
            Fix Version/s 10.0 [ 16000 ]
            Assignee Kristian Nielsen [ knielsen ]
            Labels need_feedback replication replication verified
            danblack Daniel Black added a comment -

            Added the fix too: https://github.com/openquery/mariadb-server/commit/cb55f425ea36321d2a00326405cd86ae3e108b43.patch

            Cleaned up the test cases more. Need to skip the check of warnings in the error log.

            elenst Elena Stepanova made changes -
            Summary Slave SQL: Could not execute Delete_rows_v1 event on table X_20150210.feed_lookup; Can't find record in 'feed_lookup', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; then SEGV (signal 11) [PATCH] Slave SQL: Could not execute Delete_rows_v1 event on table X_20150210.feed_lookup; Can't find record in 'feed_lookup', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; then SEGV (signal 11)
            danblack Daniel Black made changes -
            Summary [PATCH] Slave SQL: Could not execute Delete_rows_v1 event on table X_20150210.feed_lookup; Can't find record in 'feed_lookup', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; then SEGV (signal 11) [PATCH] Slave SQL: stopping on non-last RBR event with annotations results in SEGV (signal 11)
            danblack Daniel Black added a comment -

            Changed the title because this is the general case: stopping a slave with binlog annotations when the failing RBR event isn't the last one in the group.

            To be clear, this only affects 10.0; it doesn't affect 5.5.

            danblack Daniel Black added a comment - corrected test case. Fixed regression. https://github.com/MariaDB/server/pull/49 test run: https://travis-ci.org/openquery/mariadb-server/jobs/59977445
            knielsen Kristian Nielsen made changes -
            Status Open [ 1 ] In Progress [ 3 ]

            knielsen Kristian Nielsen added a comment -

            Pushed into 10.0.18.

            Thanks, Daniel, for the bug analysis and test case!

            I fixed the bug in a different way. I think the clearing of serial_rgi->thd just before deleting serial_rgi does not make any sense. It was introduced in this commit, without any justification that I can see:

              commit 2e100cc5a493b6a0f6f907e0483a734c7fee2087
              Author: Michael Widenius <monty@askmonty.org>
              Date:   Mon Oct 14 00:24:05 2013 +0300

            So I fixed it by simply reverting that part of that change (one line).
            knielsen Kristian Nielsen made changes -
            Fix Version/s 10.0.18 [ 18702 ]
            Fix Version/s 10.0 [ 16000 ]
            Resolution Fixed [ 1 ]
            Status In Progress [ 3 ] Closed [ 6 ]

            knielsen Kristian Nielsen added a comment -

            Hm, I now noticed you actually suggested this way to fix it in an earlier comment.

            I still think this is the right fix (just removing the clearing of serial_rgi->thd). But do let me know if you think there is a reason a different fix is needed.
            danblack Daniel Black added a comment -

            While I agree that what you've done is the simplest fix, I added the condition here in free_annotate_event because I wasn't 100% sure that all the calls to free_annotate_event in rpl_parallel.cc occur when the thread is valid. Are you? Is one extra non-zero check during thread termination, to prevent a future segfault, worthwhile?

            https://github.com/MariaDB/server/blob/71c5275fbe0a62ea1950695c158cc7256fc7195e/sql/rpl_rli.h#L746

            I assume the reason this bug crept in in the first place is the ambiguity about whether this is a structure that just stores fields or one that actually exhibits some abstraction of responsibility for its own data management. Moving it more into the latter category is the reason for the other few changes: https://github.com/MariaDB/server/pull/49/files

            ratzpo Rasmus Johansson (Inactive) made changes -
            Workflow MariaDB v2 [ 60276 ] MariaDB v3 [ 66940 ]
            elenst Elena Stepanova made changes -
            serg Sergei Golubchik made changes -
            Workflow MariaDB v3 [ 66940 ] MariaDB v4 [ 148960 ]

            People

              knielsen Kristian Nielsen
              danblack Daniel Black
