MariaDB Server / MDEV-26224

InnoDB fails to remove AUTO_INCREMENT attribute

Details

    Description

      MDEV-6076 introduced persistent AUTO_INCREMENT values in InnoDB tables. The values are stored in the clustered index root page.

      While analyzing MDEV-19272, I noticed that InnoDB fails to remove an AUTO_INCREMENT attribute on a column in the following case:

      --source include/have_innodb.inc
      CREATE TABLE t (c INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
      ALTER TABLE t MODIFY c INT NOT NULL;
      INSERT INTO t SET c=1;
      DROP TABLE t;
      

      The value in the page would be wrongly updated like this:

      10.4 7ffa801cf2a7d0a4e55b84908dede6493c7ae73d

      #0  mlog_write_ull (ptr=0x7f4580618038 "", val=1, mtr=0x7f4580052248, mtr@entry=0x2) at /mariadb/10.4/storage/innobase/mtr/mtr0log.cc:304
      #1  0x0000558c761fe113 in page_set_autoinc (block=<optimized out>, index=<optimized out>, index@entry=0x7f456802f290, autoinc=<optimized out>, mtr=<optimized out>, mtr@entry=0x7f4580052248, reset=false)
          at /mariadb/10.4/storage/innobase/page/page0page.cc:250
      #2  0x0000558c763962fd in btr_cur_search_to_nth_level_func (index=<optimized out>, level=<optimized out>, level@entry=0, tuple=<optimized out>, tuple@entry=0x7f456802f290, mode=<optimized out>, 
          mode@entry=PAGE_CUR_LE, latch_mode=latch_mode@entry=2, cursor=<optimized out>, cursor@entry=0x7f4580051e00, ahi_latch=<optimized out>, file=<optimized out>, line=<optimized out>, mtr=<optimized out>, 
          autoinc=<optimized out>) at /mariadb/10.4/storage/innobase/btr/btr0cur.cc:2462
      #3  0x0000558c7625e521 in btr_pcur_open_low (index=<optimized out>, index@entry=0x7f456802f290, level=0, tuple=<optimized out>, tuple@entry=0x558c7693ee46, mode=mode@entry=PAGE_CUR_LE, 
          latch_mode=latch_mode@entry=2, cursor=0x7f4580051e00, cursor@entry=0x7f4580051df0, file=0x5 <error: Cannot access memory at address 0x5>, line=line@entry=2634, autoinc=1, mtr=0x7f4580052248)
          at /mariadb/10.4/storage/innobase/include/btr0pcur.ic:441
      #4  0x0000558c76263c8d in row_ins_clust_index_entry_low (flags=<optimized out>, flags@entry=0, mode=mode@entry=2, index=index@entry=0x7f456802f290, n_uniq=n_uniq@entry=1, entry=<optimized out>, 
          entry@entry=0x7f456802d830, n_ext=<optimized out>, n_ext@entry=0, thr=0x7f45680461f0) at /mariadb/10.4/storage/innobase/row/row0ins.cc:2633
      #5  0x0000558c7626a42b in row_ins_clust_index_entry (index=0x7f456802f290, entry=0x7f456802d830, thr=0x7f45680461f0, n_ext=n_ext@entry=0) at /mariadb/10.4/storage/innobase/row/row0ins.cc:3217
      #6  0x0000558c7626ca51 in row_ins_index_entry (index=0x7f456802f290, entry=0x7f456802d830, thr=0x7f45680461f0) at /mariadb/10.4/storage/innobase/row/row0ins.cc:3343
      #7  row_ins_index_entry_step (node=0x7f4568045fc8, thr=0x7f45680461f0) at /mariadb/10.4/storage/innobase/row/row0ins.cc:3512
      #8  row_ins (node=0x7f4568045fc8, thr=0x7f45680461f0) at /mariadb/10.4/storage/innobase/row/row0ins.cc:3671
      #9  row_ins_step (thr=<optimized out>, thr@entry=0x7f45680461f0) at /mariadb/10.4/storage/innobase/row/row0ins.cc:3821
      #10 0x0000558c7628c0a0 in row_insert_for_mysql (mysql_rec=<optimized out>, mysql_rec@entry=0x7f456802b058 "\377\001", prebuilt=<optimized out>, ins_mode=<optimized out>)
          at /mariadb/10.4/storage/innobase/row/row0mysql.cc:1420
      #11 0x0000558c76107803 in ha_innobase::write_row (this=0x7f456803d280, record=0x7f456802b058 "\377\001") at /mariadb/10.4/storage/innobase/handler/ha_innodb.cc:8122
      #12 0x0000558c75f289e6 in handler::ha_write_row (this=0x7f456803d280, buf=0x7f456802b058 "\377\001") at /mariadb/10.4/sql/handler.cc:6757
      #13 0x0000558c75c71cfb in write_record (thd=thd@entry=0x7f4568018d08, table=table@entry=0x7f456803c488, info=info@entry=0x7f45800531e0) at /mariadb/10.4/sql/sql_insert.cc:2061
      #14 0x0000558c75c6f328 in mysql_insert (thd=<optimized out>, thd@entry=0x7f4568018d08, table_list=<optimized out>, fields=<optimized out>, values_list=<optimized out>, update_fields=<optimized out>, 
          update_values=<optimized out>, duplic=DUP_ERROR, ignore=<optimized out>) at /mariadb/10.4/sql/sql_insert.cc:1078
      #15 0x0000558c75cada2a in mysql_execute_command (thd=thd@entry=0x7f4568018d08) at /mariadb/10.4/sql/sql_parse.cc:4600
      #16 0x0000558c75ca5874 in mysql_parse (thd=thd@entry=0x7f4568018d08, rawbuf=0x7f4568024090 "INSERT INTO t SET c=1", length=<optimized out>, parser_state=<optimized out>, parser_state@entry=0x7f4580054680, 
          is_com_multi=false, is_next_command=<optimized out>) at /mariadb/10.4/sql/sql_parse.cc:7992
      

      Note: in the native ALTER TABLE API there is no flag for removing the AUTO_INCREMENT attribute. In this case, ha_innobase::check_if_supported_inplace_alter() would observe ha_alter_info->handler_flags == ALTER_CHANGE_COLUMN_DEFAULT.

      InnoDB can support ALGORITHM=INSTANT removal of the AUTO_INCREMENT attribute, but it needs to remove the attribute from dict_table_t in commit_cache_norebuild(). For a table rebuild, there is no issue: after

      ALTER TABLE t MODIFY c INT NOT NULL, FORCE;
      

      the function page_set_autoinc() will not be invoked.

      Activity

            Before MDEV-15563 in 10.4.3, an AUTO_INCREMENT attribute could only be removed by ALGORITHM=COPY:

            --source include/have_innodb.inc
            CREATE TABLE t (c INT AUTO_INCREMENT NULL UNIQUE) ENGINE=InnoDB;
            ALTER TABLE t MODIFY c INT NOT NULL, ALGORITHM=INPLACE;
            DROP TABLE t;
            

            10.2 742b3a0d39875584f994f4bd01d57cc31d1dddca

            mysqltest: At line 3: query 'ALTER TABLE t MODIFY c INT NOT NULL, ALGORITHM=INPLACE' failed: 1846: ALGORITHM=INPLACE is not supported. Reason: Cannot change column type INPLACE. Try ALGORITHM=COPY
            

            marko Marko Mäkelä added a comment

            The following test shows a problem with FORCE (table rebuild) as well. We would expect the fields to contain 0, but the Perl code is reading the value 1 from both files.

            --source include/have_innodb.inc
            CREATE TABLE t1 (c INT AUTO_INCREMENT NULL UNIQUE) ENGINE=InnoDB;
            ALTER TABLE t1 MODIFY c INT NULL, ALGORITHM=INSTANT;
            INSERT INTO t1 SET c=1;
            CREATE TABLE t2 (c INT AUTO_INCREMENT NULL UNIQUE) ENGINE=InnoDB;
            ALTER TABLE t2 MODIFY c INT NULL, FORCE, ALGORITHM=INPLACE;
            INSERT INTO t2 SET c=1;
            FLUSH TABLES t1,t2 FOR EXPORT;
            --let INNODB_PAGE_SIZE=`select @@innodb_page_size`
            --let MYSQLD_DATADIR = `SELECT @@datadir`
            --perl
            my $ps= $ENV{INNODB_PAGE_SIZE};
            my $file= "$ENV{MYSQLD_DATADIR}/test/t1.ibd";
             
            open(FILE, "<", $file) || die "Unable to open $file\n";
            sysseek(FILE, 3*$ps+38+18+4, 0) || die "Unable to seek $file\n";
            die "Unable to read $file\n" unless sysread(FILE, $_, 4) == 4;
            my $t1=unpack("N",$_);
            close(FILE);
             
            $file= "$ENV{MYSQLD_DATADIR}/test/t2.ibd";
            open(FILE, "<", $file) || die "Unable to open $file\n";
            sysseek(FILE, 3*$ps+38+18+4, 0) || die "Unable to seek $file\n";
            die "Unable to read $file\n" unless sysread(FILE, $t_, 4) == 4;
            my $t2=unpack("N",$_);
            close(FILE);
            print "AUTO_INCREMENT: $t1,$t2\n";
            EOF
            UNLOCK TABLES;
             
            DROP TABLE t1,t2;
            
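            The magic seek offset 3*$ps+38+18+4 in the Perl above decodes as: page number 3 (the clustered index root page in these small tables) times the page size, plus the 38-byte FIL page header, plus offset 18 of PAGE_MAX_TRX_ID within the index page header (the field that root pages reuse as the 64-bit PAGE_ROOT_AUTO_INC since MDEV-6076), plus 4 so that only the low 32 bits are read. A minimal Python sketch of the same read (the function names are mine, not from the server source):

```python
import struct

FIL_PAGE_DATA = 38     # size of the FIL page header
PAGE_MAX_TRX_ID = 18   # offset in the index page header; the root page
                       # reuses this 64-bit field as PAGE_ROOT_AUTO_INC
ROOT_PAGE_NO = 3       # clustered index root page in these small tables

def root_autoinc_offset(page_size):
    # +4 skips the high 32 bits, matching the 4-byte read in the test
    return ROOT_PAGE_NO * page_size + FIL_PAGE_DATA + PAGE_MAX_TRX_ID + 4

def read_root_autoinc(ibd_image, page_size=16384):
    off = root_autoinc_offset(page_size)
    # ">I" = big-endian 32-bit integer, like Perl's unpack("N", ...)
    return struct.unpack(">I", ibd_image[off:off + 4])[0]
```

            With the default 16 KiB page size the offset is 49212; after the AUTO_INCREMENT attribute has been removed, the expected value at that offset is 0.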

            marko Marko Mäkelä added a comment
            marko Marko Mäkelä added a comment - - edited

            There exist assertions in dict_table_t::instant_column() and elsewhere that assume that the AUTO_INCREMENT attribute will not change. It is not trivial to fix this, and the fix will have to be tested thoroughly with RQG against assertion failures.

            To fix my test case, we should change commit_cache_norebuild() and commit_try_rebuild() and probably introduce a flag for changing (removing) an AUTO_INCREMENT attribute.


            It turns out that this bug is actually caused by MDEV-12836, which specifically allows the removal of an AUTO_INCREMENT attribute.

            marko Marko Mäkelä added a comment

            The above test contains an error in the Perl code: the PAGE_ROOT_AUTO_INC field for table t2 is read into $t_ instead of $_, but $_ is unpacked into $t2, so $t2 shows t1's PAGE_ROOT_AUTO_INC. If we fix it, we will see that the INPLACE algorithm is not affected. For the INPLACE algorithm, new index metadata is created in prepare_inplace_alter_table_dict(), so there is no need to reset table->persistent_autoinc:

            prepare_inplace_alter_table_dict(...)                                           
            {                                                                               
            ...                                                                             
                            for (ulint a = 0; a < ctx->num_to_add_index; a++) {             
                                    dict_index_t* index = ctx->add_index[a];                
            ...                                                                             
                                    index = create_index_dict(ctx->trx, index, add_v);      
            ...                                                                             
                            }                                                               
            ...                                                                                                                               
            }
            

            For the INSTANT algorithm, ha_innobase::prepare_inplace_alter_table() does not even call prepare_inplace_alter_table_dict(), because the condition !(ha_alter_info->handler_flags & ~INNOBASE_INPLACE_IGNORE) is true.

            And this is also the reason why we can't call commit_cache_norebuild() in ha_innobase::commit_inplace_alter_table(). To call commit_cache_norebuild() we need ha_alter_info->handler_ctx, which, in turn, is created in ha_innobase::prepare_inplace_alter_table() only if ha_alter_info->handler_flags & ~INNOBASE_INPLACE_IGNORE is nonzero:

            ha_innobase::prepare_inplace_alter_table(...)                                   
            {                                                                               
            ...                                                                             
                    if (!(ha_alter_info->handler_flags & ~INNOBASE_INPLACE_IGNORE)) {       
                            /* Nothing to do */                                             
                            ...                                                             
                            DBUG_RETURN(false);                                             
                    }                                                                       
            ...                                                                             
                    if (!(ha_alter_info->handler_flags & INNOBASE_ALTER_DATA)               
                        || ((ha_alter_info->handler_flags & ~(INNOBASE_INPLACE_IGNORE       
                                                              | INNOBASE_ALTER_NOCREATE     
                                                              | INNOBASE_ALTER_INSTANT))    
                            == ALTER_OPTIONS                                                
                            && !alter_options_need_rebuild(ha_alter_info, table))) {        
                                                                                            
                            ha_innobase_inplace_ctx *ctx = NULL;                            
                            if (heap) {                                                     
                                    ctx = new ha_innobase_inplace_ctx(...);                                                                   
                                    ha_alter_info->handler_ctx = ctx;                                                                         
                            }                                                               
                            ...                                                                                                               
                            DBUG_RETURN(false);                                             
                    }                                                                       
            ...                                                                             
                    ha_alter_info->handler_ctx = new ha_innobase_inplace_ctx(...);          
                                                                                            
                    DBUG_RETURN(prepare_inplace_alter_table_dict(...));                     
            }
            

            INNOBASE_INPLACE_IGNORE also contains ALTER_COLUMN_DEFAULT, which is set for the statement ALTER TABLE t1 MODIFY c INT NULL:

            static const alter_table_operations INNOBASE_INPLACE_IGNORE                     
                    = ALTER_COLUMN_DEFAULT                                                  
                    | ALTER_PARTITIONED                                                     
                    | ALTER_COLUMN_COLUMN_FORMAT                                            
                    | ALTER_COLUMN_STORAGE_TYPE                                             
                    | ALTER_CONVERT_TO                                                      
                    | ALTER_VIRTUAL_GCOL_EXPR                                               
                    | ALTER_DROP_CHECK_CONSTRAINT                                           
                    | ALTER_RENAME                                                          
                    | ALTER_COLUMN_INDEX_LENGTH                                             
                    | ALTER_CHANGE_INDEX_COMMENT;
            
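            The effect of this set can be modeled in a few lines of Python (the flag bit values below are invented for illustration; the real ones are alter_table_operations bits in the server source): whenever every requested operation falls inside INNOBASE_INPLACE_IGNORE, handler_flags & ~INNOBASE_INPLACE_IGNORE is zero, so prepare_inplace_alter_table() returns early and never allocates handler_ctx.

```python
# Invented bit values, for illustration only
ALTER_COLUMN_DEFAULT = 1 << 0   # in the ignore set
ALTER_RENAME         = 1 << 1   # in the ignore set
ALTER_ADD_COLUMN     = 1 << 2   # NOT in the ignore set

INNOBASE_INPLACE_IGNORE = ALTER_COLUMN_DEFAULT | ALTER_RENAME

def creates_handler_ctx(handler_flags):
    # mirrors the early return on
    # !(ha_alter_info->handler_flags & ~INNOBASE_INPLACE_IGNORE)
    return bool(handler_flags & ~INNOBASE_INPLACE_IGNORE)
```

            Here creates_handler_ctx(ALTER_COLUMN_DEFAULT) is False, which is exactly the path taken by ALTER TABLE t1 MODIFY c INT NULL, so commit_inplace_alter_table() later finds no ctx to work with.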

            Marko explained why the ALTER_COLUMN_DEFAULT flag is ignored:
            "Normally, changing the default values is not something that InnoDB cares about. Default values are handled by the SQL layer, not InnoDB. The only exception is instant ADD COLUMN where we must know the default value of the instantly added column, and that value must be constant for all rows during the ALTER TABLE execution."

            So, to call commit_cache_norebuild() in ha_innobase::commit_inplace_alter_table(), we would need to change the logic of ha_innobase::prepare_inplace_alter_table().

            On the other hand, why do we need to call commit_cache_norebuild() at all? Why not just reset table->persistent_autoinc directly in ha_innobase::commit_inplace_alter_table():

            --- a/storage/innobase/handler/handler0alter.cc
            +++ b/storage/innobase/handler/handler0alter.cc
            @@ -11036,6 +11036,10 @@ ha_innobase::commit_inplace_alter_table(
                            DBUG_ASSERT(!ctx0);
                            MONITOR_ATOMIC_DEC(MONITOR_PENDING_ALTER_TABLE);
                            ha_alter_info->group_commit_ctx = NULL;
            +               if (table->found_next_number_field
            +                       && !altered_table->found_next_number_field) {
            +                       m_prebuilt->table->persistent_autoinc = 0;
            +               }
                            DBUG_RETURN(false);
                    }
            

            persistent_autoinc is set during table open, based on TABLE::found_next_number_field from the SQL layer. So resetting persistent_autoinc should be crash-safe, and it is not supposed to be rolled back once ha_innobase::commit_inplace_alter_table() has finished successfully.
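            The proposed hunk boils down to a simple condition on the before/after TABLE objects. A toy Python model of the commit-time reset (the class and function names are mine, not the server's, and persistent_autoinc is reduced to a plain attribute):

```python
class DictTable:
    """Minimal stand-in for the relevant part of InnoDB's dict_table_t."""
    def __init__(self, persistent_autoinc):
        # nonzero means PAGE_ROOT_AUTO_INC is being maintained
        self.persistent_autoinc = persistent_autoinc

def commit_inplace_alter(old_has_autoinc, new_has_autoinc, table):
    # mirrors the proposed hunk: if this ALTER removed the
    # AUTO_INCREMENT attribute, stop maintaining PAGE_ROOT_AUTO_INC
    if old_has_autoinc and not new_has_autoinc:
        table.persistent_autoinc = 0
    return table
```

            The reset only fires on the transition "had AUTO_INCREMENT, no longer has it"; any other ALTER leaves the cached value untouched.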

            vlad.lesin Vladislav Lesin added a comment

            I think that it could be cleaner if the assignment that resets ha_alter_info->group_commit_ctx = NULL were not skipped in ha_innobase::commit_inplace_alter_table(), and the metadata for all partitions were updated instead.

            The proposed solution looks technically correct, but then the comment before the relaxed assertion in ha_partition::commit_inplace_alter_table() will have to be adjusted.

            marko Marko Mäkelä added a comment
            vlad.lesin Vladislav Lesin added a comment - - edited

            marko, below is an edited version of my message from our Slack discussion about iterating over partitions for INNOBASE_INPLACE_IGNORE flags in ha_innobase::commit_inplace_alter_table().

            Why can't we do the same thing for INNOBASE_INPLACE_IGNORE alter table flags in ha_innobase::commit_inplace_alter_table() as for commit_cache_norebuild(), i.e. iterate over the partitions with ha_alter_info->group_commit_ctx and leave the code in ha_partition unchanged? The question is the same as for non-partitioned tables, and the answer is the same as in my earlier comment.

            ha_alter_info->group_commit_ctx is the analogue of ha_alter_info->handler_ctx for partitioned tables. ha_alter_info->handler_ctx is filled in ha_innobase::prepare_inplace_alter_table(), and the partition engine invokes ha_innobase::prepare_inplace_alter_table() for each partition and pushes each created ha_alter_info->handler_ctx into the ha_alter_info->group_commit_ctx array.

            ha_innobase::prepare_inplace_alter_table() does not create ha_alter_info->handler_ctx for INNOBASE_INPLACE_IGNORE flags (see the details above), so the ha_alter_info->group_commit_ctx array is filled with NULLs for INNOBASE_INPLACE_IGNORE.

            To change the metadata for all partitions for INNOBASE_INPLACE_IGNORE flags in ha_innobase::commit_inplace_alter_table(), we would need to change the logic in ha_innobase::prepare_inplace_alter_table() for those flags.

            I would not do this for such a small change as resetting persistent_autoinc.


            vlad.lesin, I agree that your proposed solution is technically correct, and possibly also the simplest solution. The reason why changes to partitioned tables are committed in a single call is durability. This change does not affect durability, because it is solely about updating the InnoDB data dictionary caches (one InnoDB dict_table_t per partition or sub-partition).

            I would only request you to revise the comment in ha_partition::commit_inplace_alter_table() to match the revised design.

            marko Marko Mäkelä added a comment

            People

              vlad.lesin Vladislav Lesin
              marko Marko Mäkelä