MariaDB Server / MDEV-16329

Engine-independent online ALTER TABLE

Details

    Description

      Implement online ALTER TABLE above the storage engine layer by mimicking what InnoDB does since MariaDB 10.0.

      Intro

      ALTER TABLE can perform many different table metadata alterations, individually or batched (several alterations at once). It supports different algorithms for applying those alterations, and different lock levels restricting access to the table while it is being altered. Which algorithm and lock level are used depends on the storage engine, the requested alterations, and the explicitly specified algorithm and lock, if any. If no algorithm or lock level is specified explicitly, the server is supposed to select the best algorithm/lock combination automatically.

      While certain alterations (such as adding a column) can be done by certain storage engines (such as InnoDB) internally, using the InnoDB-specific ALGORITHM=INSTANT and without locking the table (LOCK=NONE), the most universal ALTER TABLE algorithm, the one that supports arbitrary alterations in arbitrary combinations, is the COPY algorithm. COPY locks the table, allowing only read access for the whole duration of the ALTER TABLE. When the server has to resort to the COPY algorithm (because no other algorithm can perform the requested set of alterations), it often means long periods of the application being essentially down, because the table cannot be written to.

      The goal of this task is to allow the COPY algorithm to work without read-locking the table. In other words, this should make the combination ALGORITHM=COPY, LOCK=NONE possible.

      Implementation

      The COPY algorithm for ALTER ONLINE TABLE is supposed to do the following:

      1. Exclusively acquire the table Metadata Lock (MDL).
      2. Acquire the table lock for read (TL_READ).
      3. Read the first record. If the table is empty, the online phase is skipped (go to step 11).
      4. Set up a separate, per-table row-based replication stream for tracking changes from concurrent DMLs ("online changes").
      5. Downgrade the MDL lock.
      6. Copy the table contents (using a non-locking read if the storage engine supports one).
      7. Apply the online changes collected so far from the replicated stream.
      8. Release the table lock.
      9. Exclusively lock the table MDL (upgrading from MDL_SHARED_WRITE).
      10. Apply any remaining online changes.
      11. Swap the old and new tables, unlock, and drop the old table.
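      The steps above can be sketched as a toy simulation, with Python standing in for server code (the function name online_copy and the data shapes are invented for illustration; the real copy loop lives in copy_data_between_tables()):

```python
# Toy simulation of the online COPY flow: bulk-copy a snapshot of the
# table while concurrent DML accumulates in a change log (the "online
# changes"), then replay that log onto the new copy (steps 6, 7 and 10).

def online_copy(old_rows, change_log, transform):
    """old_rows: {key: row} snapshot taken at the start of the copy.
    change_log: ('INSERT'|'UPDATE'|'DELETE', key, row) events recorded
    by the per-table row-based replication during the copy.
    transform: per-row conversion to the new table definition."""
    new_table = {}
    for key, row in old_rows.items():        # step 6: bulk copy
        new_table[key] = transform(row)
    for op, key, row in change_log:          # steps 7/10: apply online changes
        if op == 'DELETE':
            new_table.pop(key, None)
        else:                                # INSERT or UPDATE
            new_table[key] = transform(row)
    return new_table

# Concurrent DML that happened while the copy was running:
log = [('INSERT', 3, {'a': 30}), ('UPDATE', 1, {'a': 99}), ('DELETE', 2, None)]
result = online_copy({1: {'a': 10}, 2: {'a': 20}}, log,
                     lambda r: {'a': r['a'], 'b': 0})   # the ALTER adds column b
```

      In the real flow the log can be applied repeatedly while the table is still writable (step 7), with one final, short pass under the exclusive MDL (step 10).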

      This would remove some limitations that currently exist with the InnoDB-only online table rebuild. Basically, anything that is supported by ALGORITHM=COPY should 'just work' (see, however, the Limitations section). The bulk copying could still happen in copy_data_between_tables(). A few examples:

      1. Arbitrary changes of column type will be possible, without duplicating any conversion logic.
      2. It will be possible to add virtual columns (materialized or not) together with adding indexes, while allowing concurrent writes (MDEV-13795, MDEV-14332).
      3. The ENGINE or the partitioning of a table can be changed, just like any other attribute.

      [Not implemented here] We should remove the online table rebuild code from InnoDB (row_log_table_apply() and friends), and just let InnoDB fall back to this. The only ALTER ONLINE TABLE that could better be implemented inside storage engines would be ADD INDEX. Then, ALGORITHM=INPLACE would no longer be misleading, because it would mean exactly the same as the ALGORITHM=NOCOPY that was introduced in MDEV-13134. Before this, we must implement MDEV-515 (bulk load into an empty InnoDB table) to avoid a performance regression.

      Behavior of different engines

      The per-engine behavior depends on what operations can happen concurrently while TL_READ is held.

      • InnoDB can do any DML (except TRUNCATE, presumably). It lazily opens the read view once the first record is read during the copy stage. This means that, in theory, some transaction could slip in between the moment the table is locked with TL_READ and the moment the first record is read. This is why we first read one record out, and only then set up the online change buffer.
      • MyISAM/Aria only allow inserts in parallel with reads: the table's last record offset is remembered for the table handle, so the copy stage reads only the data that was already there. Other DMLs are blocked until the table lock is released.
      • Online is disabled for temporary tables.
      • For other engines, it depends on whether it is possible to acquire a particular table lock in parallel with TL_READ.

      Limitations

      • The embedded server doesn't support LOCK=NONE, until HAVE_REPLICATION is enabled there (or until some finer refactoring).
      • DROP SYSTEM VERSIONING is not currently supported, but the support can be added on demand
      • ALTER TABLE ... ORDER BY is not and cannot be supported
      • Tables which are referenced by FOREIGN KEYs with CASCADE operations, see MDEV-29068
      • ALTER IGNORE TABLE
      • Adding AUTO_INCREMENT to an existing column, when NO_AUTO_VALUE_ON_ZERO is not in effect and there is no unchanged UNIQUE NOT NULL key. A NULLable column can never be converted to AUTO_INCREMENT with online COPY.
      • Sequences are not supported
      • ADD COLUMN ... AUTO_INCREMENT and ADD COLUMN ... DEFAULT(NEXTVAL(..))
      • MODIFY ... NOT NULL DEFAULT(NEXTVAL(..)), if the column initially was NULLable
      • Engines S3 and CONNECT

      [Old part] Challenges

      We should replicate the online rebuild on slaves in parallel, so that the master and slaves are able to commit at roughly the same time. This would be similar to MDEV-11675, which would still be needed for native online ADD INDEX, since that avoids copying the table.

      In InnoDB, there is some logic for logging the changes when the PRIMARY KEY columns are changed, or a PRIMARY KEY is being added. The 'row event log' online_log will additionally contain the PRIMARY KEY values in the new table, so that the records can easily be found. The online_log will contain INSERT, UPDATE, and DELETE events.

      We will need some interface from ROLLBACK inside the storage engine to the 'row event log', so that BEGIN; INSERT; ROLLBACK will also create a DELETE event. Similarly, we will need an interface that allows CASCADE or SET NULL operations from FOREIGN KEY constraints to be relayed to the 'row event log'.
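      The kind of interface described here can be illustrated with a minimal sketch (RowEventLog and its methods are invented names, not an actual server API; the point is only that a rolled-back INSERT must leave a compensating DELETE in the row event log):

```python
# Toy illustration: if DML is logged to the row event log immediately,
# ROLLBACK must emit compensating events so that the new copy of the
# table stays consistent with the old one.

class RowEventLog:
    def __init__(self):
        self.events = []   # events that the ALTER's log apply will see
        self.undo = []     # compensations for the currently open transaction

    def insert(self, key, row):
        self.events.append(('INSERT', key, row))
        self.undo.append(('DELETE', key, None))   # the undo of INSERT is DELETE

    def commit(self):
        self.undo.clear()   # nothing to compensate after COMMIT

    def rollback(self):
        # BEGIN; INSERT; ROLLBACK must leave a DELETE event in the log.
        self.events.extend(reversed(self.undo))
        self.undo.clear()

log = RowEventLog()
log.insert(1, {'a': 1})
log.rollback()
```

      CASCADE and SET NULL operations triggered by FOREIGN KEY constraints would need to feed the same log through a similar hook.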

      Starting with MariaDB 10.2, there is an optimization that avoids unnecessarily sorting the data by PRIMARY KEY when the sorting does not change. Search for skip_pk_sort. It would be nice if the future MDEV-515 code inside InnoDB could be informed of this, so that it can assume that the data is already sorted by PRIMARY KEY.

      If there are FOREIGN KEY constraints on the table being rebuilt, then this approach should work just as well as the current online table rebuild in InnoDB: the constraints would be enforced on the old copy of the table until the very end, when we switch the tables, and from that point on, on the new copy of the table.

      Initially, we could disable ONLINE...ADD FOREIGN KEY. That could be easier to implement after moving the FOREIGN KEY processing from InnoDB to the SQL layer.

      Activity

            In ALGORITHM=COPY, column type conversions are implemented in Copy_field::do_copy(), which is called by copy_data_between_tables(). This makes use of a function pointer, pointing to a conversion function, such as Field_long::store() or do_copy_not_null(). These conversion functions require that the data be available in Field::ptr.

            InnoDB stores data in a different format internally. Integers are stored in big-endian format, and the sign bit is inverted, so that data can be compared with memcmp(). In order to use Copy_field, the ALGORITHM=INPLACE code in InnoDB would have to convert both the source data and the copied data. It seems that we would have to refactor Copy_field and Field::get_copy_func() so that the copied data would be in the storage engine format.
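            For background, the integer format described here can be modelled as follows (a simplified sketch, not actual InnoDB code):

```python
# Simplified model of InnoDB's internal integer format: big-endian with
# the sign bit inverted, so that a plain byte-wise comparison (memcmp)
# orders encoded values the same way as signed integer comparison.

def innodb_encode_int32(v: int) -> bytes:
    # Flip the sign bit, then store the value big-endian.
    return ((v ^ (1 << 31)) & 0xFFFFFFFF).to_bytes(4, 'big')

a = innodb_encode_int32(-5)
b = innodb_encode_int32(0)
c = innodb_encode_int32(7)
# Byte-wise order matches numeric order: a < b < c.
```

            Copy_field, by contrast, expects the row data in the server's TABLE::record format at Field::ptr, which is why a conversion step is unavoidable in either direction.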

            Instant (failure-free) type conversions can be implemented in MDEV-11424 as a special case, without depending on this task.

            marko Marko Mäkelä added a comment
            marko Marko Mäkelä added a comment - - edited

            Remove-InnoDB-online-table-rebuild.patch is a patch against mariadb-10.4.0 that removes most of the InnoDB code related to online table rebuild. It compiles and links, but I did not test it. More changes would be needed in the file handler0alter.cc. The files row0uins.cc and row0umod.cc implement rollback. In InnoDB, the online table rebuild code would log and possibly apply any DML to the copy of the table before COMMIT, and therefore it must log the affected row operations for ROLLBACK as well.

            The cross-engine online table rebuild might be easiest to implement by deferring log apply to the COMMIT of each DML transaction. In that way, there is no issue with ROLLBACK. But then, in order to fix MDEV-11675 (or not to reintroduce it) the log should not be replicated, and instead some logic should be added to replication slaves to generate and apply ‘row event log’ locally for the being-rebuilt table.

            Inside InnoDB, online ADD INDEX would remain supported with ALGORITHM=INPLACE. In many cases, it is a lighter operation than a full table rebuild. Other supported operations with ALGORITHM=INPLACE would be DROP INDEX and any ALGORITHM=INSTANT operations.


            MDEV-18127 gives a strong reason why it would be better to defer the logging to the COMMIT time of DML transactions. In the current InnoDB online table rebuild which logs every row operation immediately, if any DML transaction was aborted due to duplicate key error, then also the online ALTER TABLE could be aborted when applying the log.

            marko Marko Mäkelä added a comment

            I believe that some operations will remain impossible to do online (while allowing concurrent modifications). Here are a few examples:

            • Adding an AUTO_INCREMENT column to a table. (Which values to assign for concurrent DML? This would very likely be nondeterministic and very challenging in a replication environment.)
            • Dropping the PRIMARY KEY of a table without adding one. (This might be doable, but applying the log of concurrent changes could be very slow.)
            • ALTER IGNORE TABLE with ADD UNIQUE INDEX or ADD PRIMARY KEY would produce nondeterministic results if concurrent changes were allowed. Hence, we’d better lock the table. Other operations within ALTER IGNORE TABLE (such as replacing NULL values when adding NOT NULL) should be fine.
            • ALTER IGNORE TABLE with lossy data conversions on columns that are part of a UNIQUE KEY or PRIMARY KEY could lead to nondeterministic results with concurrent modifications.
            marko Marko Mäkelä added a comment

            We should extend the progress reporting for ALTER TABLE. Between copying the data and applying the log, we should invoke thd_progress_next_stage(), and then keep invoking thd_progress_report() also when applying the log of changes.

            marko Marko Mäkelä added a comment

            thiru pointed out that it might not be a good idea to apply Remove-InnoDB-online-table-rebuild.patch after all. A native table-rebuilding ALTER TABLE inside InnoDB could be faster, because it would avoid conversions between the InnoDB and TABLE::record formats. For some operations, such as character set conversions or adding stored generated columns, I believe it is better to pay the overhead of these conversions than to try to duplicate the logic inside InnoDB.

            marko Marko Mäkelä added a comment

            I originally filed this bug to address MySQL Bug #77097 by making InnoDB online table rebuild support data type changes. Later, after evaluating the changes needed, it seemed to be more useful to support engine-independent online table rebuild.

            marko Marko Mäkelä added a comment

            In MySQL Bug #98600, a user complains about the ‘fake duplicate’ problem, which occurs because InnoDB table rebuild is writing online_log before a row operation has been successfully applied to all indexes. For rolling back the operation, another online_log record would be written, but the intermittent duplicate key error will make the table rebuild fail. The problem exists since MySQL 5.6, and it should also affect the non-rebuilding creation of UNIQUE INDEX.

            An elegant way to prevent such ‘fake duplicates’ is to buffer row events and write them only after the row operation has been successfully performed, or the entire transaction has been committed.

            marko Marko Mäkelä added a comment
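            The buffering idea discussed above can be sketched as follows (DeferredEventLog is an invented name; this only illustrates the ordering guarantee, not a real server interface):

```python
# Toy sketch of the 'buffer until success' idea: row events are staged
# per transaction and published to the shared log only on success, so an
# aborted DML (e.g. one that hit a duplicate-key error) contributes
# nothing and cannot make the table rebuild fail spuriously.

class DeferredEventLog:
    def __init__(self):
        self.published = []   # what the ALTER's log apply will see

    def run_transaction(self, events, succeeds):
        staged = list(events)          # buffered, not yet visible
        if succeeds:
            self.published.extend(staged)
        # on failure the staged events are simply discarded

log = DeferredEventLog()
log.run_transaction([('INSERT', 1)], succeeds=True)
log.run_transaction([('INSERT', 1)], succeeds=False)  # duplicate key, aborted
```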

            marko, that's strange, because in the case of transactional engines the data is not immediately written to the binlog during a row operation anyway. Instead, it is written to a thread-local transaction cache, which is flushed to the binlog during the commit() call of binlog_hton.

            nikitamalyavin Nikita Malyavin added a comment

            MDEV-15250 covers the ‘fake duplicate’ problem in InnoDB native online table rebuild (or online CREATE UNIQUE INDEX).

            marko Marko Mäkelä added a comment

            I’m glad to see that the tests include a case where a concurrent UPDATE during ADD PRIMARY KEY causes a failure of the ALTER TABLE operation. But, I do not see that test for INSERT, or for an UPDATE that would be executed via a foreign key constraint (ON UPDATE CASCADE).

            marko Marko Mäkelä added a comment
            rjasdfiii Rick James added a comment -

            ALTER syntax allows multiple changes. Is it always possible to perform all of them in a single command? Are there some tricky cases that need special care? Perhaps:

            • Swapping the names of two columns;
            • DROPping and ADDing a different PRIMARY KEY;
            • Multiple PARTITION actions;
            • What if the old schema would raise an FK or uniqueness exception but the new schema would not? (Or vice versa.)

            Generally, all of that is allowed and should work. Not everything is tested yet, though; it's in the todo.

            serg Sergei Golubchik added a comment

            serg, I think that rjasdfiii must be aware of the following error. Do we have a plan to remove it?

            ER_ALTER_OPERATION_NOT_SUPPORTED_REASON_PARTITION
                    chi "分区特定操作尚不支持锁定/算法"
                    eng "Partition specific operations do not yet support LOCK/ALGORITHM"
                    spa "Las operaciones específicas de partición aún no soportan LOCK/ALGORITHM"
            

            I don’t think it was ever allowed to combine partitioning-related operations (such as ALTER TABLE…DROP PARTITIONING) with others, such as ADD INDEX or DROP INDEX. Some combinations might even be a syntax error. In the development branch, I do not see any added tests that would attempt to trigger such errors.

            I believe that there could be other restrictions as well, at least around versioned tables.

            My implementation of online table rebuild (WL#6255) in MySQL 5.6.8 does support online ADD PRIMARY KEY. Starting with MySQL 5.7 and MariaDB 10.2, some sorting will be skipped when the ordering of the PRIMARY KEY does not change (say, when changing from PRIMARY KEY(a,b,c) to PRIMARY KEY(a,b) or PRIMARY KEY(a,c)). I see that some tests with alter table t1 drop primary key, add primary key(b) are included in the development branch.

            I’d expect that swapping the names of two columns is possible ever since MySQL 5.6 or MariaDB 10.10. MDEV-16290 introduced easier syntax for it. It should be something like ALTER TABLE t RENAME COLUMN a TO b, RENAME COLUMN b TO a; Any DROP as well as the first part of RENAME always refer to "old" column names, so there should be no confusion.

            marko Marko Mäkelä added a comment
            rjasdfiii Rick James added a comment -

            Is it safe to say the following?

            "When multiple changes are allowed in a single ALTER, that will 'always' be as fast or faster than doing the individual Alters separately."

            serg Sergei Golubchik added a comment - - edited

            In the branch preview-10.10-ddl.
            And in bb-10.10-MDEV-16329.


            To Rick James:
            Yes, many ALTER TABLE operations combined into a single statement should always be faster than running them as individual ALTER statements.

            monty Michael Widenius added a comment

            Does this feature work in cases when the table:

            • Has foreign keys and/or triggers?
            • Is written to by other tables' foreign keys or triggers?

            Do you plan, at some point, to implement a way to pause the data copying in case the server slows down too much and resume it later? As far as I understand, as long as the RBR is in place, this shouldn't break anything.

            f_razzoli Federico Razzoli added a comment
            • Has foreign keys and/or triggers — yes, it should work, as far as I understand.
            • Is written to by other tables' foreign keys or triggers — triggers aren't a problem and should work fine; cascading foreign keys are a problem, that's MDEV-29068 and one of the main reasons why this preview feature didn't make it into 10.10.1.
            serg Sergei Golubchik added a comment
            nikitamalyavin Nikita Malyavin added a comment - - edited

            elenst the feature is ready for your assessment. The code rebased on top of 11.0 can be found on the following branch:

            bb-11.0-MDEV-16329-online-alter

            link for the current head: https://github.com/MariaDB/server/commit/57e3333904a6d45077b47bf27808573a723def30


            what branch did you rebase? bb-10.11-oalter has a bunch of commits not in bb-11.0-MDEV-16329-online-alter

            serg Sergei Golubchik added a comment

            I rebased bb-10.11-ddl-nikita. Thanks for reminding me; I will cherry-pick your updates.

            nikitamalyavin Nikita Malyavin added a comment
            nikitamalyavin Nikita Malyavin added a comment - elenst , the new branch for testing is bb-11.0-oalter . Head: https://github.com/MariaDB/server/commit/f9b33ac570337be320f718d52fd88d301a2bc1e7
            nunop Nuno added a comment -

            Hey!
            Good day.

            Just curious – will this allow us to do "ALTER ONLINE TABLE" when there is a virtual generated column in the table?

            Thanks!


            Hello @nuno! Yes, virtual generated columns are supported

            nikitamalyavin Nikita Malyavin added a comment
            elenst Elena Stepanova added a comment - - edited

            Tests performed on bb-11.2-oalter c29ff60b didn't reveal any serious issues. In my opinion, the feature as of this revision can be pushed into 11.2 and released with 11.2.1 RC.


            The simplest scenario which demonstrates benefits of the new development is:

            • one connection performs an ALTER TABLE which requires the COPY algorithm on a sufficiently large table;
            • another connection wants to keep running DML on the table.

            Here is a comparative example of it, executed on a 10M-row InnoDB table with the new feature vs the baseline. One connection starts the ALTER, another starts running point DMLs (single-row updates by PK) and continues doing so until the ALTER ends. After that, the number of successfully updated rows is counted.
            There are no timeouts in this scenario because the duration of the ALTER is below the timeout limits.
            Disclaimer: this is not an official benchmark; the results are only for relative comparison, and the absolute numbers have no value.

            baseline

            alter_duration (sec)    rows_updated
            24.47                   0
            

            Online alter

            alter_duration (sec)    rows_updated
            28.18                   711035
            

            That is, without online alter the table is blocked for the whole duration of the ALTER and no updates are executed. With online alter, at the cost of a small increase in ALTER duration, updates continue to be executed during most of the ALTER.
            The results scale to bigger tables and longer durations.
            In addition to the performance numbers, there is also a qualitative effect: with a longer locking ALTER, DML would start timing out. Online ALTER largely prevents this.

            A user needs to be aware, though, that while the non-locking copy alter should be beneficial in the vast majority of realistic use cases, there are scenarios where it can significantly impact performance. One such scenario known to us is the notorious problem of RBR on tables without a primary key. When a non-locking ALTER is performed on such a table and DML affecting a large number of records runs in parallel, the ALTER can become extremely slow, and further DML can also be affected.
            If such scenarios are possible (and there is no way to improve the schema immediately by adding primary keys to the tables), the ALTER should be performed with an explicit LOCK=SHARED clause. If this is also impossible, the LOCK_ALTER_TABLE_COPY flag should be added to the old_mode variable until the schema can be improved.

            Here is a comparative example of such a scenario, executed on a 5M-row InnoDB table without a PK. One connection starts the ALTER; another one first runs a big UPDATE (10K rows updated at once), and then a series of 500 small updates, 100 rows each.
            Same disclaimer as above applies.

            baseline (seconds)

            alter_duration   first_dml_duration   dml_loop_duration   rows_updated
            13.08            13.09                5.99                60000
            

            online alter (seconds)

            alter_duration   first_dml_duration   dml_loop_duration   rows_updated
            516.15           0.02                 12.32               60000
            

            That is, on the baseline the first big update waits for the ALTER to end, but the ALTER itself is fast enough, and when it ends, the following DML proceeds without obstacles.
            With online alter, the first DML is indeed executed in parallel with the ALTER, but the following DML is affected by concurrency with the ALTER, while the ALTER itself becomes very slow. Besides, the slow ALTER holds a metadata lock, which can cause problems with accessing table definitions, etc.
            This scenario also scales, and in unfortunate circumstances the ALTER can become practically endless.

            elenst Elena Stepanova added a comment - - edited
            bjquinn BJ Quinn added a comment -

            I'm trying to take advantage of this new feature on a very large table that is otherwise very difficult to change (~3TB, ~700 million rows). The feature seems to work as advertised, which is fantastic! But it is insanely slow – it looks like it might take 45 days or so to complete changing a single column from VARCHAR to TEXT. I even tried setting LOCK=EXCLUSIVE, which seems like it would be a bit faster, but not by a whole lot.

            But the strange thing is that my hardware is not at all being stressed. This is a high performance test server that has no other activity on it other than my ALTER. Neither this table nor the other tables on the server are being written to or read from. CPU usage is low, ~10% on the core that's running the ALTER. And disk activity is even lower. Is there something that can be done to force the ALTER to be more aggressive about using available resources so that it completes more quickly?


            LOCK=EXCLUSIVE (or LOCK=SHARED) would pretty much mean the old ALTER TABLE implementation, where concurrent writes are not allowed.

            What does information_schema.PROCESSLIST show in the ALTER TABLE row?
            Particularly interesting are STAGE, MAX_STAGE, PROGRESS columns.
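            For reference, a query along these lines shows those columns (the LIKE filter is just one way to pick out the ALTER):

            ```sql
            SELECT ID, STATE, STAGE, MAX_STAGE, PROGRESS, TIME, INFO
              FROM information_schema.PROCESSLIST
             WHERE INFO LIKE 'ALTER TABLE%';
            ```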

            serg Sergei Golubchik added a comment -
            bjquinn BJ Quinn added a comment - - edited

            Sorry, I should have clarified, I knew that LOCK=EXCLUSIVE would short circuit the online alter functionality, I was just surprised that it also was not pushing the hardware very hard.

            I left it running for about 5 days and it got to State = 'copy to tmp table' and Progress was about 11.

            I killed it and restarted it and it basically immediately gets to 'copy to tmp table' and starts slowly increasing the Progress number. Watching it more closely this time, I notice periods of activity (a single core at 100% and disk activity at about 10-15%) for several seconds and then no CPU or disk activity for several seconds. The length of the active and inactive periods both vary, sometimes 10 seconds, sometimes 60+ seconds. Not sure what would be causing the inactive periods, but I'm assuming no progress is being made during those periods (EDIT: I confirmed that the Progress field continuously increments when the CPU/disk are active and does not increase while CPU/disk are inactive). I'm not sure what else could be bottlenecking the process if both CPU and disk are inactive, unless there's some intentional throttling mechanism intended to not overload the system. I have plenty of CPU and disk I/O, so I'd be happy to push my hardware harder and not have these inactive periods to shorten the time to alter the table.

            I'm just on a test server now, but on the production server I will need the online alter functionality, especially if it takes several hours to days to alter the table, but I was hoping it would be less than ~45 days.

            Thanks for your help!


            bjquinn, I think that the problem that you are highlighting is that ALTER TABLE is single threaded. For the native InnoDB ALTER TABLE, MDEV-16281 has been filed for implementing multi-threaded data loading or index creation. Our version of InnoDB does not natively support any data type conversions (such as INT to BIGINT or CHAR to VARCHAR). Theoretically a VARCHAR to TEXT conversion can be executed as a metadata-only change (no copying needed).

            I do not have any idea how hard it would be to reimplement this cross-engine ALTER TABLE to make use of multiple threads. The current storage engine APIs that it invokes are row-oriented, expected to update all indexes of the table for each row.
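            One way to check whether the server can avoid the copy for a given change is to request a cheaper algorithm explicitly; the statement then fails with an error instead of silently falling back to copying (table and column names are illustrative):

            ```sql
            -- Try the cheapest algorithms first; each fails cleanly if unsupported.
            ALTER TABLE bigtable MODIFY col TEXT, ALGORITHM=INSTANT;
            ALTER TABLE bigtable MODIFY col TEXT, ALGORITHM=NOCOPY;
            ```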

            marko Marko Mäkelä added a comment -
            bjquinn BJ Quinn added a comment -

            Using multiple threads would certainly dramatically reduce the time it takes to complete the ALTER, but what I'm seeing here is that it doesn't even consistently use the single thread that it can use. It uses 100% of that thread for a few seconds, and then goes idle for a period, and keeps cycling between active and idle. The ability to use multiple threads would help, but if it would at least stay constantly active on a single thread, it seems like it would complete much faster than what I'm seeing now. I only have a little data, but I measured the rate at which the progress counter increases while the CPU is active vs how much progress it made in 5 days, and it was a 10:1 ratio.

            stephane@skysql.com VAROQUI Stephane added a comment - - edited

            I guess there is nothing special about your primary key; with no extra load, as Elena commented in her benchmarking, a table with no PK could possibly take an infinite time.

            This 3TB table has to be read first, and I guess you do not have 3TB of memory. How many random read IO/s can your disk produce single-threaded? Is the table fragmented? SAS disks? Hopefully you have a RAID of multiple NVMe drives or a SAN capable of 100K read IO/s. What FS?

            Let's suppose you get 500 read IO/s under the fragmented-table hypothesis; if no IOps are merged by the FS (the ZFS case), reading 16K pages gives about 0.8M/s, which is about 1000*1000*3/3600/24 = 34.7 days just to read the full table.

            For writing, InnoDB can benefit from multiple I/O writers (innodb_write_io_threads) and has a speed limit of innodb_io_capacity, but depending on the size of the indexes and how much fits in memory, InnoDB also has to read again to maintain indexes that cannot fit in memory. So the only solution for maintaining such big tables without keeping the full index size in memory is partitioning: is the table partitioned?

            bjquinn BJ Quinn added a comment -

            Thanks for the feedback!

            PK is simple, it's just a single int field.

            We have 512GB RAM with 360GB allocated to the innodb buffer pool. So yes that is smaller than the table.

            However, we have 10x Solidigm NVMe SSDs in RAID10, so we have lots and lots of available disk I/O. Filesystem is zfs (so it's not really RAID10, it's striped mirrored vdevs). I do not think disk I/O is the bottleneck. Even in the "active" periods, the disks are only 10-15% active.

            A single core gets 100% busy, which seems to be the bottleneck. But this is only during the active periods, which are a fraction of the overall time. The system (disk, CPU, etc.) is usually inactive, 0% active, while the ALTER is running. No progress shown in the progress column during these inactive periods.

            The table is not currently partitioned. I am open to partitioning the table, though I'd imagine I would have to go through the same long ALTER process to get it partitioned in the first place, so it would still be useful to figure out this whole active/inactive period thing.


            binlog_cache_size and binlog_stmt_cache_size are used by this MDEV; it is worth checking their impact.
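            Both variables are dynamic, so they can be tried without a restart (10 MiB here is just an example value):

            ```sql
            SET GLOBAL binlog_cache_size = 10485760;       -- 10 MiB
            SET GLOBAL binlog_stmt_cache_size = 10485760;  -- 10 MiB
            ```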

            stephane@skysql.com VAROQUI Stephane added a comment -
            bjquinn BJ Quinn added a comment -

            Thanks, I tried binlog_cache_size=10485760 and binlog_stmt_cache_size=10485760 (10MB) but that did not seem to have an effect.


            Since the previous comments suggest that the observed slowness is not specific to the online alter (non-online ALTER is similarly slow), I suppose online alter tuning with binlog variables is unlikely to help here.

            elenst Elena Stepanova added a comment -

            I filed MDEV-33087 for the bug that the copy_data_between_tables() phase for InnoDB is not making use of the MDEV-24621 optimization.

            marko Marko Mäkelä added a comment -
            bjquinn BJ Quinn added a comment -

            Thanks! Do you think that's what's causing the alternating active/inactive cycles, or is it something that's affecting the overall efficiency of the ALTER?


            MDEV-33094 was filed for further optimizing the online log application. It is currently writing undo log records inside InnoDB, for no good reason.

            bjquinn, I do not have any idea what could be causing the active/inactive cycles. Would it be possible to collect stack traces of all threads (attach a debugger to the running process) while the system is inactive? (Or just something like http://poormansprofiler.org once per second?) Also, a system profiler like perf or offcputime could be helpful, but the latter is tricky because you’d typically need all code to be compiled with -fno-omit-frame-pointer in order to get meaningful stack traces (because the stack unwinder in the Linux kernel requires frame pointers; see 1234). Back in September, I successfully used offcputime in MDEV-32050 to identify one bottleneck that I was completely unaware of.

            marko Marko Mäkelä added a comment -
            bjquinn BJ Quinn added a comment - - edited

            Thanks Marko. I should be able to set it up to capture the stack traces, this system is not yet in production so I should be able to do whatever is necessary. I'll try to get that to you soon.

            Stéphane also had a good suggestion to test mysql -e 'select * from bigtable' > /dev/null and see if I get a similar active/inactive cycle. I did not, but it does settle at about 65% CPU usage over time after starting at 100% CPU usage. Disks are 10% to 25% busy, so I don't think that's the bottleneck here.

            EDIT: I'm going to be out of town the next couple of weeks so some of this data might be delayed.

            bjquinn BJ Quinn added a comment -

            Marko, attached is the result of poormansprofiler.org, ran once a second while the CPU was idle. Please let me know if this is helpful or you need me to collect this data any differently.

            One thing I noticed that Stephane pointed out was that it got better, at least early on, if innodb_log_file_size was set larger, but as the table I'm testing with is much larger than I can reasonably set innodb_log_file_size to, the issue recurs after a while anyway. But it may be related.

            Thanks!! output.txt

            bjquinn BJ Quinn added a comment -

            Marko, please disregard. This may have ended up being a hardware problem that was affecting multiple identical servers. A firmware bug in our SSDs. In case anyone is interested, here was the problem and apparent solution – https://forum-proxmox-com.translate.goog/threads/nvme-qid-timeout.51579/?_x_tr_sl=de&_x_tr_tl=en&_x_tr_hl=en&_x_tr_pto=sc


            People

              nikitamalyavin Nikita Malyavin
              marko Marko Mäkelä