Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Component: Tests

    Description

      Goal
      Problems to solve
      ... Varying result files
      ...... Solution
      ... Unsupported features
      ...... Solution
      ... Different primitives
      ...... Solution
      ... Filed bugs
      Tuning
      ... Assumptions
      ... Common tuning steps
      ... Examples
      ...... MyISAM
      ...... InnoDB plugin
      ...... MERGE

      Goal

      The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
      The suite is not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test); rather, it performs a relatively quick evaluation of the standard functionality expected from a storage engine.

      Problems to solve

      Existing MTR tests are not very suitable for running on different storage engines.

      Problem 1: Varying result files

      Traditional MTR/mysqltest has very strict requirements regarding test output. Since different storage engines are likely to produce slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

      Example:

      OPTIMIZE TABLE t1;
       
      # One engine might in certain situations say
       
      Table   Op      Msg_type        Msg_text
      test.t1 optimize        status  Table is already up to date
       
      # Another always says
       
      Table   Op      Msg_type        Msg_text
      test.t1 optimize        status  OK

      Neither is wrong, and yet if the result file contains one variant, the test will fail for an engine which produces the other.

      Until recently, the only solution was to copy the whole suite somewhere and recreate the result files, thus introducing usually unwanted code duplication.

      Solution

      To solve this problem, we will use functionality developed in the scope of MDEV-30.
      With it, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes to the original test suite are required.
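
      For illustration, suppose the committed result file of the optimize_table test expects "OK", while our engine reports "Table is already up to date" (the example above). The rdiff stored in the engine folder would then look roughly like this (the line positions in the hunk header are illustrative):

      --- suite/storage_engine/optimize_table.result
      +++ suite/storage_engine/optimize_table.reject
      @@ -4,2 +4,2 @@
       Table   Op      Msg_type        Msg_text
      -test.t1 optimize        status  OK
      +test.t1 optimize        status  Table is already up to date

      MTR applies such a patch to the original result on the fly when comparing outputs, so the original suite stays untouched.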

      Problem 2: Unsupported features

      Most tests use combinations of various features, even if not all of them are strictly necessary for the test in question. This means that if an engine does not support even one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail – not just produce a result mismatch which we could patch, but actually fail in the middle of execution – so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even briefly for an unimportant part of the test.

      This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of result files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

      Solution

      We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, failures will surface as result mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be recorded in the result diff file.

      Example:

      We are testing that a table option is supported by the engine (merely accepted and stored); let's say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER, so we add an ALTER TABLE statement which modifies the option.

      Now, if we try to run such a test with the FEDERATED engine, it will break on ALTER – not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then whoever tunes the suite for the FEDERATED engine can see the difference, decide that it is reasonable since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to the functionality the engine supports.

      The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. because a feature is often unsupported), the test will check the result of the statement and produce a verbose error message, including possible reasons for the failure. Also, if a failure happens on table creation or at some other key moment, so that a part of the test becomes useless (if a table was not created, there is no point in trying to alter it, etc.), the test will skip that part of the flow and proceed to the next part. But this does not always happen; the checks are only performed when the probability of a failure is reasonably high.
      Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

      Additionally, some tests will check the value of the default index (see the notes about configuration below), and if it is unset, such tests will be skipped completely, because every part of them requires indexes (e.g. index.test).
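
      Such a guard is a few lines of plain mysqltest; a minimal sketch (the variable name $default_index is an assumption here, the actual suite may use a different one):

      if (!$default_index)
      {
        --skip Test requires indexes which are not configured for this engine
      }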

      For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with indexes, it will not be skipped, because we assume that the information in the ENGINES table might be inaccurate for a new engine.
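
      The information in question can be checked manually, e.g.:

      SELECT TRANSACTIONS, XA, SAVEPOINTS FROM INFORMATION_SCHEMA.ENGINES WHERE ENGINE = 'MyISAM';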

      Problem 3: Different primitives

      Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
      CREATE TABLE t (i INT)
      it will fail for the CSV engine, because CSV does not allow NULL-able columns. Or an engine might require certain table options, like a connection string for the FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.
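
      For reference, a statement that CSV does accept differs only in the column options – which is exactly the kind of adjustment the solution below automates:

      CREATE TABLE t (i INT NOT NULL) ENGINE=CSV;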

      In an even more complicated scenario, additional actions need to be performed in order to create a table properly; e.g. to create a functional MERGE table, we also need to create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.
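
      A minimal functional MERGE setup already takes two statements (the table names here are illustrative), and any later ALTER on t has to be repeated on t_child:

      CREATE TABLE t_child (i INT) ENGINE=MyISAM;
      CREATE TABLE t (i INT) ENGINE=MRG_MYISAM UNION=(t_child) INSERT_METHOD=LAST;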

      Solution

      We will provide the engine maintainer with several tools to tune the suite for their engine.

      • Some variables can be set to configure the basic test behavior (see the sketch after this list):
        • the engine name (to be used in CREATE TABLE and be masked in SHOW CREATE TABLE);
        • default column options (when any are required, e.g. NOT NULL for CSV);
        • default table options (when any are required, e.g. connection for FEDERATED);
        • default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever is supported, or a special index);
        • default types (int type, char type, in case standard ones are not supported);
      • if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);
      • if an engine requires non-standard procedures for table creation or modification, they can be tuned in the include files which the tests call to perform these actions (CREATE TABLE and ALTER TABLE are not run directly in the tests, only via the include files, so they are configurable);
      • the engine can have its own set of disabled tests, so that there is no need to provide only selected test names on the MTR command line – the whole suite can be run, and unnecessary tests can be disabled through the list;
      • server options can be configured;
      • non-default combinations can be configured for the engine;
      • subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.
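
      For illustration, an overlay define_engine.inc for a FEDERATED-like engine might contain lines such as these ($ENGINE and $default_tbl_opts are the variable names used elsewhere in this document; the values are assumptions for the sketch):

      let $ENGINE = FEDERATED;
      # The engine needs a connection string in every table definition
      let $default_tbl_opts = CONNECTION='mysql://root@127.0.0.1:3306/test/t1';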

      For more details, see the 'Tuning' section.

      Bugs filed while working on the suite:

      LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
      LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
      LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
      LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
      LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
      LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
      MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
      MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
      MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
      MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
      MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
      MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
      MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
      MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
      MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
      MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
      MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
      MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

      The list might be incomplete.

      Tuning

      Please note that the test suite is synthetic: it contains tests for a set of features which, as a whole, is not supported by any single known engine. The same is true for the result files – they are taken from different engines, depending on which one produces the most sensible result, and in rare cases are even artificial. This means that no engine will pass the whole suite without any tuning – not even MyISAM, which contributed to the result files the most.

      Assumptions

      We presume that the tests are set up on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

      The storage engine code is located in the <basedir>/storage/<ourengine> folder.

      The storage_engine test suite is located in the <basedir>/mysql-test/suite/storage_engine folder and contains subsuites in subfolders of the corresponding names (currently <basedir>/mysql-test/suite/storage_engine/parts and <basedir>/mysql-test/suite/storage_engine/trx).
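
      That is, the relevant layout is as follows (the last folder is the overlay we are about to create):

      <basedir>/mysql-test/suite/storage_engine/          # the main suite
      <basedir>/mysql-test/suite/storage_engine/parts/    # 'partitions' subsuite
      <basedir>/mysql-test/suite/storage_engine/trx/      # 'transactions' subsuite
      <basedir>/storage/ourengine/mysql-test/storage_engine/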

      Common tuning steps

      1. Create the <basedir>/storage/ourengine/mysql-test/storage_engine folder.

      2. Copy <basedir>/mysql-test/suite/storage_engine/define_engine.inc to <basedir>/storage/ourengine/mysql-test/storage_engine and edit the file (at the very least, set the ENGINE variable; check the other variables you find there and modify them as needed).

      3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create <basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt file and add the options there.

      4. If you know in advance that the engine requires additional steps before a test, add them at the end of define_engine.inc.

      5. If you created any SQL objects in define_engine.inc, create file <basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc, or copy a stub from <basedir>/mysql-test/suite/storage_engine, and add the logic to drop the objects.

      6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy <basedir>/mysql-test/suite/storage_engine/create_table.inc and <basedir>/mysql-test/suite/storage_engine/alter_table.inc into <basedir>/storage/ourengine/mysql-test/storage_engine/ and modify them as needed.

      7. Try to run the 1st test:
      perl ./mtr --suite=storage_engine-ourengine 1st

      8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

      9. If the difference is expected, create an rdiff file:
      diff -u <basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff

      10. When the 1st test passes, run the whole suite:

      perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0

      11. Analyze failures, modify parameters or include files as needed, create rdiff files.

      12. If any test requires specific non-standard server/engine options, create files <testname>.opt in <basedir>/storage/ourengine/mysql-test/storage_engine.

      13. If any tests have to be skipped, add them to <basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def (see the format example after this list).

      14. When you are satisfied with the results of the storage_engine suite, proceed to the subsuites. If you are interested in running partitioning tests, create the folder <basedir>/storage/ourengine/mysql-test/storage_engine/parts.

      15. Repeat step 3, if needed, only now create suite.opt in <basedir>/storage/ourengine/mysql-test/storage_engine/parts.

      16. Run the subsuite as perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0

      17. Repeat steps 11-13, only now create files in <basedir>/storage/ourengine/mysql-test/storage_engine/parts

      18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

      19. To execute the whole set of tests, run perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine
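
      For step 13, each line of disabled.def has the form <testname> : <reason>, for example (the reason text is free-form):

      alter_tablespace : Engine does not support tablespace operations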

      Examples

      Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. The exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more "different" a storage engine is, the trickier the task.

      Easy level: MyISAM

      Let's see how to make the suite work for a relatively standard engine – one whose behavior is similar to the main MySQL engines.
      We will create an overlay for MyISAM.
      Note: "overlay" is a term introduced by MDEV-30; it basically means a test suite or set of suites adapted for a certain engine.

      cd <basedir>/mysql-test
      mkdir -p ../storage/myisam/mysql-test/storage_engine
      cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/

      Edit the copied version of define_engine.inc to set ENGINE to MyISAM:

      @@ -8,7 +8,7 @@
       # The name of the engine under test must be defined in $ENGINE variable.
       # You can set it either here (uncomment and edit) or in your environment.
       #
      -# let $ENGINE =;
      +let $ENGINE = MyISAM;
       #
       ################################
       #

      All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes and the INT and CHAR types. These are all defaults (you can see them in define_engine.inc).

      Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

      So, now we can try to run the 1st test:

      perl ./mtr --suite=storage_engine-myisam 1st
       
      ...
       
      storage_engine-myisam.1st        [ pass ]     20

      The first test passed. Okay, now we can run the whole suite. Some tests will fail – this is expected; we need to see the results
      so we can decide whether to accept the difference, disable the test, or patch the code.
      So, we will run it with --force and --max-test-fail=0 to see everything at once (you might also want to redirect the output to a file, because you will need it):

       
      perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam
       
      ...
       
      Spent 42.193 of 64 seconds executing testcases
       
      Completed: Failed 7/99 tests, 92.93% were successful.
       
      Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union

      Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast; for other engines it might take longer.

      7 failing tests on the first iteration is extremely good; of course, it won't be that bright with other engines. But it's MyISAM – what did you expect...

      (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

      Now it's time to analyze results.

      Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches – that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit is that if your engine does not support everything, the tests are still usable; you just need to approve the difference in the results (by creating an rdiff file).

      Of course, it also means that the tests produce more noise than usual – e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and let you exercise the rest of the logic, but will also produce a bunch of mismatches due to failing statements. The internal logic in the tests does its best to keep things cleaner, but still, some noise is expected.

      So, with this knowledge, let's find the failures and go through them one by one.
      Tip: if you saved the output to a file, failures can easily be found by searching for the string ' fail ' (without the quote marks).
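
      For example, if the output was saved to /tmp/se_myisam.out (the path is illustrative):

      grep ' fail ' /tmp/se_myisam.out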

      The first failing test is alter_tablespace. Well, naturally – no tablespaces for MyISAM. But let's look at the output.

      The mismatch shows that some expected output is missing, and instead the test produces this:

      +ERROR HY000: Table storage engine for 't1' doesn't have this option
      +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
      +# ------------ UNEXPECTED RESULT ------------
      +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
      +# The statement|command finished with ER_ILLEGAL_HA.
      +# Tablespace operations or the syntax or the mix could be unsupported. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

      Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping the tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server and that the behavior has not unexpectedly changed.

      Creating an rdiff file is simple:

      diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff

      Next failing test: check_table.

      Its difference is simple:

       INSERT INTO t1 (a,b) VALUES (6,'f');
       CHECK TABLE t1 FAST;
       Table  Op      Msg_type        Msg_text
      -test.t1        check   status  OK
      +test.t1        check   status  Table is already up to date
       INSERT INTO t1 (a,b) VALUES (7,'g');
       INSERT INTO t2 (a,b) VALUES (8,'h');
       CHECK TABLE t2, t1 MEDIUM;
      @@ -52,7 +52,7 @@
       INSERT INTO t1 (a) VALUES (17),(120),(132);
       CHECK TABLE t1 FAST;
       Table  Op      Msg_type        Msg_text
      -test.t1        check   status  OK
      +test.t1        check   status  Table is already up to date
       INSERT INTO t1 (a) VALUES (801),(900),(7714);
       CHECK TABLE t1 MEDIUM;
       Table  Op      Msg_type        Msg_text

      No harm if the engine realizes that the table is up to date and says so; adding a diff.

      diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff

      Next failing test: foreign_keys

      Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.

      Next failing test: index_type_hash

      It produces mismatches where HASH type is replaced by BTREE type:

       SHOW KEYS IN t1;
       Table  Non_unique      Key_name        Seq_in_index    Column_name     Collation       Cardinality     Sub_part        Packed  Null    Index_type      Comment Index_comment
      -t1     1       a       1       a       #       #       NULL    NULL    #       HASH            
      +t1     1       a       1       a       #       #       NULL    NULL    #       BTREE           

      Since MyISAM doesn't support the HASH index type, this is fine. Adding a diff.

      Next failing test: show_engine

      This test is optimistic and expects that the SHOW ENGINE <engine name> STATUS command returns something – that is, the test obfuscates the output but expects it to contain a row. For MyISAM this is not the case, hence the diff looks like a missing row:

      @@ -4,7 +4,6 @@
       SHOW ENGINE <STORAGE_ENGINE> STATUS;
       Type   Name    Status
      -<STORAGE_ENGINE>               ### Engine status, can be long and changeable ###
       # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,

      Adding a diff.

      Next failing tests: tbl_opt_insert_method, tbl_opt_union

      MyISAM does not use the table option INSERT_METHOD – it's a MERGE engine thing. But table creation does not normally fail on unsupported options; they are simply ignored. That's what we see here:

      @@ -5,7 +5,7 @@
       t1     CREATE TABLE `t1` (
         `a` int(11) DEFAULT NULL,
         `b` char(8) DEFAULT NULL
      -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
      +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1

      SHOW CREATE TABLE does not show the option. This is fine; adding an rdiff. Exactly the same for UNION.

      These were all 7 failures.

      Now it's time to take care of subsuites. Currently there are two of them: parts (stands for 'partitions'), and trx (stands for 'transactions').

      MyISAM definitely supports partitioning, so let's try that subsuite first.

      (we are still in <basedir>/mysql-test)

      mkdir ../storage/myisam/mysql-test/storage_engine/parts

      This tells MTR that we want to run the storage_engine/parts subsuite for our engine.

      No additional parameters or tricks should be needed for MyISAM to run partitioning tests, since the suite already sets the --partition option. So, we'll just run it:

       
      perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam
       
      ... 
       
      Spent 1.168 of 5 seconds executing testcases
       
      Completed: All 8 tests were successful.

      Note: For now it is a very basic suite; it only contains a few tests and takes literally a few seconds.

      All good, tests passed, nothing needs to be done here.

      Finally, there is the transactions subsuite (it also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA, or snapshots, so the results won't be this pretty. Maybe we don't even need to run it, but why not try.

      mkdir ../storage/myisam/mysql-test/storage_engine/trx

       
      perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam
       
      Spent 0.000 of 10 seconds executing testcases
       
      Completed: Failed 13/13 tests, 0.00% were successful.
       
      Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery

      The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and XA):

      +# -- WARNING ----------------------------------------------------------------
      +# According to I_S.ENGINES, MyISAM does not support transactions.
      +# If it is true, the test will most likely fail; you can 
      +# either create an rdiff file, or add the test to disabled.def.
      +# If transactions should be supported, check the data in Information Schema.
      +# ---------------------------------------------------------------------------

      The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to the maintainer whether to add diffs for all tests or just ignore the whole thing since transactions are not supported. If you choose to ignore it, simply delete the newly created ../storage/<your_engine>/mysql-test/storage_engine/trx, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

      diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
      ...
      diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff

      Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems.

      Now we can run everything together:

       
      perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam
       
      ...
       
      Spent 46.249 of 70 seconds executing testcases
       
      Completed: All 120 tests were successful.

      That's all. Now just keep it free of failures.

      Intermediate level: InnoDB plugin

      A little bit more work is required to create an overlay for InnoDB. Let's try to do it for the InnoDB plugin (which is not loaded by default as of 5.5.25, but is built there).

      Again, start with creating the overlay directory:

      mkdir -p ../storage/innobase/mysql-test/storage_engine
      cp suite/storage_engine/define_engine.inc ../storage/innobase/mysql-test/storage_engine/
      Edit ../storage/innobase/mysql-test/storage_engine/define_engine.inc:

      @@ -8,7 +8,7 @@
       # The name of the engine under test must be defined in $ENGINE variable.
       # You can set it either here (uncomment and edit) or in your environment.
       #
      -# let $ENGINE =;
      +let $ENGINE = InnoDB;
       #
       ################################
       #

      As with MyISAM, all defaults are fine for InnoDB. But now we also need server startup options to run the server with the InnoDB plugin.

      Create the file ../storage/innobase/mysql-test/storage_engine/suite.opt:

      --ignore-builtin-innodb
      --plugin-load=ha_innodb
      --innodb

      That should be enough for the base suite. Let's run the 1st test now:

       
      perl ./mtr --suite=storage_engine-innobase 1st
       
      ...
       
      storage_engine-innobase.1st        [ pass ]    852

      And then the whole suite:

       
      perl ./mtr --suite=storage_engine-innobase --max-test-fail=0 --force
       
      ...
       
      Spent 153.712 of 402 seconds executing testcases
       
      Completed: Failed 28/99 tests, 71.72% were successful.
       
      Failing test(s): storage_engine-innobase.alter_table_online storage_engine-innobase.alter_tablespace storage_engine-innobase.autoinc_secondary storage_engine-innobase.autoinc_vars storage_engine-innobase.cache_index storage_engine-innobase.checksum_table_live storage_engine-innobase.delete_low_prio storage_engine-innobase.fulltext_search storage_engine-innobase.index_enable_disable storage_engine-innobase.index_type_hash storage_engine-innobase.insert_delayed storage_engine-innobase.insert_high_prio storage_engine-innobase.insert_low_prio storage_engine-innobase.lock_concurrent storage_engine-innobase.optimize_table storage_engine-innobase.repair_table storage_engine-innobase.select_high_prio storage_engine-innobase.tbl_opt_ai storage_engine-innobase.tbl_opt_data_index_dir storage_engine-innobase.tbl_opt_insert_method storage_engine-innobase.tbl_opt_key_block_size storage_engine-innobase.tbl_opt_row_format storage_engine-innobase.tbl_opt_union storage_engine-innobase.type_char_indexes storage_engine-innobase.type_float_indexes storage_engine-innobase.type_spatial_indexes storage_engine-innobase.update_low_prio storage_engine-innobase.vcol

      Not as great as it was with MyISAM. Let's see the details.

      Some mismatches are identical or similar to those for MyISAM, caused by unsupported functionality (e.g. fulltext search, hash indexes, optimize_table, etc.). I won't go through them here; I will just add rdiff files.

      But some deserve attention.

      alter_table_online:

       ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
      +ERROR HY000: Can't execute the given 'ALTER' command as online
      +# ERROR: Statement ended with errno 1915, errname ER_CANT_DO_ONLINE (expected to succeed)
      +# ------------ UNEXPECTED RESULT ------------
      +# The statement|command finished with ER_CANT_DO_ONLINE.
      +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

      It's hard to say whether all engines that support ALTER ONLINE should support it for the same set of changes; most likely not, and what we see here is just an InnoDB limitation. On the other hand, we know that MariaDB supports ALTER ONLINE, including renaming a column (see http://kb.askmonty.org/en/alter-table), and InnoDB supports at least some ALTER ONLINE operations (e.g. CHANGE COLUMN i i INT DEFAULT 1 works); so I think it's worth filing it as a low-priority bug, at least to make sure it works as expected: https://mariadb.atlassian.net/browse/MDEV-397

      For now, I will add the test to the ../storage/innobase/mysql-test/storage_engine/disabled.def list (the file needs to be created, since it's the first test we disable for this engine):

      alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)

      If later it turns out to be expected behavior or limitation, I will remove the line from disabled.def, and will instead add an rdiff file.

      alter_tablespace:

      +# ERROR: Statement ended with errno 1030, errname ER_GET_ERRNO (expected to succeed)
      +# ------------ UNEXPECTED RESULT ------------
      +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
      +# The statement|command finished with ER_GET_ERRNO.
      +# Tablespace operations or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

      Now, that seems unexpected. But then again, tablespace operations are only applicable when InnoDB works in innodb-file-per-table mode, which we did not set in our options. Since we don't want to use it for all tests, let's set it for this one only:

      ../storage/innobase/mysql-test/storage_engine/alter_tablespace.opt

      --innodb-file-per-table=1

      autoinc_vars:

       INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
       SELECT LAST_INSERT_ID();
       LAST_INSERT_ID()
      -850
      +1100
       SELECT * FROM t1;
       a      b
       1      a
      +1100   g
      +1150   h
      +1200   i
       2      b
       200    d
       3      c
       500    e
       800    f
      -850    g
      -900    h
      -950    i
       DROP TABLE t1;
       SET auto_increment_increment = 500;
       SET auto_increment_offset = 300;

      This is weird. Now the real investigation starts – this is a good reason to look at the reject file to see the continuous flow:

      ...
       
      SET auto_increment_increment = 300;
      INSERT INTO t1 (a,b) VALUES (NULL,'d'),(NULL,'e'),(NULL,'f');
      SELECT LAST_INSERT_ID();
      LAST_INSERT_ID()
      200
      SELECT * FROM t1;
      a       b
      1       a
      2       b
      200     d
      3       c
      500     e
      800     f
      SET auto_increment_increment = 50;
      INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
      SELECT LAST_INSERT_ID();
      LAST_INSERT_ID()
      1100
      SELECT * FROM t1;
      a       b
      1       a
      1100    g
      1150    h
      1200    i
      2       b
      200     d
      3       c
      500     e
      800     f
      DROP TABLE t1;

      The first insert works all right with auto_increment_increment = 300. Then we change it to 50, but the following insert still uses 300 for the first value it inserts, and only then switches to 50. Thus we get 1100 (800 + 300) instead of 850 (800 + 50), and the following values also differ. This smells like a bug, although not a very serious one. Since a brief check shows it is also reproducible on Oracle MySQL, we will file it on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65225 (I actually did that some time ago, when I first tried to run the storage engine suite for InnoDB, which is why the bug is not brand new).

      And we will also add the test to ../storage/innobase/mysql-test/storage_engine/disabled.def:

      alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
      autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)

      delete_low_prio, insert_high_prio, insert_low_prio, select_high_prio, update_low_prio:

      They all have similar fragments in their output:

      +# Timeout in include/wait_show_condition.inc for = 'DELETE FROM t1'
      +#         show_statement : SHOW PROCESSLIST
      +#         field          : Info
      +#         condition      : = 'DELETE FROM t1'
      +#         max_run_time   : 3
      +# ------------ UNEXPECTED RESULT ------------
      +# The statement|command finished with timeout in wait_show_condition.inc.
      +# DELETE or table locking or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

      As the documentation says, the high/low priority functionality (e.g. DELETE LOW_PRIORITY) only works with table-level locking, and the whole test is based on this assumption. InnoDB uses row-level locking, so the entire flow does not work quite as expected. We could still add rdiff files, but, unlike most other tests, these take relatively long (probably over 10 seconds each). Besides, since locking works entirely differently here, the test results are likely to be unstable, as it is all about timing. So, it makes more sense to disable the tests by adding them to ../storage/innobase/mysql-test/storage_engine/disabled.def:

      alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
      autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
      delete_low_prio : InnoDB does not use table-level locking
      insert_high_prio : InnoDB does not use table-level locking
      insert_low_prio : InnoDB does not use table-level locking
      select_high_prio : InnoDB does not use table-level locking
      update_low_prio : InnoDB does not use table-level locking

      tbl_opt_ai:

       Table  Create Table
       t1     CREATE TABLE `t1` (
         `a` int(11) DEFAULT NULL
      -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
      +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
       ALTER TABLE t1 AUTO_INCREMENT=100;
       SHOW CREATE TABLE t1;
       Table  Create Table
       t1     CREATE TABLE `t1` (
         `a` int(11) DEFAULT NULL
      -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=100 DEFAULT CHARSET=latin1
      +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
       DROP TABLE t1;

      We already looked at ignored table options in the MyISAM tests, but this one is different. Why would AUTO_INCREMENT be ignored? It should be supported by InnoDB (a brief manual check confirms it). Some digging shows, however, that in our case it is truly ignored. It is reproducible with Oracle MySQL, so we file a bug on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65901

      Adding the test to ../storage/innobase/mysql-test/storage_engine/disabled.def:

      alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
      autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
      delete_low_prio : InnoDB does not use table-level locking
      insert_high_prio : InnoDB does not use table-level locking
      insert_low_prio : InnoDB does not use table-level locking
      select_high_prio : InnoDB does not use table-level locking
      update_low_prio : InnoDB does not use table-level locking
      tbl_opt_ai : MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

      tbl_opt_key_block_size, tbl_opt_row_format:

       CREATE TABLE t1 (a <INT_COLUMN>, b <CHAR_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> KEY_BLOCK_SIZE=8;
      +Warnings:
      +Warning        1478    InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
      +Warning        1478    InnoDB: KEY_BLOCK_SIZE requires innodb_file_format > Antelope.
      +Warning        1478    InnoDB: ignoring KEY_BLOCK_SIZE=8.

      Doing the same as we did for alter_tablespace, only now adding both innodb_file_per_table and innodb_file_format:

      ../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt:

      --innodb-file-per-table=1
      --innodb-file-format=Barracuda
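
      The tbl_opt_row_format test presumably needs the same two options (its warnings are of the same nature), so an identical ../storage/innobase/mysql-test/storage_engine/tbl_opt_row_format.opt can be created as well.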

      type_char_indexes:

       SET SESSION optimizer_switch = 'engine_condition_pushdown=on';
       EXPLAIN SELECT * FROM t1 WHERE c > 'a';
       id     select_type     table   type    possible_keys   key     key_len ref     rows    Extra
      -#      #       #       range   c_v     c_v     #       #       #       Using index condition
      +#      #       #       range   c_v     c_v     #       #       #       Using where
       SELECT * FROM t1 WHERE c > 'a';
       c      c20     v16     v128
       b      char3   varchar1a       varchar1b
      @@ -135,7 +135,7 @@
       r3a
       EXPLAIN SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
       id     select_type     table   type    possible_keys   key     key_len ref     rows    Extra
      -#      #       #       range   #       v16     #       #       #       #
      +#      #       #       ALL     #       NULL    #       #       #       #
       SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
       c      c20     v16     v128
       a      char1   varchar1a       varchar1b

      Note: For now we assume that within one engine, statistics are stable enough to produce consistent results on each test run, which is why we show certain fields in the EXPLAIN output – to let you decide whether you are satisfied with them or not. If further experience shows that these tests routinely produce different results even for the same engine, and that more often than not it's valid behavior, we might change this.

      For now, I will consider these results acceptable, and will add rdiff.

      As I said before, the rest of the failures do not deserve verbose analysis; they are pretty straightforward, and I just added an rdiff for each of them.

      Now let's work on storage_engine/parts and storage_engine/trx.

      mkdir ../storage/innobase/mysql-test/storage_engine/trx
      mkdir ../storage/innobase/mysql-test/storage_engine/parts

      Copy your previously created suite.opt file to each of the subfolders: as far as MTR is concerned, they are separate suites.

      cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/trx/
      cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/parts/

      Maybe you'll want to add something else to those options. I, for one, will add --innodb-lock-wait-timeout=1 to ../storage/innobase/mysql-test/storage_engine/trx/suite.opt. Probably it should have been done for the other suites, too – but it's never too late if any timeout issues are observed.
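
      With that addition, ../storage/innobase/mysql-test/storage_engine/trx/suite.opt becomes:

      --ignore-builtin-innodb
      --plugin-load=ha_innodb
      --innodb
      --innodb-lock-wait-timeout=1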

      When you add rdiff files for subsuites, don't forget to put them in the subfolders:

      diff -u suite/storage_engine/parts/checksum_table.result suite/storage_engine/parts/checksum_table.reject > ../storage/innobase/mysql-test/storage_engine/parts/checksum_table.rdiff
      etc.

      Again, most failures are mismatches due to different output or unsupported functionality.
      Note: repair_table test results are likely to differ even if repair is supported, since the test tries to corrupt existing table files, which are different for each engine.

      trx/cons_snapshot_serializable:

       # If consistent read works on this isolation level (SERIALIZABLE), the following SELECT should not return the value we inserted (1)
       SELECT * FROM t1;
       a
      +1
       COMMIT;

      It is a bug. Filing it as http://bugs.mysql.com/bug.php?id=65146 and adding it to disabled.def (don't forget that it should be under the trx folder now):
      ../storage/innobase/mysql-test/storage_engine/trx/disabled.def:

      cons_snapshot_serializable : MySQL:65146 (CONSISTENT SNAPSHOT does not work with SERIALIZABLE)

      Now, running the whole set:

      perl ./mtr --suite=storage_engine-innobase,storage_engine/*-innobase 
       
      ...
       
      Spent 300.715 of 364 seconds executing testcases
       
      Completed: All 111 tests were successful.

      Much slower than for MyISAM, but that's how it usually is.

      Advanced level: MERGE

      Yet more tricks are required to tune the same suite for the MERGE engine, because now we also have to think about how a table is created.
      We can't just create a plain MERGE table and work with it; it needs at least one underlying table; and if we alter the MERGE table, the underlying tables need to be altered accordingly, otherwise the MERGE table becomes non-functional.

      Start the same way as we started for other engines, by creating the overlay folder:

      mkdir -p ../storage/myisammrg/mysql-test/storage_engine
      cp suite/storage_engine/define_engine.inc ../storage/myisammrg/mysql-test/storage_engine/

      We know that we'll need INSERT_METHOD and UNION in our table options; under other circumstances, they would have been added to $default_tbl_opts; but we cannot set a global UNION, because it will contain different underlying tables for different test tables, and since we will be modifying the creation procedure anyway, there is no point in adding INSERT_METHOD here, either.

      @@ -8,7 +8,7 @@
       # The name of the engine under test must be defined in $ENGINE variable.
       # You can set it either here (uncomment and edit) or in your environment.
       #
      -# let $ENGINE =;
      +let $ENGINE = MRG_MYISAM;
       #
       ################################
       #

      What happens if we now run the 1st test as we did before?

      perl ./mtr --suite=storage_engine-myisammrg 1st

       SHOW COLUMNS IN t1;
       INSERT INTO t1 VALUES (1,'a');
      +ERROR HY000: Table 't1' is read only
      +# ------------ UNEXPECTED RESULT ------------
      +# The statement|command finished with ER_OPEN_AS_READONLY.
      +# INSERT INTO .. VALUES or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

      That's because we don't have underlying tables under the MERGE table. We need to modify the table creation procedure.
      First, we need to decide how to do it. There are many ways; I will choose what I think is a simple one (see the sketch after the list):

      • before each test, I will create a special mrg schema, which will contain underlying tables, so I don't need to remember all the names when it's time to cleanup;
      • at the end of the test, I will drop the mrg schema, and thus get rid of all additional objects at once;
      • whenever a new test table has to be created, I will create a MyISAM table with the same name in mrg schema, and will point my test table at it;
      • whenever a test table has to be altered, I will also alter the MyISAM table with the same name in mrg schema.
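
      Put together, for a test table t1 this scheme produces statements along these lines (a sketch; the column definitions are illustrative):

      CREATE DATABASE mrg;                                  # before the test (define_engine.inc)
      CREATE TABLE mrg.t1 (a INT, b CHAR(8)) ENGINE=MyISAM;
      CREATE TABLE t1 (a INT, b CHAR(8)) ENGINE=MRG_MYISAM UNION(mrg.t1) INSERT_METHOD=LAST;
      ALTER TABLE t1 ADD COLUMN c CHAR(8);                  # the test alters t1...
      ALTER TABLE mrg.t1 ADD COLUMN c CHAR(8);              # ...and the underlying table follows
      DROP DATABASE mrg;                                    # after the test (cleanup_engine.inc)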

      In order to achieve this, we need to override 3 files and modify our already created ../storage/myisammrg/mysql-test/storage_engine/define_engine.inc. Let's start with the latter.

      define_engine.inc is the include file which is executed before each test. So, it's the place to put the logic which precedes a test.
      At the end of ../storage/myisammrg/mysql-test/storage_engine/define_engine.inc I will add the mrg schema creation:

      @@ -40,6 +40,10 @@
       # Here you can place your custom MTR code which needs to be executed before each test,
       # e.g. creation of an additional schema or table, etc.
       # The cleanup part should be defined in cleanup_engine.inc
      +--disable_warnings
      +DROP DATABASE IF EXISTS mrg;
      +--enable_warnings
      +CREATE DATABASE mrg;

      Now it's time to copy the 3 files we are going to override:

      cp suite/storage_engine/cleanup_engine.inc ../storage/myisammrg/mysql-test/storage_engine/
      cp suite/storage_engine/create_table.inc ../storage/myisammrg/mysql-test/storage_engine/
      cp suite/storage_engine/alter_table.inc ../storage/myisammrg/mysql-test/storage_engine/

      cleanup_engine.inc is the file which is executed after each test; so, in ../storage/myisammrg/mysql-test/storage_engine/cleanup_engine.inc I will be dropping my mrg schema:

      @@ -8,4 +8,9 @@
       # Here you can add whatever is needed to cleanup 
       # in case your define_engine.inc created any artefacts,
       # e.g. an additional schema and/or tables.
      +--disable_query_log
      +--disable_warnings
      +DROP DATABASE IF EXISTS mrg;
      +--enable_warnings
      +--enable_query_log

      Now, the actual table creation.
      Tests do not run CREATE TABLE / ALTER TABLE statements directly; they always call create_table.inc or alter_table.inc, respectively. So, if we edit these properly, it will affect all tests at once – the gain is worth spending some effort on.

      Below I will show the changes I made; in fact, there are many ways to achieve the same goal, some of them probably more efficient. Be creative when the time comes.

      --- suite/storage_engine/create_table.inc	2012-07-15 17:46:03.638461728 +0400
      +++ ../storage/myisammrg/mysql-test/storage_engine/create_table.inc	2012-07-15 22:08:29.324511647 +0400
      @@ -54,6 +54,15 @@
         --let $table_name = t1
       }
       
      +# Child statement is a statement that will create an underlying table.
      +# From this point, it will deviate from the main statement, that's why
      +# we start creating it here in parallel with the main one.
      +# For underlying tables, we will create a table in mrg schema, e.g. 
      +# for table t1 the underlying table will be mrg.t1, etc.
      +# Since we will only create one child here, it should be enough. If we want more,
      +# we can always add a suffix, e.g. mrg.t1_child1, mrg.t1_child2, etc.
      +
      +--let $child_statement = $create_statement mrg.$table_name
       --let $create_statement = $create_statement $table_name
       
       if (!$create_definition)
      @@ -70,6 +79,9 @@
       if ($create_definition)
       {
         --let $create_statement = $create_statement ($create_definition)
      +  # Table definition for the underlying table should be the same
      +  # as for the MERGE table
      +  --let $child_statement = $child_statement ($create_definition)
       }
       
       # If $default_engine is set, we will rely on the default storage engine
      @@ -78,6 +90,12 @@
       {
         --let $create_statement = $create_statement ENGINE=$storage_engine
       }
      +# Engine for an underlying table differs
      +--let $child_statement = $child_statement ENGINE=MyISAM
      +
      +# Save default table options, we will want to restore them later
      +--let $default_tbl_opts_saved = $default_tbl_opts
      +--let $default_tbl_opts = $default_tbl_opts UNION(mrg.$table_name) INSERT_METHOD=LAST
       
       # Default table options from define_engine.inc
       --let $create_statement = $create_statement $default_tbl_opts
      @@ -86,6 +104,7 @@
       if ($table_options)
       {
         --let $create_statement = $create_statement $table_options
      +  --let $child_statement = $child_statement $table_options
       }
       
       # The difference between $extra_tbl_opts and $table_options
      @@ -98,16 +117,19 @@
       if ($extra_tbl_opts)
       {
         --let $create_statement = $create_statement $extra_tbl_opts
      +  --let $child_statement = $child_statement $extra_tbl_opts
       }
       
       if ($as_select)
       {
         --let $create_statement = $create_statement AS $as_select
      +  --let $child_statement = $child_statement AS $as_select
       }
       
       if ($partition_options)
       {
         --let $create_statement = $create_statement $partition_options
      +  --let $child_statement = $child_statement $partition_options
       }
       
       # We now have the complete CREATE statement in $create_statement.
      @@ -120,6 +142,12 @@
       # Surround it by --disable_query_log/--enable_query_log
       # if you don't want it to appear in the result output.
       #####################
      +--disable_warnings
      +--disable_query_log
      +eval DROP TABLE IF EXISTS mrg.$table_name;
      +eval $child_statement;
      +--enable_query_log
      +--enable_warnings
       
       if ($disable_query_log)
       {
      @@ -166,6 +194,10 @@
       --let $temporary = 0
       --let $disable_query_log = 0
       
      +# Restore default table options now
      +--let $default_tbl_opts = $default_tbl_opts_saved
      +
      +
       # Restore the error codes of the main statement
       --let $mysql_errno = $my_errno
       --let $mysql_errname = $my_errname

      We know we also need to modify alter_table.inc, but first it's interesting to see whether our changes so far actually work.

       
      perl ./mtr --suite=storage_engine-myisammrg 1st
       
      ...
       
      storage_engine-myisammrg.1st             [ pass ]     26

      Great. Let's now modify ../storage/myisammrg/mysql-test/storage_engine/alter_table.inc:

      @@ -20,9 +20,12 @@
       # --let $alter_definition = ADD COLUMN b $char_col DEFAULT ''
       # 
       
      +--let $child_alter_definition = $alter_definition
      +
       if ($rename_to)
       {
         --let $alter_definition = RENAME TO $rename_to
      +  --let $child_alter_definition = RENAME TO mrg.$rename_to
       }
       
       if (!$alter_definition)
      @@ -43,6 +46,9 @@
       }
       
       --let $alter_statement = $alter_statement TABLE $table_name $alter_definition
      +# We don't want to do ONLINE on underlying tables, we are not testing MyISAM
      +--let $child_statement = ALTER TABLE mrg.$table_name $child_alter_definition
      +
       
       
       # We now have the complete ALTER statement in $alter_statement.
      @@ -75,6 +81,20 @@
       # Surround it by --disable_query_log/--enable_query_log
       # if you don't want it to appear in the result output.
       #####################
      +--disable_query_log
      +--disable_warnings
      +
      +# We will only try to alter the underlying table if the main alter was successful
      +if (!$my_errno)
      +{
      +  if ($rename_to)
      +  {
      +    eval ALTER TABLE $rename_to UNION(mrg.$rename_to);
      +  }
      +  eval $child_statement;
      +}
      +--enable_warnings
      +--enable_query_log
       
       # Unset the parameters, we don't want them to be accidentally reused later
       --let $alter_definition = 

      Note that in both create_table.inc and alter_table.inc we run our additional code under --disable_query_log / --disable_warnings. It's a tradeoff: this way we reduce the number of mismatches (because our additional code does not produce anything), but it will also make investigation more difficult, should a problem start somewhere in this code. It's up to the person who maintains the engine suite to decide what's best.

      Example:
We have a MERGE table which points to an underlying table containing non-unique values. The test naturally assumes that it is the table under test that contains these values; in our case, however, they actually live in the underlying MyISAM table.
The test then performs ALTER TABLE .. ADD UNIQUE INDEX ... and expects it to fail.
In our case, the statement on the MERGE table will succeed, while the statement on the underlying table will fail quietly; if the test tries to do something else afterwards, it may reveal that the MERGE table and the underlying table have diverged, but it won't be clear from the test output why that happened.
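To make the scenario concrete, here is a hedged illustration (table and column names are invented for this sketch, they are not taken from the actual test):

CREATE TABLE mrg.t1 (a INT) ENGINE=MyISAM;
INSERT INTO mrg.t1 VALUES (1),(1);
CREATE TABLE t1 (a INT) ENGINE=MRG_MYISAM UNION(mrg.t1) INSERT_METHOD=LAST;
# The test expects the following to fail on the duplicate values;
# on the MERGE parent, as described above, it succeeds:
ALTER TABLE t1 ADD UNIQUE INDEX (a);
# The same statement on the child does fail, but under
# --disable_query_log / --disable_warnings the failure is invisible:
# ALTER TABLE mrg.t1 ADD UNIQUE INDEX (a);
# From this point the parent and the child are defined differently, and
# later statements on t1 are likely to fail with ER_WRONG_MRG_TABLE.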

Now let's try to run the whole suite:

      perl ./mtr --suite=storage_engine-myisammrg --force --max-test-fail=0
       
      Spent 34.141 of 80 seconds executing testcases
       
      Completed: Failed 41/98 tests, 58.16% were successful.
       

Not great, but not that bad either, all things considered. Let's look at the results.

      alter_table and some other tests produce the following mismatch on SHOW CREATE TABLE:

      @@ -127,7 +127,7 @@
         `a` int(11) DEFAULT NULL,
         `b` char(8) DEFAULT NULL,
         `c` char(8) DEFAULT NULL
      -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8
      +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8 INSERT_METHOD=LAST UNION=(`mrg`.`t1`)
       ALTER TABLE t1 DEFAULT CHARACTER SET = latin1 COLLATE latin1_general_ci;

Quite as expected, since we have additional options on our tables; this requires adding an rdiff.
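For reference, this is roughly how such an rdiff can be produced (a hedged sketch; the actual location of the .reject file is whatever mtr reports for the failing test):

perl ./mtr --suite=storage_engine-myisammrg alter_table
# Diff the original result file against the rejected output, and store
# the patch in the engine's own storage_engine directory:
diff -u suite/storage_engine/r/alter_table.result alter_table.reject \
  > ../storage/myisammrg/mysql-test/storage_engine/alter_table.rdiff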

      alter_table_online:

       ALTER ONLINE TABLE t1 MODIFY b <INT_COLUMN> DEFAULT 5;
      -ERROR HY000: Can't execute the given 'ALTER' command as online
      +# ERROR: Statement succeeded (expected results: ER_CANT_DO_ONLINE)
      +# ------------ UNEXPECTED RESULT ------------
      +# The statement|command succeeded unexpectedly.
      +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

This is all right, I guess; it's good that online ALTER can be done, right?
But this is bad:

       ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
      -ERROR HY000: Can't execute the given 'ALTER' command as online
      +ERROR HY000: Unable to open underlying table which is differently defined or of non-MyISAM type or doesn't exist
      +# ERROR: Statement ended with errno 1168, errname ER_WRONG_MRG_TABLE (expected results: ER_CANT_DO_ONLINE)
       ALTER ONLINE TABLE t1 COMMENT 'new comment';

Looking earlier in the test output, we find that we are working with temporary tables here, and there is bug MySQL:57657, which says that altering a temporary MERGE table is broken in 5.5. Whether to add an rdiff or to disable the test is a judgment call; I think I will disable it after all, although it's a bit sad. You can choose to be smarter: since you have your own alter_table.inc anyway, you can add some logic in there checking whether a table is temporary or not, as in the sketch below.
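A minimal sketch of such a check (assuming the $temporary flag is available in alter_table.inc the same way it is in create_table.inc; adjust if it is not):

# Skip the child ALTER for temporary tables to work around the broken
# ALTER on temporary MERGE tables (MySQL:57657)
if (!$my_errno)
{
  if (!$temporary)
  {
    if ($rename_to)
    {
      eval ALTER TABLE $rename_to UNION(mrg.$rename_to);
    }
    eval $child_statement;
  }
}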

      create_table:

       CREATE TABLE t1 ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> AS SELECT 1 UNION SELECT 2;
      -SHOW CREATE TABLE t1;
      -Table  Create Table
      -t1     CREATE TABLE `t1` (
      -  `1` bigint(20) NOT NULL DEFAULT '0'
      -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
      -SELECT * FROM t1;
      -1
      -1
      -2
      -DROP TABLE t1;
      +ERROR HY000: 'test.t1' is not BASE TABLE
      +# ERROR: Statement ended with errno 1347, errname ER_WRONG_OBJECT (expected to succeed)
      +# ------------ UNEXPECTED RESULT ------------
      +# The statement|command finished with ER_WRONG_OBJECT.
      +# CREATE TABLE .. AS SELECT or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

AS SELECT doesn't work with MERGE tables. We didn't account for it in our simple changes to create_table.inc, because AS SELECT is only used a few times in the suite, so it seems easier to simply accept this difference here. In general, though, it's up to the person who modifies the creation procedure.

      lock:

The test is quite messed up, because MERGE children are locked through the parent tables, which the test of course does not expect. E.g. if it locks two tables and then drops them, it expects that nothing is locked any longer, which is not true for the MERGE tables. We add an rdiff; in any case, locking is very specific to MERGE tables and needs to be tested as an engine feature rather than as basic functionality.

The rest are the usual mismatches due to unsupported functionality, etc.

The MERGE engine doesn't support partitions or transactions, but again, let's see what happens, since it's nearly free:

      mkdir ../storage/myisammrg/mysql-test/storage_engine/parts
      mkdir ../storage/myisammrg/mysql-test/storage_engine/trx

      perl ./mtr --suite=storage_engine/*-myisammrg --force --max-test-fail=0

      All tests failed, of course.

      For all partitioned tables:

      +ERROR HY000: Engine cannot be used in partitioned tables
      +# ERROR: Statement ended with errno 1572, errname ER_PARTITION_MERGE_ERROR (expected to succeed)
      +# ------------ UNEXPECTED RESULT ------------
      +# [ CREATE TABLE t1 (a INT(11) /*!*/ /*Custom column options*/) ENGINE=MRG_MYISAM /*!*/ /*Custom table options*/ UNION(mrg.t1) INSERT_METHOD=LAST PARTITION BY HASH(a) PARTITIONS 2 ]
      +# The statement|command finished with ER_PARTITION_MERGE_ERROR.
      +# Partitions or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors. 
      +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
      +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
      +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
      +# -------------------------------------------

Transactional tests run, after a fashion, but of course the diffs are as extensive as they were for MyISAM. All of this is expected, and can be solved either by removing the newly created trx and parts subdirs, or by adding rdiffs. It seems reasonable to remove parts and keep trx; but on the paranoid assumption that one day an attempt to create a partitioned MERGE table might crash the server, I will keep parts too. Anyway, all of them together take less than a second (rejecting table creation and failing everything with "table doesn't exist" is fast). So, I will add rdiffs for each file, as sketched above for alter_table.

      Running all at once now:

      perl ./mtr --suite=storage_engine-myisammrg,storage_engine/*-myisammrg
       
      Spent 46.994 of 70 seconds executing testcases
       
      Completed: All 119 tests were successful.

      Attachments

        Issue Links

          Activity

            ratzpo Rasmus Johansson (Inactive) created issue -
            ratzpo Rasmus Johansson (Inactive) made changes -
            Field Original Value New Value
            Issue Type New Feature [ 2 ] Story [ 6 ]
            ratzpo Rasmus Johansson (Inactive) made changes -
            Issue Type Story [ 6 ] Task [ 3 ]
            serg Sergei Golubchik made changes -
            serg Sergei Golubchik made changes -
            Resolution Duplicate [ 3 ]
            Status Open [ 1 ] Closed [ 6 ]
            serg Sergei Golubchik made changes -
            Workflow jira [ 10100 ] defaullt [ 10647 ]

            MDEV-30 provided instrumentation for creating storage/plugin test suites. This task will cover creating the generic suite which storage suites will be based upon.

            elenst Elena Stepanova added a comment - MDEV-30 provided instrumentation for creating storage/plugin test suites. This task will cover creating the generic suite which storage suites will be based upon.
            elenst Elena Stepanova made changes -
            Assignee Rasmus Johansson [ ratzpo ] Elena Stepanova [ elenst ]
            Resolution Duplicate [ 3 ]
            Status Closed [ 6 ] Reopened [ 4 ]
            elenst Elena Stepanova made changes -
            Fix Version/s 5.3.3 [ 10001 ]
            Summary Storage independent test suite Generic storage engine test suite
            elenst Elena Stepanova made changes -
            elenst Elena Stepanova made changes -
            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a test suite which could serve as an acceptance/conformance test for a storage engine.

            It will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.
            When the set of tests is defined, we might have to fill certain gaps by adding new tests.

            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the suite.

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests cannot be included into the generic suite.
            Note: P_S tests also fall into this category.

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.

            serg Sergei Golubchik made changes -
            Labels tests
            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a test suite which could serve as an acceptance/conformance test for a storage engine.

            It will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.
            When the set of tests is defined, we might have to fill certain gaps by adding new tests.

            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the suite.

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests cannot be included into the generic suite.
            Note: P_S tests also fall into this category.

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.

            The goal of this task is to create a test suite which could serve as an acceptance/conformance test for a storage engine.

            === Test files ===

            The new suite will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.
            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the suite.
            Example:
            main.mysql_protocols

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests cannot be included into the generic suite.
            Examples:
            main.myisam-blob
            perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            main.bool
            rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            main.select
            rpl.rpl_mixed_mixing_engines


            When existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the storage engine test suite.
             

            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a test suite which could serve as an acceptance/conformance test for a storage engine.

            === Test files ===

            The new suite will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.
            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the suite.
            Example:
            main.mysql_protocols

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests cannot be included into the generic suite.
            Examples:
            main.myisam-blob
            perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            main.bool
            rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            main.select
            rpl.rpl_mixed_mixing_engines


            When existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the storage engine test suite.
             

            The goal of this task is to create a test suite which could serve as an acceptance/conformance test for a storage engine.

            === Test files ===

            The new suite will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.
            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the suite.
            Examples:
            - main.mysql_protocols
            - mtr.newcomb

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests cannot be included into the generic suite.
            Examples:
            - main.myisam-blob
            - perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            - main.bool
            - rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            - main.select
            - rpl.rpl_mixed_mixing_engines


            When existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the storage engine test suite.
             

            === The structure of the suite ===

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them contains tests which use features specific (but not mandatory) for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication testing can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories. MTR allows subsuites, so we can have a structure like
            storage_engine/basic
            storage_engine/partitions
            storage_engine/trx
            storage_engine/savepoints

            etc. In this case, when a vendor wants to use the suite, they can simply ignore subsuites for functionality their engine does not support.

            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a test suite which could serve as an acceptance/conformance test for a storage engine.

            === Test files ===

            The new suite will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.
            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the suite.
            Examples:
            - main.mysql_protocols
            - mtr.newcomb

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests cannot be included into the generic suite.
            Examples:
            - main.myisam-blob
            - perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            - main.bool
            - rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            - main.select
            - rpl.rpl_mixed_mixing_engines


            When existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the storage engine test suite.
             

            === The structure of the suite ===

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them contains tests which use features specific (but not mandatory) for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication testing can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories. MTR allows subsuites, so we can have a structure like
            storage_engine/basic
            storage_engine/partitions
            storage_engine/trx
            storage_engine/savepoints

            etc. In this case, when a vendor wants to use the suite, they can simply ignore subsuites for functionality their engine does not support.

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            Since this set of tests can be implemented in different ways, not just as a test suite in MTR terms, to avoid further conclusion, I will call it "[SE] test pack" (as opposed to "test suite" as 'main' or suite/* in MTR).

            == Test cases ==

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            - main.mysql_protocols
            - mtr.newcomb

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            - main.myisam-blob
            - perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            - main.bool
            - rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            - main.select
            - rpl.rpl_mixed_mixing_engines


            After existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             

            == Preferable structure of the pack ==

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them contains tests which use features specific (but not mandatory) for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication testing can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            storage_engine/basic
            storage_engine/partitions
            storage_engine/trx
            storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            == Implementation ==

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            === Undesirable implementation possibilities ===

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            === Acceptable implementation possibilities ===

            I see two reasonable ways to create and provide the SE test pack to users.

            ==== 1. Set of overlays of existing test suites ====

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template wrapper for each desired test, or we'll choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are noticeable disadvantages.

            - Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of 'disabled'. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            - Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach, we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay for it. It also means that the vendor will not be able to create any other overlay of the same suite.

            - Noise and misuse of 'disabled'
            There are ~2800 tests in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2100 tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.

            Summary Generic storage engine test suite Generic storage engine test pack
            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            Since this set of tests can be implemented in different ways, not just as a test suite in MTR terms, to avoid further conclusion, I will call it "[SE] test pack" (as opposed to "test suite" as 'main' or suite/* in MTR).

            == Test cases ==

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            - main.mysql_protocols
            - mtr.newcomb

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            - main.myisam-blob
            - perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            - main.bool
            - rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            - main.select
            - rpl.rpl_mixed_mixing_engines


            After existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             

            == Preferable structure of the pack ==

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them contains tests which use features specific (but not mandatory) for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication testing can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            storage_engine/basic
            storage_engine/partitions
            storage_engine/trx
            storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            == Implementation ==

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            === Undesirable implementation possibilities ===

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            === Acceptable implementation possibilities ===

            I see two reasonable ways to create and provide the SE test pack to users.

            ==== 1. Set of overlays of existing test suites ====

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template wrapper for each desired test, or we'll choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are noticeable disadvantages.

            - Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of 'disabled'. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            - Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach, we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay for it. It also means that the vendor will not be able to create any other overlay of the same suite.

            - Noise and misuse of 'disabled'
            There are ~2800 tests in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2100 tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            Since this set of tests can be implemented in different ways, not just as a test suite in MTR terms, to avoid confusion, I will call it "[SE] test pack" (as opposed to "test suite" as 'main' or suite/* in MTR).

            == Test cases ==

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            The whole universe of existing MTR tests can be divided into 3 categories:

            1. True engine-independent tests
            These are tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            - main.mysql_protocols
            - mtr.newcomb

            2. Engine-specific tests
            There are numerous tests which cover functionality or behavior specific for a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            - main.myisam-blob
            - perfschema.checksum

            3. Tests which are able to work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            - main.bool
            - rpl.bit

            Some tests are a mix of 2 and 3 -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            - main.select
            - rpl.rpl_mixed_mixing_engines


            After existing tests have been filtered, we might have to fill certain gaps by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test based functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             

            == Preferable structure of the pack ==

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them contains tests which use features specific (but not mandatory) for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication testing can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            storage_engine/basic
            storage_engine/partitions
            storage_engine/trx
            storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            == Implementation ==

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            === Undesirable implementation possibilities ===

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            === Acceptable implementation possibilities ===

            I see two reasonable ways to create and provide the SE test pack to users.

            ==== 1. Set of overlays of existing test suites ====

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template wrapper for each desired test, or we'll choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are noticeable disadvantages.

            - Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of 'disabled'. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            - Usability
            This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file etc. As a side-effect, it will not be usable for release engines/builds as the packages do not have the required structure.

            - Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach, we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay for it. It also means that the vendor will not be able to create any other overlay of the same suite.

            - Noise and misuse of 'disabled'
            There are ~2800 tests in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2100 tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.


            ==== 2. Overlay of a slice of tests ====

            The main disadvantage of this approach is that it introduces yet another concept to already overcomplicated MTR, and requires implementation of it (although it should not be too big).
            The draft names of the concept is "slice" or "referenced tests".

            Currently we are able to create a suite with suite.pm file which can define a list of tests to run, through re-defining list_cases subroutine. The subroutine returns the list of test case names as an array:
            ( 'test1', 'test2', ...).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.

            Suppose we would have another subroutine, list_referenced_cases. Instead of the array of names, it would return pairs (not necessarily a hash as the elements might be not unique), where the 1st element is a name of a test, and the 2nd element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

            MTR would read the list of references along with the 'normal' list, and would know that everything that is related to this test (test file, result file, test-specific cnf/opt/combination files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

            There would be no merging or additional lookup places, but replacement. Lets take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

            Consider a test foobar.test in the combination pair aa,bb, that is run in the overlay rty of the suite qwe, in other words, for the test that mtr prints as

            qwe-rty.foobar 'aa,bb' [ pass ]

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

            They are listed, precisely, in the order of preference, and mtr will walk that list from top to bottom and the first file that is found will be used.
            _Note: is it a typo? Should there be either 'foo.test' and qwe-rty.foo, or should files be named foobar,...?_


            Now, consider that we we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite.

            We want the result to be displayed as
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]

            And we want the components of the result file be still searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            For the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).

            I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a proof of concept yet, so it's theoretical.

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            storage_engine/basic
            storage_engine/transactions
            etc.
            Each subsuite would contain suite.pm which would defined the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic:
            list_referenced_cases { .. return ( ( 'bool', 'main' ), ( 'bit', 'rpl' ), ... ) };
            list_cases { .. return ( 'new_test1', 'new_test2', ... ) };

            The suite could also contain suite.opt, combinations etc.

            _Note: some remaining technical questions with this solution:
            1. what to do with disabled.def files -- on one hand, it's a suite-specific file, on the other hand, it would be much more meaningful to check if a test is disabled in the referenced folder.
            2. if we configure references as pairs of (<test name>, <suite name>), it means we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce a test nickname, if it seemed necessary._

            This way everyone could use the test pack as a normal MTR suite, just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow to run it as --suite=storage_engine).

            At the same time, we would provide a template of the overlay for this test suite, which could be used by storage vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would still be under our control.

            elenst Elena Stepanova made changes -
Description

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            Since this set will not necessarily be a test suite in MTR terms, to avoid confusion, I will call it "[SE] *test pack*" (as opposed to "test suite" in MTR).

            *Test cases*

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            Existing MTR tests can be roughly divided into 3 categories:

            * True engine-independent tests
            Tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            ** main.mysql_protocols
            ** mtr.newcomb

            * Engine-specific tests
            There are numerous tests which cover functionality or behavior specific to a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            ** main.myisam-blob
            ** perfschema.checksum

            * Tests which can work with different engines
These might be tests which either rely on the default storage engine (e.g. create tables without an ENGINE clause) or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            ** main.bool
            ** rpl.bit

* Some tests are a mix -- they might use a hardcoded engine for some tables and leave others flexible. They are also important, as they allow testing a mix of engines in one workflow.
            Examples:
            ** main.select
            ** rpl.rpl_mixed_mixing_engines


If we find gaps while going through existing tests, we might have to fill them by adding new tests. There will be simple cases: for example, the partitioning suite provides generic .inc files testing basic functionality, but the actual *.test files are all engine-specific. We will need a test file which either uses an external variable or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
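
A minimal sketch of what such a generic test file could look like, in mysqltest language (the sourced file name is illustrative, not a commitment to an actual file; the $engine_type convention is the one already used by existing tests):

{noformat}
# Hypothetical generic wrapper: the engine comes from outside,
# falling back to the default storage engine if nothing is set
if (!$engine_type)
{
  let $engine_type = `SELECT @@storage_engine`;
}
--source suite/parts/inc/partition_basic.inc
{noformat}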
             
            *Preferable structure of the pack*

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them might contain tests using features which are not mandatory for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication tests can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            * storage_engine/basic
            * storage_engine/partitions
            * storage_engine/trx
            * storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            *Implementation*

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            *== Undesirable implementation possibilities ==*

Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (renaming some if necessary), and to have an overlay of the new suite(s). However, duplicating files would make further maintenance so costly and error-prone that we won't consider this a viable option.

The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, it has many others, such as damaging existing testing, probably making further merges more difficult, etc. For example, if we move a number of files from the replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without considerable additional effort.

            *== Acceptable implementation possibilities ==*

            I see two reasonable ways to create and provide the SE test pack to users.

            *==== 1. Set of overlays of existing test suites ====*

After we define the set of existing tests as described above in the "Test cases" section, we will create an overlay for every test suite which contains at least one test chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either take the inclusive approach and create a template for include files and a wrapper for each desired test, or take the exclusive approach and create a list of disabled tests (all but those chosen for the suite) and a template of an option file.

The advantage of this approach is that it can be used with the existing MTR instrumentation; no further changes are needed (except for maybe some bugfixes). However, there are disadvantages.

            * Maintenance
Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provide, and make changes to their copy.
Then, whenever a new test is added to any existing suite (which has an overlay): if the exclusive approach was taken and we don't want the new test to become a part of the SE pack, the vendor will have to find out about it somehow and add the test to their version of disabled.def; if the inclusive approach was taken and we do want the new test in the SE pack, then again, the vendor will need to find out about it and add a wrapper for the new test.

            * Usability
This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file, etc. As a side effect, it will not be usable for released engines/builds, as the packages do not have the required structure.

            * Test pack structure
As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach (overlays over existing suites), we have no choice: we'll have to maintain the exact same structure. If we find a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay of the entire suite. It also means that the vendor will not be able to create any other overlay of the same suite.

            * Noise and misuse of disabled.def
There are ~2800 test files in MTR test suites now. Let's say we choose 700 of them for the SE pack, spread across most of the suites. It means that we will either need to create 700 wrappers (not too difficult, just not pretty), or we'll need to put ~2K tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those disabled temporarily due to errors) more difficult.


            *==== 2. Overlay of a slice of tests ====*

The main disadvantage of this approach is that it introduces yet another concept to the already overcomplicated MTR, and requires implementing it (although it should not be too big a task).
The draft names for the concept are "slice" or "referenced tests".

Currently we can create a suite.pm file which defines a list of tests to run by re-defining the list_cases subroutine. The subroutine returns the list of test case names as an array:
            ( 'test1', 'test2', ...).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.
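
For reference, a minimal sketch of such a suite.pm as it could look today (the package name is a placeholder, and real suite.pm files usually define more than this):

{noformat}
package My::Suite::Example;
use base 'My::Suite';

# Only the tests returned here are collected for the suite
sub list_cases {
  return ( 'test1', 'test2' );
}

# suite.pm files return a blessed object to MTR
bless { };
{noformat}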

Suppose we had another subroutine, list_referenced_cases. Instead of an array of names, it would return pairs (not necessarily a hash, as the elements might not be unique), where the 1st element is the name of a test, and the 2nd element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

            MTR would read the list of references along with the 'normal' list, and would know that everything that is related to this referenced test (test file, result file, test-specific cnf/opt/combination files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

There would be no merging or additional lookup places, only replacement. Let's take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

            Consider a test foobar.test in the combination pair aa,bb, that is run in the overlay rty of the suite qwe, in other words, for the test that mtr prints as

            {noformat}
            qwe-rty.foobar 'aa,bb' [ pass ]
            {noformat}

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

            _Note: is it a typo? Should there be either 'foo.test' and qwe-rty.foo, or should files be named foobar,...?_

Now, consider that we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite (the real location of the test 'foo').

            We want the result to be displayed as

            {noformat}
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]
            {noformat}

And we want the components of the result file to still be searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            That is, for the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).
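
For illustration only, a short Perl snippet that reproduces this lookup order under the naming above (this is not MTR code, just a restatement of the rule):

{noformat}
# Generate the lookup order for test 'foo', combinations aa,bb,
# overlay 'uvw' and referenced suite 'qwe'
my @dirs         = ( 'uvw/r', 'qwe/r' );
my @combinations = ( 'aa,bb', 'aa', 'bb', '' );
my @lookup;
for my $c (@combinations) {
  my $base = $c ? "foo,$c" : 'foo';
  for my $d (@dirs) {
    push @lookup, "$d/$base.result", "$d/$base.rdiff";
  }
}
# The very last name (qwe/r/foo.rdiff) is dropped: an rdiff is a patch,
# and there is no further result file it could be applied to
pop @lookup;
print "$_\n" for @lookup;   # prints the 15 names in order of preference
{noformat}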

            _I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a working version yet, so I'm just guessing._
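
A very rough sketch of that idea; the member name refsuite and the helper below are assumptions for illustration, not existing MTR code:

{noformat}
# Hypothetical: resolve the directory used for test-specific file lookups
sub lookup_suite_dir {
  my ($tinfo) = @_;   # a My::Test object (hash-based)
  # If the test is referenced from another suite, that suite's folder
  # replaces the parent suite folder for test-specific files
  my $suite = defined $tinfo->{refsuite} ? $tinfo->{refsuite}
                                         : $tinfo->{suite};
  return "suite/$suite";
}
{noformat}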

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            - storage_engine/basic
            - storage_engine/transactions
            etc.

            Each subsuite would contain suite.pm which would define the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic/suite.pm:

{noformat}
package My::Suite::Storage_Engine_Basic;
use base 'My::Suite';

# Referenced tests as [ test name, suite where the test really lives ]
# (array refs rather than bare pairs, since nested lists flatten in Perl)
sub list_referenced_cases {
  return ( [ 'bool', 'main' ], [ 'bit', 'rpl' ] );
}

# 'Normal' tests, located in this suite folder as usual
sub list_cases {
  return ( 'new_test1', 'new_test2' );
}

bless { };
{noformat}

            The suite could also contain suite.opt, combinations etc.

            {quote}
            _Note: some remaining technical questions with this solution:_
_1. what to do with disabled.def files? On the one hand, it's a suite-specific file; on the other hand, it would be much more meaningful to check whether a test is disabled in the referenced folder._
            _2. if we configure references as pairs of (<test name>, <suite name>), it means we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce a test nickname, if it seemed necessary._
            {quote}

This way everyone could use the test pack as a normal MTR suite, just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow running it as --suite=storage_engine).
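
For example, a hypothetical invocation (--suite and --mysqld are existing MTR options; the engine name is a placeholder):

{noformat}
./mysql-test-run.pl \
  --suite=storage_engine/basic,storage_engine/transactions \
  --mysqld=--default-storage-engine=example_engine
{noformat}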

At the same time, we would provide a template of the overlay for this test suite, which could be used by storage engine vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would still be under our control.


            *What needs to be done*

            Personally, I find the 2nd approach from the previous section more solid, but since we might not agree on this, I will outline further actions for either of the two.

            +Whichever approach we choose, 1st or 2nd:+

* Go through the existing test files (and sometimes include files) and decide, on a per-case basis, which tests are valuable enough to be included into the SE test pack. It is a tedious task, as there are over 3K files altogether, but it has to be done.
            {quote}
            _Additionally, we need to make notes on the tests which cannot be included as is, but might be modified without affecting the current functionality; and those which should be copied and made generic. These additional changes do not have to be implemented right away, for the initial version of the test pack it is enough to have the list of existing files; but it makes no sense to go through the tests later again._
            {quote}
            _Rough estimation: 1 min per file, 3.5K files in mysql-test, 3500 min => ~60 hours. Realistically can be more._

            * if important gaps in coverage are noticed, add the tests to fill them;
            _No estimation since the scope is unknown; in any case, if the gaps are not critical, it can be done later._

            * create a new folder under mysql-test, probably overlays;
            _Estimation: negligible_

            * create overlays/README explaining what to do with the overlay templates;
            _Estimation: 0.5 h_

* create a new folder storage_engine under overlays (SE vendors will need to copy it under storage/<SE name>/ and rename it to mysql-test).
            _Estimation: negligible_


            +If we decide to go with the 1st approach (overlays of existing test suites):+

            * for each test suite where we found valuable test cases, create a folder of the same name under overlays/storage_engine/;
            _Estimation: negligible_

            * under each overlay folder, create stubs for combinations, suite.opt, disabled.def files;
            _Estimation: 0.5 hours (need to add comments etc.)_

            * depending on the ratio between the total number of tests in the parent suite and the number of tests we selected in this suite, either add unneeded tests to the disabled.def file, or create stubs for have_<SE name>.inc and a wrapper for each needed test (e.g. if the parent suite contains 400 tests and we only need 10, it's easier to use the inclusive approach, but if we need 300, it might make more sense to disable the rest);
_Rough estimation: highly depends on the resulting number of files, let's say 4h as a wild guess_
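
To give an idea of scale, one possible shape of such a wrapper in mysqltest language (all file names here are placeholders, and the exact mechanics would depend on how overlays resolve test files):

{noformat}
# overlays/storage_engine/rpl/t/bit.test -- hypothetical wrapper
--source include/have_example_engine.inc
--source suite/rpl/t/bit.test
{noformat}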


            +If we decide to go with the 2nd approach (overlay over a test suite with referenced tests):+

            * Implement the changes in MTR (I don't have LDD at the moment, but if we decide to go this way I could try to create a proof of concept);
            _Estimation: can't guess at this point, I can hardly imagine it would take more than a few hours, but I might be wrong_

            * create the new suite storage_engine under mysql-test/suite/
            _Estimation: negligible_

            * define useful categories which the selected tests can be split into;
            _Estimation: 0.5h (requires thinking)_

            * for each defined category, create a subsuite storage_engine/<category_name>
            _Estimation: negligible_

            * under each subsuite, create stubs for combinations, suite.opt files;
            _Estimation: 0.5h (comments etc.)_

            * for each subsuite, create suite.pm with the list of referenced tests (and normal tests if we added any).
            _Rough estimation: 20min per file, probably 5-6 subsuites => ~2h_

            elenst Elena Stepanova made changes -
            elenst Elena Stepanova added a comment - - edited

            Whichever approach we choose, we will need MDEV-202 to work, as MTR already contains nested suites which we'll need for our test pack (suite/engines/*)

            elenst Elena Stepanova made changes -
            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            Since this set will not necessarily be a test suite in MTR terms, to avoid confusion, I will call it "[SE] *test pack*" (as opposed to "test suite" in MTR).

            *Test cases*

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            Existing MTR tests can be roughly divided into 3 categories:

            * True engine-independent tests
            Tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            ** main.mysql_protocols
            ** mtr.newcomb

            * Engine-specific tests
            There are numerous tests which cover functionality or behavior specific to a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            ** main.myisam-blob
            ** perfschema.checksum

            * Tests which can work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            ** main.bool
            ** rpl.bit

            * Some tests are a mix -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            ** main.select
            ** rpl.rpl_mixed_mixing_engines


            If we find gaps while going through existing tests, we might have to fill them by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test basic functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             
            *Preferable structure of the pack*

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them might contain tests using features which are not mandatory for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication tests can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            * storage_engine/basic
            * storage_engine/partitions
            * storage_engine/trx
            * storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            *Implementation*

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            *== Undesirable implementation possibilities ==*

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing, probably making further merges more difficult, etc. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            *== Acceptable implementation possibilities ==*

            I see two reasonable ways to create and provide the SE test pack to users.

            *==== 1. Set of overlays of existing test suites ====*

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template for include files and a wrapper for each desired test, or choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are disadvantages.

            * Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of disabled.def. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            * Usability
            This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file etc. As a side-effect, it will not be usable for release engines/builds as the packages do not have the required structure.

            * Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach (overlays over existing suites), we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay of the entire suite. It also means that the vendor will not be able to create any other overlay of the same suite.

            * Noise and misuse of disabled.def
            There are ~2800 test files in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2K tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.


            *==== 2. Overlay of a slice of tests ====*

            The main disadvantage of this approach is that it introduces yet another concept to already overcomplicated MTR, and requires implementation of it (although it should not be too big task).
            The draft names of the concept are "slice" or "referenced tests".

            Currently we can create suite.pm file which is able to define a list of tests to run, through re-defining list_cases subroutine. The subroutine returns the list of test case names as an array:
            ( 'test1', 'test2', ...).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.

            Suppose we would have another subroutine, list_referenced_cases. Instead of the array of names, it would return pairs (not necessarily a hash as the elements might be not unique), where the 1st element is a name of a test, and the 2nd element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

            MTR would read the list of references along with the 'normal' list, and would know that everything that is related to this referenced test (test file, result file, test-specific cnf/opt/combination files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

            There would be no merging or additional lookup places, only replacement. Lets take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

            Consider a test foobar.test in the combination pair aa,bb, that is run in the overlay rty of the suite qwe, in other words, for the test that mtr prints as

            {noformat}
            qwe-rty.foobar 'aa,bb' [ pass ]
            {noformat}

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

            _Note: is it a typo? Should there be either 'foo.test' and qwe-rty.foo, or should files be named foobar,...?_

            Now, consider that we we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite (the real location of the test 'foo').

            We want the result to be displayed as

            {noformat}
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]
            {noformat}

            And we want the components of the result file be still searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            That is, for the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).

            _I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a working version yet, so I'm just guessing._

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            - storage_engine/basic
            - storage_engine/transactions
            etc.

            Each subsuite would contain suite.pm which would define the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic/suite.pm:

            {noformat}
            ...
            list_referenced_cases {
               ..
               return ( ( 'bool', 'main' ), ( 'bit', 'rpl' ), ... )
            };
            list_cases {
               ..
               return ( 'new_test1', 'new_test2', ... )
            };
            ...
            {noformat}

            The suite could also contain suite.opt, combinations etc.

            {quote}
            _Note: some remaining technical questions with this solution:_
            _1. what to do with disabled.def files? On one hand, it's a suite-specific file, on the other hand, it would be much more meaningful to check if a test is disabled in the referenced folder._
            _2. if we configure references as pairs of (<test name>, <suite name>), it means we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce a test nickname, if it seemed necessary._
            {quote}

            This way everyone could use the test pack as a normal MTR suite, by just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow to run it as --suite=storage_engine).

            At the same time, we would provide a template of the overlay for this test suite, which could be used by storage vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would be still under our control.


            *What needs to be done*

            Personally, I find the 2nd approach from the previous section more solid, but since we might not agree on this, I will outline further actions for either of the two.

            +Whichever approach we choose, 1st or 2nd:+

            * Go through the existing test files (and sometimes include files) and decide, on per-case basis, which tests are valuable enough to include them into the SE test pack. It is tedious task as there are over 3K files altogether, but it has to be done.
            {quote}
            _Additionally, we need to make notes on the tests which cannot be included as is, but might be modified without affecting the current functionality; and those which should be copied and made generic. These additional changes do not have to be implemented right away, for the initial version of the test pack it is enough to have the list of existing files; but it makes no sense to go through the tests later again._
            {quote}
            _Rough estimation: 1 min per file, 3.5K files in mysql-test, 3500 min => ~60 hours. Realistically can be more._

            * if important gaps in coverage are noticed, add the tests to fill them;
            _No estimation since the scope is unknown; in any case, if the gaps are not critical, it can be done later._

            * create a new folder under mysql-test, probably overlays;
            _Estimation: negligible_

            * create overlays/README explaining what to do with the overlay templates;
            _Estimation: 0.5 h_

            * create a new folder storage_engine under overlays (SE vendors will need to copy it under storage/<SE name>/ and rename into mysql-test).
            _Estimation: negligible_


            +If we decide to go with the 1st approach (overlays of existing test suites):+

            * for each test suite where we found valuable test cases, create a folder of the same name under overlays/storage_engine/;
            _Estimation: negligible_

            * under each overlay folder, create stubs for combinations, suite.opt, disabled.def files;
            _Estimation: 0.5 hours (need to add comments etc.)_

            * depending on the ratio between the total number of tests in the parent suite and the number of tests we selected in this suite, either add unneeded tests to the disabled.def file, or create stubs for have_<SE name>.inc and a wrapper for each needed test (e.g. if the parent suite contains 400 tests and we only need 10, it's easier to use the inclusive approach, but if we need 300, it might make more sense to disable the rest);
            _Rough estimation: highly depends on the resulting number of files, lets say 4h as a wild guess_


            +If we decide to go with the 2nd approach (overlay over a test suite with referenced tests):+

            * Implement the changes in MTR (I don't have LDD at the moment, but if we decide to go this way I could try to create a proof of concept);
            _Estimation: can't guess at this point, I can hardly imagine it would take more than a few hours, but I might be wrong_

            * create the new suite storage_engine under mysql-test/suite/
            _Estimation: negligible_

            * define useful categories which the selected tests can be split into;
            _Estimation: 0.5h (requires thinking)_

            * for each defined category, create a subsuite storage_engine/<category_name>
            _Estimation: negligible_

            * under each subsuite, create stubs for combinations, suite.opt files;
            _Estimation: 0.5h (comments etc.)_

            * for each subsuite, create suite.pm with the list of referenced tests (and normal tests if we added any).
            _Rough estimation: 20min per file, probably 5-6 subsuites => ~2h_

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            Since this set will not necessarily be a test suite in MTR terms, to avoid confusion, I will call it "[SE] *test pack*" (as opposed to "test suite" in MTR).

            *Test cases*

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            Existing MTR tests can be roughly divided into 3 categories:

            * True engine-independent tests
            Tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            ** main.mysql_protocols
            ** mtr.newcomb

            * Engine-specific tests
            There are numerous tests which cover functionality or behavior specific to a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            ** main.myisam-blob
            ** perfschema.checksum

            * Tests which can work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            ** main.bool
            ** rpl.bit

            * Some tests are a mix -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            ** main.select
            ** rpl.rpl_mixed_mixing_engines


            If we find gaps while going through existing tests, we might have to fill them by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test basic functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             
            *Preferable structure of the pack*

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them might contain tests using features which are not mandatory for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication tests can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            * storage_engine/basic
            * storage_engine/partitions
            * storage_engine/trx
            * storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            *Implementation*

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            *== Undesirable implementation possibilities ==*

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing, probably making further merges more difficult, etc. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            *== Acceptable implementation possibilities ==*

            I see two reasonable ways to create and provide the SE test pack to users.

            *==== 1. Set of overlays of existing test suites ====*

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template for include files and a wrapper for each desired test, or choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are disadvantages.

            * Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of disabled.def. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            * Usability
            This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file etc. As a side-effect, it will not be usable for release engines/builds as the packages do not have the required structure.

            * Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach (overlays over existing suites), we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay of the entire suite. It also means that the vendor will not be able to create any other overlay of the same suite.

            * Noise and misuse of disabled.def
            There are ~2800 test files in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2K tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.


            *==== 2. Overlay of a slice of tests ====*

            The main disadvantage of this approach is that it introduces yet another concept to already overcomplicated MTR, and requires implementation of it (although it should not be too big task).
            The draft names of the concept are "slice" or "referenced tests".

Currently we can create a suite.pm file which defines the list of tests to run by redefining the list_cases subroutine. The subroutine returns the list of test case names as an array:
( 'test1', 'test2', ... ).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.
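For reference, such a suite.pm could look roughly like the sketch below (the package name, base class and calling convention are my assumptions, not verified MTR internals):

{noformat}
package My::Suite::Example;
@ISA = qw(My::Suite);

# Redefining list_cases makes MTR run exactly these tests; all their
# files are still looked up in this suite's own directory
sub list_cases {
  return ( 'test1', 'test2' );
}

bless { };
{noformat}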

Suppose we had another subroutine, list_referenced_cases. Instead of an array of names, it would return pairs (not necessarily a hash, as the elements might not be unique), where the first element is the name of a test and the second element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

MTR would read the list of references along with the 'normal' list, and would know that everything related to a referenced test (test file, result file, test-specific cnf/opt/combinations files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

There would be no merging or additional lookup places, only replacement. Let's take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

Consider a test foo.test in the combination pair aa,bb, that is run in the overlay rty of the suite qwe; in other words, the test that mtr prints as

            {noformat}
qwe-rty.foo 'aa,bb' [ pass ]
            {noformat}

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

Now, consider that we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite (the real location of the test 'foo').

            We want the result to be displayed as

            {noformat}
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]
            {noformat}

And we want the components of the result file to still be searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            That is, for the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).

            _I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a working version yet, so I'm just guessing._
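To make that guess a bit more concrete, the switch could be as small as the sketch below (all member and subroutine names in it are invented for illustration):

{noformat}
# Sketch only: choose the suite used for test-specific lookups.
# If a referenced suite was recorded in My::Test, it fully replaces
# the parent suite for test/result/opt/cnf/combinations lookups.
sub lookup_suite {
  my ($tinfo) = @_;
  return defined $tinfo->{referenced_suite}
       ? $tinfo->{referenced_suite}
       : $tinfo->{suitename};
}
{noformat}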

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            - storage_engine/basic
            - storage_engine/transactions
            etc.

Each subsuite would contain a suite.pm which would define the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic/suite.pm:

            {noformat}
            ...
# Note: in real Perl, nested parentheses flatten into one list, so the
# pairs would have to be array references:
sub list_referenced_cases {
   ..
   return ( [ 'bool', 'main' ], [ 'bit', 'rpl' ], ... );
}
sub list_cases {
   ..
   return ( 'new_test1', 'new_test2', ... );
}
            ...
            {noformat}

            The suite could also contain suite.opt, combinations etc.

            {quote}
            _Note: some remaining technical questions with this solution:_
_1. What to do with disabled.def files? On one hand, it's a suite-specific file; on the other hand, it would be much more meaningful to check whether a test is disabled in the referenced folder._
_2. If we configure references as pairs of (<test name>, <suite name>), we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce test nicknames, if necessary._
            {quote}

This way everyone could use the test pack as a normal MTR suite, by just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow running it as --suite=storage_engine).
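For example, a run against a hypothetical engine 'example' might look like:

{noformat}
./mysql-test-run.pl --suite=storage_engine/basic,storage_engine/transactions \
    --mysqld=--default-storage-engine=example
{noformat}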

At the same time, we would provide a template of the overlay for this test suite, which could be used by storage engine vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would still be under our control.


            *What needs to be done*

Personally, I find the 2nd approach from the previous section more solid, but since we might not agree on this, I will outline further actions for each of the two.

            +Whichever approach we choose, 1st or 2nd:+

* Go through the existing test files (and sometimes include files) and decide, on a per-case basis, which tests are valuable enough to include in the SE test pack. It is a tedious task as there are over 3K files altogether, but it has to be done.
            {quote}
_Additionally, we need to make notes on the tests which cannot be included as is but might be modified without affecting the current functionality, and on those which should be copied and made generic. These additional changes do not have to be implemented right away; for the initial version of the test pack it is enough to have the list of existing files; but it makes no sense to go through the tests again later._
            {quote}
_Rough estimation: 1 min per file, 3.5K files in mysql-test, 3500 min => ~60 hours. Realistically it can be more._

            * if important gaps in coverage are noticed, add the tests to fill them;
            _No estimation since the scope is unknown; in any case, if the gaps are not critical, it can be done later._

* create a new folder under mysql-test, probably named overlays;
            _Estimation: negligible_

            * create overlays/README explaining what to do with the overlay templates;
            _Estimation: 0.5 h_

* create a new folder storage_engine under overlays (SE vendors will need to copy it under storage/<SE name>/ and rename it to mysql-test).
            _Estimation: negligible_


            +If we decide to go with the 1st approach (overlays of existing test suites):+

            * for each test suite where we found valuable test cases, create a folder of the same name under overlays/storage_engine/;
            _Estimation: negligible_

* under each overlay folder, create stubs for combinations, suite.opt, disabled.def files (see the sketch after this list);
            _Estimation: 0.5 hours (need to add comments etc.)_

* depending on the ratio between the total number of tests in the parent suite and the number of tests we selected from it, either add the unneeded tests to the disabled.def file, or create stubs for have_<SE name>.inc and a wrapper for each needed test (e.g. if the parent suite contains 400 tests and we only need 10, it's easier to use the inclusive approach, but if we need 300, it might make more sense to disable the rest);
            {quote}
However, since the first edition of the test pack most likely won't be complete (we might want to add more tests later), and since, as described before, propagation of new tests to already cloned sets of overlays might be problematic, we should probably go with the exclusive approach in most cases -- this way we at least keep the ability to add tests.
            {quote}
_Rough estimation: highly depends on the resulting number of files; let's say 5h as a wild guess_
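A sketch of what the stub files might contain (the option names below are only examples, not requirements):

{noformat}
# combinations: each [section] names a set of server options;
# mtr runs every test once per section
[row]
--binlog-format=row

[stmt]
--binlog-format=statement

# suite.opt: server options applied to every test in the overlay
--default-storage-engine=example
{noformat}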


            +If we decide to go with the 2nd approach (overlay over a test suite with referenced tests):+

            * Implement the changes in MTR (I don't have LDD at the moment, but if we decide to go this way I could try to create a proof of concept);
            _Estimation: can't guess at this point, I can hardly imagine it would take more than a few hours, but I might be wrong_

            * create the new suite storage_engine under mysql-test/suite/
            _Estimation: negligible_

            * define useful categories which the selected tests can be split into;
            _Estimation: 0.5h (requires thinking)_

            * for each defined category, create a subsuite storage_engine/<category_name>
            _Estimation: negligible_

            * under each subsuite, create stubs for combinations, suite.opt files;
            _Estimation: 0.5h (comments etc.)_

            * for each subsuite, create suite.pm with the list of referenced tests (and normal tests if we added any).
_Rough estimation: 30 min per file, probably 5-6 subsuites => ~3h_

            serg Sergei Golubchik made changes -
            Description The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            Since this set will not necessarily be a test suite in MTR terms, to avoid confusion, I will call it "[SE] *test pack*" (as opposed to "test suite" in MTR).

            *Test cases*

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            Existing MTR tests can be roughly divided into 3 categories:

            * True engine-independent tests
            Tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            ** main.mysql_protocols
            ** mtr.newcomb

            * Engine-specific tests
            There are numerous tests which cover functionality or behavior specific to a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            ** main.myisam-blob
            ** perfschema.checksum

            * Tests which can work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            ** main.bool
            ** rpl.bit

            * Some tests are a mix -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            ** main.select
            ** rpl.rpl_mixed_mixing_engines


            If we find gaps while going through existing tests, we might have to fill them by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test basic functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             
            *Preferable structure of the pack*

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them might contain tests using features which are not mandatory for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication tests can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            * storage_engine/basic
            * storage_engine/partitions
            * storage_engine/trx
            * storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            *Implementation*

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            *== Undesirable implementation possibilities ==*

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing, probably making further merges more difficult, etc. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            *== Acceptable implementation possibilities ==*

            I see two reasonable ways to create and provide the SE test pack to users.

            *==== 1. Set of overlays of existing test suites ====*

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template for include files and a wrapper for each desired test, or choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are disadvantages.

            * Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of disabled.def. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            * Usability
            This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file etc. As a side-effect, it will not be usable for release engines/builds as the packages do not have the required structure.

            * Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach (overlays over existing suites), we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay of the entire suite. It also means that the vendor will not be able to create any other overlay of the same suite.

            * Noise and misuse of disabled.def
            There are ~2800 test files in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2K tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.


            *==== 2. Overlay of a slice of tests ====*

            The main disadvantage of this approach is that it introduces yet another concept to already overcomplicated MTR, and requires implementation of it (although it should not be too big task).
            The draft names of the concept are "slice" or "referenced tests".

            Currently we can create suite.pm file which is able to define a list of tests to run, through re-defining list_cases subroutine. The subroutine returns the list of test case names as an array:
            ( 'test1', 'test2', ...).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.

            Suppose we would have another subroutine, list_referenced_cases. Instead of the array of names, it would return pairs (not necessarily a hash as the elements might be not unique), where the 1st element is a name of a test, and the 2nd element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

            MTR would read the list of references along with the 'normal' list, and would know that everything that is related to this referenced test (test file, result file, test-specific cnf/opt/combination files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

            There would be no merging or additional lookup places, only replacement. Lets take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

            Consider a test foobar.test in the combination pair aa,bb, that is run in the overlay rty of the suite qwe, in other words, for the test that mtr prints as

            {noformat}
            qwe-rty.foobar 'aa,bb' [ pass ]
            {noformat}

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

            _Note: is it a typo? Should there be either 'foo.test' and qwe-rty.foo, or should files be named foobar,...?_

            Now, consider that we we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite (the real location of the test 'foo').

            We want the result to be displayed as

            {noformat}
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]
            {noformat}

            And we want the components of the result file be still searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            That is, for the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).

            _I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a working version yet, so I'm just guessing._

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            - storage_engine/basic
            - storage_engine/transactions
            etc.

            Each subsuite would contain suite.pm which would define the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic/suite.pm:

            {noformat}
            ...
            list_referenced_cases {
               ..
               return ( ( 'bool', 'main' ), ( 'bit', 'rpl' ), ... )
            };
            list_cases {
               ..
               return ( 'new_test1', 'new_test2', ... )
            };
            ...
            {noformat}

            The suite could also contain suite.opt, combinations etc.

            {quote}
            _Note: some remaining technical questions with this solution:_
            _1. what to do with disabled.def files? On one hand, it's a suite-specific file, on the other hand, it would be much more meaningful to check if a test is disabled in the referenced folder._
            _2. if we configure references as pairs of (<test name>, <suite name>), it means we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce a test nickname, if it seemed necessary._
            {quote}

            This way everyone could use the test pack as a normal MTR suite, by just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow to run it as --suite=storage_engine).

            At the same time, we would provide a template of the overlay for this test suite, which could be used by storage vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would be still under our control.


            *What needs to be done*

            Personally, I find the 2nd approach from the previous section more solid, but since we might not agree on this, I will outline further actions for either of the two.

            +Whichever approach we choose, 1st or 2nd:+

            * Go through the existing test files (and sometimes include files) and decide, on per-case basis, which tests are valuable enough to include them into the SE test pack. It is tedious task as there are over 3K files altogether, but it has to be done.
            {quote}
            _Additionally, we need to make notes on the tests which cannot be included as is, but might be modified without affecting the current functionality; and those which should be copied and made generic. These additional changes do not have to be implemented right away, for the initial version of the test pack it is enough to have the list of existing files; but it makes no sense to go through the tests later again._
            {quote}
            _Rough estimation: 1 min per file, 3.5K files in mysql-test, 3500 min => ~60 hours. Realistically can be more._

            * if important gaps in coverage are noticed, add the tests to fill them;
            _No estimation since the scope is unknown; in any case, if the gaps are not critical, it can be done later._

            * create a new folder under mysql-test, probably overlays;
            _Estimation: negligible_

            * create overlays/README explaining what to do with the overlay templates;
            _Estimation: 0.5 h_

            * create a new folder storage_engine under overlays (SE vendors will need to copy it under storage/<SE name>/ and rename into mysql-test).
            _Estimation: negligible_


            +If we decide to go with the 1st approach (overlays of existing test suites):+

            * for each test suite where we found valuable test cases, create a folder of the same name under overlays/storage_engine/;
            _Estimation: negligible_

            * under each overlay folder, create stubs for combinations, suite.opt, disabled.def files;
            _Estimation: 0.5 hours (need to add comments etc.)_

            * depending on the ratio between the total number of tests in the parent suite and the number of tests we selected in this suite, either add unneeded tests to the disabled.def file, or create stubs for have_<SE name>.inc and a wrapper for each needed test (e.g. if the parent suite contains 400 tests and we only need 10, it's easier to use the inclusive approach, but if we need 300, it might make more sense to disable the rest);
            {quote}
            However, since the first edition of the test pack most likely won't be complete (we might want to add more tests later), and since, as described before, propagation of new tests to already cloned sets of overlays might be problematic, we probably should in most cases go with the exclusive approach -- this way we at least keep the ability to add tests.
            {quote}
            _Rough estimation: highly depends on the resulting number of files, lets say 5h as a wild guess_


            +If we decide to go with the 2nd approach (overlay over a test suite with referenced tests):+

            * Implement the changes in MTR (I don't have LDD at the moment, but if we decide to go this way I could try to create a proof of concept);
            _Estimation: can't guess at this point, I can hardly imagine it would take more than a few hours, but I might be wrong_

            * create the new suite storage_engine under mysql-test/suite/
            _Estimation: negligible_

            * define useful categories which the selected tests can be split into;
            _Estimation: 0.5h (requires thinking)_

            * for each defined category, create a subsuite storage_engine/<category_name>
            _Estimation: negligible_

            * under each subsuite, create stubs for combinations, suite.opt files;
            _Estimation: 0.5h (comments etc.)_

            * for each subsuite, create suite.pm with the list of referenced tests (and normal tests if we added any).
            _Rough estimation: 30min per file, probably 5-6 subsuites => ~3h_

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            Since this set will not necessarily be a test suite in MTR terms, to avoid confusion, I will call it "[SE] *test pack*" (as opposed to "test suite" in MTR).

            *Test cases*

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            Existing MTR tests can be roughly divided into 3 categories:

            * True engine-independent tests
            Tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            ** main.mysql_protocols
            ** mtr.newcomb

            * Engine-specific tests
            There are numerous tests which cover functionality or behavior specific to a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            ** main.myisam-blob
            ** perfschema.checksum

            * Tests which can work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            ** main.bool
            ** rpl.bit

            * Some tests are a mix -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            ** main.select
            ** rpl.rpl_mixed_mixing_engines


            If we find gaps while going through existing tests, we might have to fill them by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test basic functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             
            *Preferable structure of the pack*

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them might contain tests using features which are not mandatory for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication tests can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            * storage_engine/basic
            * storage_engine/partitions
            * storage_engine/trx
            * storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            *Implementation*

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            *== Undesirable implementation possibilities ==*

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing, probably making further merges more difficult, etc. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            *== Acceptable implementation possibilities ==*

            I see two reasonable ways to create and provide the SE test pack to users.

            *==== 1. Set of overlays of existing test suites ====*

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template for include files and a wrapper for each desired test, or choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are disadvantages.

            * Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of disabled.def. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            * Usability
            This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file etc. As a side-effect, it will not be usable for release engines/builds as the packages do not have the required structure.

            * Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach (overlays over existing suites), we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay of the entire suite. It also means that the vendor will not be able to create any other overlay of the same suite.

            * Noise and misuse of disabled.def
            There are ~2800 test files in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2K tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.


            *==== 2. Overlay of a slice of tests ====*

            The main disadvantage of this approach is that it introduces yet another concept to already overcomplicated MTR, and requires implementation of it (although it should not be too big task).
            The draft names of the concept are "slice" or "referenced tests".

            Currently we can create suite.pm file which is able to define a list of tests to run, through re-defining list_cases subroutine. The subroutine returns the list of test case names as an array:
            ( 'test1', 'test2', ...).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.

            Suppose we would have another subroutine, list_referenced_cases. Instead of the array of names, it would return pairs (not necessarily a hash as the elements might be not unique), where the 1st element is a name of a test, and the 2nd element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

            MTR would read the list of references along with the 'normal' list, and would know that everything that is related to this referenced test (test file, result file, test-specific cnf/opt/combination files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

            There would be no merging or additional lookup places, only replacement. Lets take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

            Consider a test foo.test in the combination pair aa,bb, that is run in the overlay rty of the suite qwe, in other words, for the test that mtr prints as

            {noformat}
            qwe-rty.foo 'aa,bb' [ pass ]
            {noformat}

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

            Now, consider that we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite (the real location of the test 'foo').

            We want the result to be displayed as

            {noformat}
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]
            {noformat}

            And we want the components of the result file be still searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            That is, for the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).

            _I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a working version yet, so I'm just guessing._

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            - storage_engine/basic
            - storage_engine/transactions
            etc.

            Each subsuite would contain suite.pm which would define the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic/suite.pm:

            {noformat}
            ...
            list_referenced_cases {
               ..
               return ( ( 'bool', 'main' ), ( 'bit', 'rpl' ), ... )
            };
            list_cases {
               ..
               return ( 'new_test1', 'new_test2', ... )
            };
            ...
            {noformat}

            The suite could also contain suite.opt, combinations etc.

            {quote}
            _Note: some remaining technical questions with this solution:_
            _1. what to do with disabled.def files? On one hand, it's a suite-specific file, on the other hand, it would be much more meaningful to check if a test is disabled in the referenced folder._
            _2. if we configure references as pairs of (<test name>, <suite name>), it means we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce a test nickname, if it seemed necessary._
            {quote}

            This way everyone could use the test pack as a normal MTR suite, by just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow to run it as --suite=storage_engine).

            At the same time, we would provide a template of the overlay for this test suite, which could be used by storage vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would be still under our control.


            *What needs to be done*

            Personally, I find the 2nd approach from the previous section more solid, but since we might not agree on this, I will outline further actions for either of the two.

            +Whichever approach we choose, 1st or 2nd:+

            * Go through the existing test files (and sometimes include files) and decide, on per-case basis, which tests are valuable enough to include them into the SE test pack. It is tedious task as there are over 3K files altogether, but it has to be done.
            {quote}
            _Additionally, we need to make notes on the tests which cannot be included as is, but might be modified without affecting the current functionality; and those which should be copied and made generic. These additional changes do not have to be implemented right away, for the initial version of the test pack it is enough to have the list of existing files; but it makes no sense to go through the tests later again._
            {quote}
            _Rough estimation: 1 min per file, 3.5K files in mysql-test, 3500 min => ~60 hours. Realistically can be more._

            * if important gaps in coverage are noticed, add the tests to fill them;
            _No estimation since the scope is unknown; in any case, if the gaps are not critical, it can be done later._

            * create a new folder under mysql-test, probably overlays;
            _Estimation: negligible_

            * create overlays/README explaining what to do with the overlay templates;
            _Estimation: 0.5 h_

            * create a new folder storage_engine under overlays (SE vendors will need to copy it under storage/<SE name>/ and rename into mysql-test).
            _Estimation: negligible_


            +If we decide to go with the 1st approach (overlays of existing test suites):+

            * for each test suite where we found valuable test cases, create a folder of the same name under overlays/storage_engine/;
            _Estimation: negligible_

            * under each overlay folder, create stubs for combinations, suite.opt, disabled.def files;
            _Estimation: 0.5 hours (need to add comments etc.)_

            * depending on the ratio between the total number of tests in the parent suite and the number of tests we selected in this suite, either add unneeded tests to the disabled.def file, or create stubs for have_<SE name>.inc and a wrapper for each needed test (e.g. if the parent suite contains 400 tests and we only need 10, it's easier to use the inclusive approach, but if we need 300, it might make more sense to disable the rest);
            {quote}
            However, since the first edition of the test pack most likely won't be complete (we might want to add more tests later), and since, as described before, propagation of new tests to already cloned sets of overlays might be problematic, we probably should in most cases go with the exclusive approach -- this way we at least keep the ability to add tests.
            {quote}
            _Rough estimation: highly depends on the resulting number of files, lets say 5h as a wild guess_


            +If we decide to go with the 2nd approach (overlay over a test suite with referenced tests):+

            * Implement the changes in MTR (I don't have LDD at the moment, but if we decide to go this way I could try to create a proof of concept);
            _Estimation: can't guess at this point, I can hardly imagine it would take more than a few hours, but I might be wrong_

            * create the new suite storage_engine under mysql-test/suite/
            _Estimation: negligible_

            * define useful categories which the selected tests can be split into;
            _Estimation: 0.5h (requires thinking)_

            * for each defined category, create a subsuite storage_engine/<category_name>
            _Estimation: negligible_

            * under each subsuite, create stubs for combinations, suite.opt files;
            _Estimation: 0.5h (comments etc.)_

            * for each subsuite, create suite.pm with the list of referenced tests (and normal tests if we added any).
            _Rough estimation: 30min per file, probably 5-6 subsuites => ~3h_

            elenst Elena Stepanova made changes -
            Status Reopened [ 4 ] In Progress [ 3 ]
            elenst Elena Stepanova made changes -
            Description The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            Since this set will not necessarily be a test suite in MTR terms, to avoid confusion, I will call it "[SE] *test pack*" (as opposed to "test suite" in MTR).

            *Test cases*

            The new pack will mostly consist of existing MTR tests. No big changes to existing tests are planned in scope of this task.

            Existing MTR tests can be roughly divided into 3 categories:

            * True engine-independent tests
            Tests which do not use storage engines at all. While they should be safe to run with any storage engine, they do not add value to testing an engine, so they will not be included into the SE test pack.
            Examples:
            ** main.mysql_protocols
            ** mtr.newcomb

            * Engine-specific tests
            There are numerous tests which cover functionality or behavior specific to a certain engine. Obviously, these tests should not be included into the generic test pack.
            Examples:
            ** main.myisam-blob
            ** perfschema.checksum

            * Tests which can work with different engines
            These might be either tests which rely on the default storage engine (e.g. create tables without ENGINE clause), or use the engine as a variable (mostly $engine_type). These tests are our target category.
            Examples:
            ** main.bool
            ** rpl.bit

            * Some tests are a mix -- they might use some tables of a hardcoded engine and leave some flexible. They are also important as they allow to test a mix of engines in the workflow.
            Examples:
            ** main.select
            ** rpl.rpl_mixed_mixing_engines


            If we find gaps while going through existing tests, we might have to fill them by adding new tests. There will be simple cases, for example partitioning suite provides generic inc files to test basic functionality, but actual *.test files are all engine-specific. We will need a test file which either uses an external variable, or relies on the default storage engine.
            Detecting less obvious gaps will not be this easy, but it can be an ongoing process, extended beyond the first release of the SE test pack.
             
            *Preferable structure of the pack*

            Existing MTR test suites divide tests mainly according to the server functionality: there are replication, binlog suites etc. At the same time, each of them might contain tests using features which are not mandatory for a storage engine, e.g. savepoints. It correlates with the specifics of server testing, but not so much with engine testing. For testing an engine, it makes no sense to separate functionality which an engine does not have the power to decline. For example, binlog and replication tests can just as well be a part of the basic set of tests, because they must work somehow with any engine. On the other hand, an engine can declare that it does not support partitioning, or savepoints, or transactions in general. So, it would be convenient to split the tests into corresponding categories, something like
            * storage_engine/basic
            * storage_engine/partitions
            * storage_engine/trx
            * storage_engine/savepoints

            etc. In this case, when a vendor wants to use the test pack, they can simply ignore subsets for functionality their engine does not support.

            However, it might be difficult to achieve due to implementation problems (see below).


            *Implementation*

            For the end user (in this case storage engine vendor), the top-level representation of the test pack will be an overlay of one or several MTR test suites. The underlying test suites, however, can be implemented in different ways.

            *== Undesirable implementation possibilities ==*

            Since the SE test pack will mostly consist of already existing tests, the simplest way to create it would be to copy the test and result files to the new suite(s) (to rename some, if necessary), and to have an overlay of the new suite(s). However, duplicating files will make further maintenance so costly and error-prune that we won't consider this a viable option.

            The second easiest way would be to reshuffle existing tests, collecting the ones that we need for storage engine testing and moving them into separate suite(s). While it does not have the disadvantage of the previous approach, there are many others, such as damaging existing testing, probably making further merges more difficult, etc. For example, if we move a number of files from replication suite to a storage engine suite, we would not be able to run the same replication configurations and combinations without a considerable additional effort.

            *== Acceptable implementation possibilities ==*

            I see two reasonable ways to create and provide the SE test pack to users.

            *==== 1. Set of overlays of existing test suites ====*

            After we define the set of existing tests as described above in "Test cases" section, we will create an overlay for every test suite which contains at least one test that we have chosen for our test pack. Thus, we will end up with a set of overlays. For each overlay, depending on our preferences or the number of files in the parent suite (total vs selected), we will either use the inclusive approach and create a template for include files and a wrapper for each desired test, or choose the exclusive approach and will create a list of disabled tests (all but those that we chose for the suite) and a template of an option file.

            The advantage of this approach is that it can be used with the existing MTR instrumentation, no more changes are needed (except for maybe some bugfixes). However, there are disadvantages.

            * Maintenance
            Since overlays do not stack, to be able to use the test pack, a storage engine vendor will have to actually copy the set of overlays that we provided, and make changes on their copy.
            Then, whenever a new test is added to any existing suite (which has an overlay), if the exclusive approach was previously taken and we don't want the new test to become a part of the SE pack, the vendor will have to guess about it somehow and add the test to their version of disabled.def. If the inclusive approach was taken and we want the new test to become a part of the SE pack, then again, the vendor will need to find out about it and add the wrapper for the new test.

            * Usability
            This test pack will not be usable as a regular MTR suite (or a set of suites); in other words, it could not be run without copying it into the storage engine folder, creating an opt file etc. As a side-effect, it will not be usable for release engines/builds as the packages do not have the required structure.

            * Test pack structure
            As described in the section "Preferable structure", the existing test suite structure is not optimal for storage engine testing. However, if we use this approach (overlays over existing suites), we have no choice, we'll have to maintain the same exact structure: if we found a good generic test in perfschema, or maria, or any other test suite, we will have to create an overlay of the entire suite. It also means that the vendor will not be able to create any other overlay of the same suite.

            * Noise and misuse of disabled.def
            There are ~2800 test files in MTR test suites now. Lets say we choose 700 of them for the SE pack, spread across the most of suites. It means that we will either need to create 700 wrappers (which is not too difficult, just is not pretty), or that we'll need to put 2K tests permanently onto the 'disabled' lists, which, in turn, means they will show up on test runs and flood the test output. It will also make tracking the real disabled tests (those which were disabled temporarily due to errors) more difficult.


            *==== 2. Overlay of a slice of tests ====*

            The main disadvantage of this approach is that it introduces yet another concept to already overcomplicated MTR, and requires implementation of it (although it should not be too big task).
            The draft names of the concept are "slice" or "referenced tests".

            Currently we can create suite.pm file which is able to define a list of tests to run, through re-defining list_cases subroutine. The subroutine returns the list of test case names as an array:
            ( 'test1', 'test2', ...).
            Test files, result files and other auxiliary files are looked up in the suite directory (and corresponding subfolders), as usual.

Suppose we had another subroutine, list_referenced_cases. Instead of an array of names, it would return pairs (not necessarily a hash, as the elements might not be unique), where the 1st element is the name of a test, and the 2nd element is the name of the suite where the test is located:
            ( ( 'test1', 'main' ), ( 'test2', 'rpl' ), ... )

            MTR would read the list of references along with the 'normal' list, and would know that everything that is related to this referenced test (test file, result file, test-specific cnf/opt/combination files, etc.) should in fact be read from the referenced suite folder, while suite-specific files are still read from the current suite folder.

There would be no merging or additional lookup places, only replacement. Let's take the example of result file lookup from http://kb.askmonty.org/en/mtr-auxiliary-files.

            Existing behavior:

Consider a test foo.test with the combination pair aa,bb, run in the overlay rty of the suite qwe; in other words, a test that MTR prints as

            {noformat}
            qwe-rty.foo 'aa,bb' [ pass ]
            {noformat}

            Any of the following 15 file names can be used:

               1. rty/r/foo,aa,bb.result
               2. rty/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. rty/r/foo,aa.result
               6. rty/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. rty/r/foo,bb.result
              10. rty/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. rty/r/foo.result
              14. rty/r/foo.rdiff
              15. qwe/r/foo.result

            Now, consider that we run the test 'foo' in the overlay 'uvw' of the suite 'opr', and the suite 'opr' references the test 'foo' as
            ( 'foo', 'qwe' )
            That is, 'uvw' is the overlay, 'opr' is the parent, and 'qwe' is the referenced suite (the real location of the test 'foo').

            We want the result to be displayed as

            {noformat}
            opr[qwe]-uvw.foo 'aa,bb' [ pass ]
            {noformat}

And we want the components of the result file to still be searched in 15 locations:

               1. uvw/r/foo,aa,bb.result
               2. uvw/r/foo,aa,bb.rdiff
               3. qwe/r/foo,aa,bb.result
               4. qwe/r/foo,aa,bb.rdiff
               5. uvw/r/foo,aa.result
               6. uvw/r/foo,aa.rdiff
               7. qwe/r/foo,aa.result
               8. qwe/r/foo,aa.rdiff
               9. uvw/r/foo,bb.result
              10. uvw/r/foo,bb.rdiff
              11. qwe/r/foo,bb.result
              12. qwe/r/foo,bb.rdiff
              13. uvw/r/foo.result
              14. uvw/r/foo.rdiff
              15. qwe/r/foo.result

            That is, for the purpose of test-specific lookups, the referenced suite fully replaces the parent suite folder (while for suite-specific search the parent suite folder is still used).

            _I suppose it could be done by adding another member (referenced suite) to My::Test and populating it accordingly. Then, if the value is defined, in some cases it would be used instead of the normal suite value. However, I didn't get to create a working version yet, so I'm just guessing._

            This addition would allow us to implement the test pack as an actual MTR test suite, with subsuites according to functional areas:
            - storage_engine/basic
            - storage_engine/transactions
            etc.

Each subsuite would contain a suite.pm which would define the list of referenced tests (and, optionally, the list of 'normal' tests if we want to add any):
            storage_engine/basic/suite.pm:

            {noformat}
            ...
sub list_referenced_cases {
   ..
   # each pair is an array ref: [ <test name>, <suite where it lives> ]
   return ( [ 'bool', 'main' ], [ 'bit', 'rpl' ], ... );
}
sub list_cases {
   ..
   return ( 'new_test1', 'new_test2', ... );
}
            ...
            {noformat}

            The suite could also contain suite.opt, combinations etc.
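For instance, the suite.opt template could simply carry server options common to the whole suite; hypothetical contents (the --loose- prefix keeps an option non-fatal if the server does not recognize it):

{noformat}
--default-storage-engine=MyISAM
--loose-skip-innodb
{noformat}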

            {quote}
            _Note: some remaining technical questions with this solution:_
            _1. what to do with disabled.def files? On one hand, it's a suite-specific file, on the other hand, it would be much more meaningful to check if a test is disabled in the referenced folder._
            _2. if we configure references as pairs of (<test name>, <suite name>), it means we can end up with duplicate test names. I would consider it a limitation for now (if both tests are desperately needed, they can be placed into different subsuites). Later we could also introduce a test nickname, if it seemed necessary._
            {quote}

This way everyone could use the test pack as a normal MTR suite, by just adding the default storage engine to the command line and launching --suite=storage_engine/basic,storage_engine/transactions etc. (that's how it works now with nested suites, although maybe we could improve this and allow running it as --suite=storage_engine).
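A run could then look roughly like this (an assumed invocation; the engine name is a placeholder):

{noformat}
perl mysql-test-run.pl --suite=storage_engine/basic,storage_engine/transactions \
  --mysqld=--default-storage-engine=example_engine
{noformat}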

At the same time, we would provide a template of the overlay for this test suite, which could be used by storage engine vendors. There they would add their rdiffs, disable unwanted tests, etc., but the base contents of the test pack would still be under our control.


            *What needs to be done*

            Personally, I find the 2nd approach from the previous section more solid, but since we might not agree on this, I will outline further actions for either of the two.

            +Whichever approach we choose, 1st or 2nd:+

* Go through the existing test files (and sometimes include files) and decide, on a per-case basis, which tests are valuable enough to include in the SE test pack. It is a tedious task, as there are over 3K files altogether, but it has to be done.
{quote}
_Additionally, we need to make notes on the tests which cannot be included as is, but might be modified without affecting the current functionality; and those which should be copied and made generic. These additional changes do not have to be implemented right away; for the initial version of the test pack it is enough to have the list of existing files; but it makes no sense to go through all the tests again later._
{quote}
_Rough estimation: 1 min per file, 3.5K files in mysql-test, 3500 min => ~60 hours. Realistically it can be more._

            * if important gaps in coverage are noticed, add the tests to fill them;
            _No estimation since the scope is unknown; in any case, if the gaps are not critical, it can be done later._

            * create a new folder under mysql-test, probably overlays;
            _Estimation: negligible_

            * create overlays/README explaining what to do with the overlay templates;
            _Estimation: 0.5 h_

* create a new folder storage_engine under overlays (SE vendors will need to copy it under storage/<SE name>/ and rename it to mysql-test).
            _Estimation: negligible_


            +If we decide to go with the 1st approach (overlays of existing test suites):+

            * for each test suite where we found valuable test cases, create a folder of the same name under overlays/storage_engine/;
            _Estimation: negligible_

            * under each overlay folder, create stubs for combinations, suite.opt, disabled.def files;
            _Estimation: 0.5 hours (need to add comments etc.)_

            * depending on the ratio between the total number of tests in the parent suite and the number of tests we selected in this suite, either add unneeded tests to the disabled.def file, or create stubs for have_<SE name>.inc and a wrapper for each needed test (e.g. if the parent suite contains 400 tests and we only need 10, it's easier to use the inclusive approach, but if we need 300, it might make more sense to disable the rest);
            {quote}
            However, since the first edition of the test pack most likely won't be complete (we might want to add more tests later), and since, as described before, propagation of new tests to already cloned sets of overlays might be problematic, we probably should in most cases go with the exclusive approach -- this way we at least keep the ability to add tests.
            {quote}
_Rough estimation: highly depends on the resulting number of files, let's say 5h as a wild guess_


            +If we decide to go with the 2nd approach (overlay over a test suite with referenced tests):+

            * Implement the changes in MTR (I don't have LDD at the moment, but if we decide to go this way I could try to create a proof of concept);
            _Estimation: can't guess at this point, I can hardly imagine it would take more than a few hours, but I might be wrong_

            * create the new suite storage_engine under mysql-test/suite/
            _Estimation: negligible_

            * define useful categories which the selected tests can be split into;
            _Estimation: 0.5h (requires thinking)_

            * for each defined category, create a subsuite storage_engine/<category_name>
            _Estimation: negligible_

            * under each subsuite, create stubs for combinations, suite.opt files;
            _Estimation: 0.5h (comments etc.)_

            * for each subsuite, create suite.pm with the list of referenced tests (and normal tests if we added any).
            _Rough estimation: 30min per file, probably 5-6 subsuites => ~3h_

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            *The problem and solution*

            Existing MTR tests are not very suitable for the purpose of 3rd-party storage engine testing in several ways.

1. Due to the nature of the mysqltest tool, they have very strict requirements in regard to test output. Since a new storage engine is in many cases likely to produce slightly different output (even if it's not important in terms of results), the tests start failing, which causes a lot of false positives.

To solve it, we will use the functionality developed in scope of MDEV-30. The new suite will contain test files and base result files, while every engine will only need rdiffs for the tests where its output is different from the base result (but still valid). The base results will be synthetic, meaning that they might come from different engines, whichever engine provides the most correct-looking result. So, every engine will most likely require some rdiff files, but their number will vary. For example, MyISAM will need very few rdiff files, while the CSV engine will require many more, because it does not support NULL-able columns, and its table creation/modification statements, as well as SHOW TABLE output, will be different. Still, this way a storage engine vendor would not need to modify test files or result files.

2. Existing MTR test suites are massive and are mostly created to test various regressions. They are not organized in a way that makes it easy to choose a subset of tests essential for a storage engine. So, even to test a relatively small functional scope, one needs to run a long set of tests, which can be quite inefficient.
            Note: There is the test suite ''engines'' provided with MySQL which is supposed to focus on engine capabilities, but it is also long, contains tests which are not related to engines (e.g. create/drop an empty database), and suffers from the other problems described here.

            We will create a new test suite with new tests, which we will try to make as short and engine-focused as possible. The tests are not supposed to provide extensive and deep functional testing, but to check the basic functionality, typical for MySQL storage engines.

3. Currently most tests use combinations of various features, even if not all of them are strictly necessary in scope of the exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail, so ALTER TABLE will not be tested at all; or, if a test contains at least one UPDATE statement, and the engine does not support UPDATE (like ARCHIVE), the whole test will, again, fail.

We will create a set of very small, almost atomic, basic tests, which will use as little feature combination as possible. This basic set (the main set for the suite) is supposed to be run first, and help the storage engine vendor determine the currently available feature set.
Small tests create some overhead compared to bigger ones, but considering the benefit of them being applicable in many more cases, the cost does not seem high.
On top of this, we will have more complicated tests as a sub-suite (a feature integration suite), where we will provide comments about which features are expected, so if the test fails, the vendor can easily understand why it does not work. For example, we will have a test for a very basic SELECT and another test for index creation; then, if both of them pass, it makes sense to try a bigger select test which includes various index combinations.
            We will also have other sub-suites which will cover some big parts of functionality which not all engines are expected to support: transactions, partitioning, etc.

4. Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
CREATE TABLE t (i INT)
it will fail for the CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Currently, there is no way to fix that apart from copying the tests and modifying them manually.

We will provide the storage engine vendor/maintainer with the possibility to define such basic options, which we will use in table creation. Along with the engine name, they will be able to set (optionally) column properties and table properties specific to the engine. In most cases it will be enough to keep them empty, as they are by default.

            *Tier 1*

Simple atomic tests for basic functionality: create table, table properties, alter table, insert, update, delete, replace, select, index, index types, check table, analyze table, etc. These tests are supposed to be very short, mostly several statements each, to cover the syntax. Of course, most tests still require at least several basic features, e.g. CREATE and DROP TABLE.

            *Tier 2*

Tests which combine different functionality: e.g. to test simple INSERT, we do not need any indexes, but to test INSERT IGNORE, we do. So, the insert test and index tests are part of tier-1, and insert_ignore only makes sense if both of them work.

            *Tier 3*

            More complex functionality: transactions (including locking, XA, etc.), partitioning, binary logging/replication, fulltext search, etc.

            *Test structure*

            The test suite should be organized accordingly. All tier-1 tests will be the main part of the suite, living directly in the suite directory. Tier-3 and partially tier-2 tests will be placed in sub-suites (there should not be too many of those, probably 3-4: transactions, partitions, feature_integration, replication).

We will also provide templates for define_engine.inc and define_engine.opt files, a disabled.def file which should later be customized for the engine, and a README file. All of these will be located in the test suite folder.

Summary: Generic storage engine test pack -> Generic storage engine test suite
            elenst Elena Stepanova made changes -
Description:

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            *The problem and solution*

            Existing MTR tests are not very suitable for the purpose of 3rd-party storage engine testing in several ways.

1. Due to the nature of the mysqltest tool, they have very strict requirements in regard to test output. Since a new storage engine is in many cases likely to produce slightly different output (even if it's not important in terms of results), the tests start failing, which causes a lot of false positives.

To solve it, we will use the functionality developed in scope of MDEV-30. The new suite will contain test files and base result files, while every engine will only need rdiffs for the tests where its output is different from the base result (but still valid). The base results will be synthetic, meaning that they might come from different engines, whichever engine provides the most correct-looking result. So, every engine will most likely require some rdiff files, but their number will vary. For example, MyISAM will need very few rdiff files, while the CSV engine will require many more, because it does not support NULL-able columns, and its table creation/modification statements, as well as SHOW TABLE output, will be different. Still, this way a storage engine vendor would not need to modify test files or result files.

2. Existing MTR test suites are massive and are mostly created to test various regressions. They are not organized in a way that makes it easy to choose a subset of tests essential for a storage engine. So, even to test a relatively small functional scope, one needs to run a long set of tests, which can be quite inefficient.
            Note: There is the test suite ''engines'' provided with MySQL which is supposed to focus on engine capabilities, but it is also long, contains tests which are not related to engines (e.g. create/drop an empty database), and suffers from the other problems described here.

            We will create a new test suite with new tests, which we will try to make as short and engine-focused as possible. The tests are not supposed to provide extensive and deep functional testing, but to check the basic functionality, typical for MySQL storage engines.

3. Currently most tests use combinations of various features, even if not all of them are strictly necessary in scope of the exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail, so ALTER TABLE will not be tested at all; or, if a test contains at least one UPDATE statement, and the engine does not support UPDATE (like ARCHIVE), the whole test will, again, fail.

We will create a set of very small, almost atomic, basic tests, which will use as little feature combination as possible. This basic set (the main set for the suite) is supposed to be run first, and help the storage engine vendor determine the currently available feature set.
Small tests create some overhead compared to bigger ones, but considering the benefit of them being applicable in many more cases, the cost does not seem high.
On top of this, we will have more complicated tests (feature integration tests). These tests will call include files which will check certain variables, e.g. $support_keys, $support_update, etc. All variables are set to TRUE by default, but a vendor can unset them in the define_engine.inc file. They are not required to do so, but it will help to minimize configuration effort -- e.g. if an engine does not support indexes, instead of checking results and disabling dozens of tests one by one through disabled.def, the maintainer can instead set $support_keys=0, and all tests which use indexes will be skipped. If it's not clear at the beginning which features are supported and which are not, the tests can be run with all defaults and adjusted based on the results.

            We will also have other sub-suites which will cover some big parts of functionality which not all engines are expected to support: transactions, partitioning, etc.

4. Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
CREATE TABLE t (i INT)
it will fail for the CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Currently, there is no way to fix that apart from copying the tests and modifying them manually.

We will provide the storage engine vendor/maintainer with the possibility to define such basic options, which we will use in table creation. Along with the engine name, they will be able to set (optionally) column properties and table properties specific to the engine. In most cases it will be enough to keep them empty, as they are by default.

            *Tier 1*

Simple atomic tests for basic functionality: create table, table properties, alter table, insert, update, delete, replace, select, index, index types, check table, analyze table, etc. These tests are supposed to be very short, mostly several statements each, to cover the syntax. Of course, most tests still require at least several basic features, e.g. CREATE and DROP TABLE.

            *Tier 2*

Tests which combine different functionality: e.g. to test simple INSERT, we do not need any indexes, but to test INSERT IGNORE, we do. So, the insert test and index tests are part of tier-1, and insert_ignore only makes sense if both of them work.

            *Tier 3*

            More complex functionality: transactions (including locking, XA, etc.), partitioning, binary logging/replication, fulltext search, etc.

            *Test structure*

            The test suite should be organized accordingly. All tier-1 tests will be the main part of the suite, living directly in the suite directory. Tier-3 and partially tier-2 tests will be placed in sub-suites (there should not be too many of those, probably 3-4: transactions, partitions, feature_integration, replication).

We will also provide templates for define_engine.inc and define_engine.opt files, a disabled.def file which should later be customized for the engine, and a README file. All of these will be located in the test suite folder.

            elenst Elena Stepanova made changes -
Description:

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            *The problem and solution*

            Existing MTR tests are not very suitable for testing of 3rd-party storage engines.

1. Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            To solve it, we will use the functionality developed in scope of MDEV-30. The new suite will contain test files and base result files, while engines will only need rdiffs for the tests where the output is different from the base result. The set of base results will be synthetic, meaning that they might come from different engines, whichever engine provides the most generic result. Thus every engine will most likely require some rdiff files, but their number will be different.

2. Existing MTR test suites are massive and are mostly created to test regressions. They are not organized in a way that makes it easy to execute a subset of tests essential for a storage engine. So, even to test a relatively small functional scope, one needs to run a long set of tests, which is inefficient.
Note: There is the test suite ''engines'' provided with MySQL packages, which is supposed to focus on engine capabilities, but it is also long, contains tests not related to engines (e.g. create/drop an empty database), and suffers from other shortcomings described here.

            We will create a new set of tests, which we will try to make as short and engine-focused as possible. The tests are not supposed to provide extensive and deep functional testing, but to check the basic functionality, typical for MySQL storage engines.

3. Currently most tests use combinations of various features, even if not all of them are strictly necessary in scope of the exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail, so ALTER TABLE will not be tested at all.

We will create a set of variables which will define the configuration of the engine (and hence the test suite). Each variable will indicate whether the corresponding functional area is supported (or is supposed to be supported) or not: $support_update, $support_keys, $support_nullable_columns, $support_fulltext_search, etc.
When we need to use a feature in a test, we will check the value of the corresponding variable. If we find out that the feature is not supported, and it's a target feature of the test, the entire test will be skipped. If the feature is just used to test something else, the part of the test which requires it will be disabled, and a corresponding message will be printed. Thus, even if an engine does not support all of the "service" features, the essential part of the test can still be executed for it.

            Example.

            Suppose we are testing TRUNCATE TABLE.
            First, we check whether truncate is configured as a supported feature.
            If so, we run a basic test: create a table, insert rows, truncate it, etc.
            Then, we also want to test that truncate resets auto-increment value for a table, as expected.
            We check that the engine supports auto-increment columns.
            If it does, we create a table with an auto-increment column and test how the value gets reset.
            If auto-increment is not supported, we print a message about it and proceed to the next part of the test.
            When the test is executed for an engine which does not support auto-increment columns, it will fail, but not with an error while creating a table (as it would happen if we didn't do the check), but with a simple mismatch for which the engine maintainer just needs to add an rdiff; and the rest of the test will still be executed for the engine.

All variables will be set to TRUE by default. Users (engine maintainers) don't *have to* modify them; it is just a convenience feature to increase test coverage by allowing tests to be executed partially, and to simplify maintenance (e.g. for an engine which does not support indexes, instead of disabling dozens of index-using tests one by one, it is easier to set $support_keys to FALSE (0)).

            Apart from basic functional tests, we will also have sub-suites which will cover some big parts of functionality which not all engines are expected to support: transactions, partitioning, etc.

4. Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
CREATE TABLE t (i INT)
it will fail for the CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Currently, there is no way to fix that apart from copying the tests and modifying them manually.

We will provide the storage engine vendor/maintainer with the possibility to define such basic options, which we will use in table creation. Along with the engine name, they will be able to set (optionally) column properties and table properties specific to the engine. In most cases it will be enough to keep them empty, as they are by default.

            *Tier 1*

Simple atomic tests for basic functionality: create table, table properties, alter table, insert, update, delete, replace, select, index, index types, check table, analyze table, etc. These tests are supposed to be short, mostly several statements each, to cover the syntax. Of course, most tests still require at least several basic features, e.g. CREATE and DROP TABLE.

            *Tier 2*

Tests which combine different functionality: e.g. to test simple INSERT, we do not need any indexes, but to test INSERT IGNORE, we do. So, the insert test and index tests are part of tier-1, and insert_ignore only makes sense if both of them work.

            *Tier 3*

            More complex functionality: transactions (including locking, XA, etc.), partitioning, binary logging/replication, fulltext search, etc.

            *Test structure*

            The test suite should be organized accordingly. All tier-1 tests will be the main part of the suite, living directly in the suite directory. Tier-3 and partially tier-2 tests will be placed in sub-suites (there should not be too many of those, probably 3-4: transactions, partitions, replication, ...).

We will also provide templates for define_engine.inc and suite.opt files, a disabled.def file which should later be customized for the engine, and a README file. All of these will be located in the test suite folder.

elenst Elena Stepanova added a comment -

Raw approximation of current coverage on my local tree, using the example of running the suite on MyISAM. It might be inaccurate, but it gives the idea (mainly for my future reference).

The script runs gcov with function summaries for handler.cc, handler.h, ha_<engine>.cc, ha_<engine>.h, and for each function stores the number of calls as reported by gcov. If the function names and parameter lists are identical in handler and ha_<engine>, the function is considered the same; so if a function is called in ha_<engine> but not in the handler, it is not reported in 'Not called'.

Summary from the attached list:
Called in handler (135)
Called in myisam (74)
Not called (122)

"Not called" are the ones I will be working on next.
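The per-function summaries presumably come from invocations of this kind (a guess at the mechanics, not the actual script; the object directories are placeholders):

{noformat}
gcov -f -o <build_dir>/sql handler.cc
gcov -f -o <build_dir>/storage/myisam ha_myisam.cc
{noformat}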
            elenst Elena Stepanova made changes -
            Attachment handler.coverage [ 10900 ]
            elenst Elena Stepanova made changes -
Description:

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.

            *The problem and solution*

            Existing MTR tests are not very suitable for testing of 3rd-party storage engines.

1. Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            To solve it, we will use the functionality developed in scope of MDEV-30. The new suite will contain test files and base result files, while engines will only need rdiffs for the tests where the output is different from the base result. The set of base results will be synthetic, meaning that they might come from different engines, whichever engine provides the most generic result. Thus every engine will most likely require some rdiff files, although their number will be different.

2. Existing MTR test suites are massive and are mostly created to test regressions. They are not organized in a way that makes it easy to execute a subset of tests essential for a storage engine. So, even to test a relatively small functional scope, one needs to run a long set of tests, which is inefficient.
Note: There is the test suite ''engines'' provided with MySQL packages, which is supposed to focus on engine capabilities, but it is also long, contains tests not related to engines (e.g. create/drop an empty database), and suffers from other shortcomings described here.

            We will create a new set of tests, which we will try to make as short and engine-focused as possible. The tests are not supposed to provide extensive and deep functional testing, but to check the basic functionality, typical for MySQL storage engines.

3. Currently most tests use combinations of various features, even if not all of them are strictly necessary in scope of the exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail, so ALTER TABLE will not be tested at all.

We will create a set of variables which will define the configuration of the engine (and hence the test suite). Each variable will indicate whether the corresponding functional area is supported (or is supposed to be supported) or not: $support_update, $support_keys, $support_nullable_columns, $support_fulltext_search, etc.
When we need to use a feature in a test, we will check the value of the corresponding variable. If we find out that the feature is not supported, and it's a target feature of the test, the entire test will be skipped. If the feature is just used to test something else, the part of the test which requires it will be disabled, and a corresponding message will be printed. Thus, even if an engine does not support all of the "service" features, the essential part of the test can still be executed for it.

            Example.

            Suppose we are testing TRUNCATE TABLE.
            First, we check whether truncate is configured as a supported feature.
            If so, we run a basic test: create a table, insert rows, truncate it, etc.
            Then, we also want to test that truncate resets auto-increment value for a table, as expected.
            We check that the engine supports auto-increment columns.
            If it does, we create a table with an auto-increment column and test how the value gets reset.
            If auto-increment is not supported, we print a message about it and proceed to the next part of the test.
            When the test is executed for an engine which does not support auto-increment columns, it will fail, but not with an error while creating a table (as it would happen if we didn't do the check), but with a simple mismatch for which the engine maintainer just needs to add an rdiff; and the rest of the test will still be executed for the engine.
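A minimal sketch of such a test fragment, assuming $ENGINE comes from define_engine.inc; $support_auto_increment is a hypothetical flag name, and the auto-increment part also presumes key support:

{noformat}
eval CREATE TABLE t1 (a INT NOT NULL) ENGINE=$ENGINE;
INSERT INTO t1 VALUES (1),(2);
TRUNCATE TABLE t1;
SELECT COUNT(*) FROM t1;
DROP TABLE t1;

if ($support_auto_increment)
{
  # Check that TRUNCATE resets the auto-increment counter
  eval CREATE TABLE t1 (a INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=$ENGINE;
  INSERT INTO t1 (a) VALUES (NULL),(NULL);
  TRUNCATE TABLE t1;
  INSERT INTO t1 (a) VALUES (NULL);
  SELECT a FROM t1;
  DROP TABLE t1;
}
if (!$support_auto_increment)
{
  --echo # Auto-increment is not supported, skipping this part of the test
}
{noformat}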

All variables will be set to TRUE by default. Users (engine maintainers) don't *have to* modify them; it is just a convenience feature to increase test coverage by allowing tests to be executed partially, and to simplify maintenance (e.g. for an engine which does not support indexes, instead of disabling dozens of index-using tests one by one, it is easier to set $support_keys to FALSE (0)).

            Apart from basic functional tests, we will also have sub-suites which will cover some big parts of functionality which not all engines are expected to support: transactions (''trx'' subsuite), partitioning (''partitions'' subsuite), and maybe some more.

            4. Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for the CSV engine, because CSV does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Currently, there is no way to fix that apart from copying the tests and modifying them manually.
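            For illustration, this is roughly what happens (the error text is quoted from a recent server and may vary between versions):
            {noformat}
            CREATE TABLE t (i INT) ENGINE=CSV;
            ERROR 42000: The storage engine for the table doesn't support nullable columns

            # With a default column option of NOT NULL the generated statement works:
            CREATE TABLE t (i INT NOT NULL) ENGINE=CSV;
            DROP TABLE t;
            {noformat}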

            We will provide the storage engine vendor/maintainer with the ability to define such basic options, which we will then use in table creation. Along with the engine name, they will be able to set (optionally) column properties and table properties specific to the engine. In most cases it will be enough to keep them empty, as they are by default.

            In an even more complicated scenario, additional actions need to be performed in order to create a table properly; e.g. to create a functional MERGE table, we need to create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly. To allow the test suite to work for such engines too, we move CREATE and ALTER TABLE operations into include files which all tests call whenever needed. A user can redefine create_table.inc and alter_table.inc in their overlay (copying the default files and modifying them as needed).
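            A rough sketch of what an overlay {{create_table.inc}} could do for MERGE ({{$table_name}} and {{$column_list}} are illustrative placeholders, not necessarily the suite's actual interface):
            {noformat}
            # Create the underlying MyISAM table first,
            # then build the MERGE table on top of it
            --eval CREATE TABLE merge_base ($column_list) ENGINE=MyISAM
            --eval CREATE TABLE $table_name ($column_list) ENGINE=MERGE UNION(merge_base) INSERT_METHOD=LAST
            {noformat}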


            *Tier 1*

            Simple atomic tests for basic functionality: create table, table properties, alter table, insert, update, delete, replace, select, index, index types, check table, analyze table, etc. These tests are supposed to be short, mostly several statements each, to cover the syntax. Of course, most tests still require at least several basic features, e.g. CREATE and DROP TABLE.

            *Tier 2*

            Tests which combine different functionality: e.g. to test simple INSERT, we do not need any indexes, but to test INSERT IGNORE, we do. So, the insert and index tests are part of tier 1, and insert_ignore only makes sense if both of them work.
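            For illustration (the engine name is a placeholder): INSERT IGNORE only differs from plain INSERT when a duplicate key error can occur, which requires a unique key:
            {noformat}
            CREATE TABLE t1 (a INT, UNIQUE KEY (a)) ENGINE=OurEngine;
            INSERT INTO t1 VALUES (1);
            # Without the unique key there would be nothing to ignore
            INSERT IGNORE INTO t1 VALUES (1);
            DROP TABLE t1;
            {noformat}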

            *Tier 3*

            More complex functionality: transactions (including locking, XA, etc.), partitioning, binary logging/replication, fulltext search, etc.

            *Test structure*

            The test suite should be organized accordingly. All tier-1 tests will be the main part of the suite, living directly in the suite directory. Tier-3 and partially tier-2 tests will be placed in sub-suites: ''trx'' for normal and XA transactions, ''partitions'' for partitioning tests. Binary logging will be tested via combinations.

            We will also provide templates for the define_engine.inc and suite.opt files, a disabled.def file which should later be customized for the engine, and a README file. All of these will be located in the test suite folder.

            ====================

            h3. Usage instructions

            We assume the storage engine is located under {{<basedir>/storage/<engine>}} and has been built.

            * create {{<basedir>/storage/<engine>/mysql-test}} folder if it does not exist yet

            * create {{<basedir>/storage/<engine>/mysql-test/storage_engine}} folder

            * copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/<engine>/mysql-test/storage_engine/define_engine.inc}}

            * edit the copied version of {{define_engine.inc}}:
            ** mandatory: set the {{ENGINE}} variable to the proper engine name (so that it can be found by name in {{information_schema.engines}});
            ** optional: if you know that your engine requires specific column or table options, set them in {{$default_col_opts}} (for non-indexed columns), {{$default_col_indexed_opts}} (for indexed columns), and {{$default_tbl_opts}} (table options), e.g.:
            {noformat}
                  let $default_col_opts = /*!NOT NULL*/;
                  let $default_col_indexed_opts = /*!NOT NULL*/;
                  let $default_tbl_opts = /*!INSERT_METHOD=LAST*/;
            {noformat}
                Keep them inside the comment so they are correctly recognized and masked in test output, otherwise you will have to deal with mismatches.
            ** optional: look through {{$support_*}} variables and set to 0 those that correspond to features which are not supposed to be supported (at the moment, or ever) -- it will simplify investigating failures;
            ** optional: set {{$support_transactions}}, {{$support_xa}} and {{$support_savepoints}} variables to correct values if you know them (they are commented at the end of the template file);
            ** conditional: if the engine requires some pre-configuration in order to execute the test flow properly, e.g. creation of a database, add it at the end of {{define_engine.inc}} as normal SQL.
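              A condensed sketch of an edited {{define_engine.inc}} (the values and the trailing SQL are illustrative, and the exact set of variables is whatever the template actually contains):
            {noformat}
              let $ENGINE = OurEngine;
              let $default_tbl_opts = /*!INSERT_METHOD=LAST*/;
              let $support_fulltext_search = 0;
              let $support_xa = 0;
              # Engine-specific pre-configuration, if any, goes at the end:
              CREATE DATABASE IF NOT EXISTS ourengine_aux;
            {noformat}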
             
            * conditional: if the engine is not loaded by default and/or requires additional options to start, copy {{<basedir>/mysql-test/suite/storage_engine/suite.opt}} to {{<basedir>/storage/<engine>/mysql-test/storage_engine/suite.opt}} and edit it, adding options as {noformat}--<option_name>[=<option_value>]{noformat} (e.g. if the engine comes as a plugin, you will at least add {{--plugin-load=<library name>}}). If the engine has its own lock timeouts, set them to low values; this will decrease the duration of some tests;
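              For example, a {{suite.opt}} for a plugin engine might contain (the library and option names here are hypothetical):
            {noformat}
              --plugin-load=ha_ourengine
              --loose-ourengine-lock-wait-timeout=5
            {noformat}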
              
            * conditional: if the engine requires some additional actions in order to create a table, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} to {{<basedir>/storage/<engine>/mysql-test/storage_engine/create_table.inc}} and edit the file as needed. E.g. for MERGE tables, we need to create a MyISAM table and then use it in the UNION list for the MERGE table; do the same for {{alter_table.inc}} if needed;
              
            * conditional: if you added creation of any objects in {{define_engine.inc}}, {{create_table.inc}} or {{alter_table.inc}}, copy the {{<basedir>/mysql-test/suite/storage_engine/cleanup_engine.inc}} file to {{<basedir>/storage/<engine>/mysql-test/storage_engine/cleanup_engine.inc}} and edit it accordingly, removing all created objects;
              
            * {{cd}} to {{<basedir>/mysql-test}} and run
            {noformat}perl mysql-test-run.pl --suite=storage_engine-<engine> 1st{noformat}
              If the test fails with anything other than "Result content mismatch" or "Result length mismatch", inspect the error and modify the configuration above, then repeat the test.
              If the test fails with either "Result content mismatch" or "Result length mismatch", inspect the difference; if the difference is not expected, keep fixing the configuration. If the difference is expected, execute
            {noformat}diff -u <basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/<engine>/mysql-test/storage_engine/1st.rdiff {noformat}
              Then run the test again; it should pass.

            * after the 1st test case passes, you can execute the whole test suite (its main part). Run
            {noformat}perl mysql-test-run.pl --suite=storage_engine-<engine>{noformat}

              Analyze the results the same way you did for the 1st test: if a test fails with an error other than a mismatch, fix the configuration; in the worst case, create a {{<basedir>/storage/<engine>/mysql-test/storage_engine/disabled.def}} file and list the test as
            {noformat}<test name> : <reason for disabling it>{noformat}
              If a test case fails with a mismatch, and the mismatch is expected, create an rdiff file for it.
              
              Examples of expected mismatches:
              * an engine might not support a feature without failing on it; e.g. on executing {{REPAIR TABLE}}, for some engines the output will say that repair is not supported. In this case you can either turn off the feature through the corresponding {{$support_*}} variable if there is one, or disable the test through disabled.def, or create an rdiff file. In general, if the execution time is not critical, the recommended approach is to create an rdiff and let the test run, as disabling it might reduce the coverage.
              
            * when you are satisfied with the results of the storage_engine suite, and if the engine supports partitioning, create the folder {{<basedir>/storage/<engine>/mysql-test/storage_engine/partitions}}, copy your suite.opt and edit it if needed, and run the subsuite as
            {noformat}perl mysql-test-run.pl --suite=storage_engine/partitions-<engine>{noformat}
              and analyze the results.
              
            * do the same with the trx subsuite (create the folder {{<basedir>/storage/<engine>/mysql-test/storage_engine/trx}}, copy and edit suite.opt), and run it as
            {noformat}perl mysql-test-run.pl --suite=storage_engine/trx-<engine>{noformat}
              
            elenst Elena Stepanova made changes -
            Status In Progress [ 3 ] Open [ 1 ]

            In line with what was agreed on the Maria call 2012-06-05, the next steps are:

            • Elena will add some configurations
            • As soon as 5.5.25 is released Axel will push this suite to 5.5.26.
            • Monty will add the suite also to 10.0.0. He wanted to move around/add some tests anyway.
            • Once 5.5.26 goes live Colin will blog about this feature and notify all the storage engine developers about the availability of this tool.
            elenst Elena Stepanova made changes -
            Description
              

            Example (how a test runs through an intermediate failure):

            Suppose we are testing that a table option is supported by the engine (merely accepted and stored); let's say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER, so we add an ALTER TABLE statement where the option is modified.

            Now, if we try to run such a test with the FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for the FEDERATED engine can see the difference, decide that it is reasonable (since ALTER is not supposed to work), and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to the functionality the engine supports.
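            Schematically, the test fragment might look like this (the engine name is a placeholder):
            {noformat}
            CREATE TABLE t1 (a INT AUTO_INCREMENT, PRIMARY KEY (a)) ENGINE=OurEngine AUTO_INCREMENT=100;
            # The option should be visible in the output
            SHOW CREATE TABLE t1;
            # This statement breaks on engines which do not support ALTER,
            # and the rest of the output becomes a recordable mismatch
            ALTER TABLE t1 AUTO_INCREMENT=200;
            SHOW CREATE TABLE t1;
            DROP TABLE t1;
            {noformat}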

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including possible reasons why it could happen. Also, if a failure happens on table creation or at some other key moment, so that a part of the test becomes useless (if a table was not created, there is no point in trying to alter it, etc.), the test will skip that part of the flow and proceed to the next part. But this does not always happen: the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.
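            The checking pattern itself can be sketched with mysqltest's built-in {{$mysql_errno}} (the message text is illustrative):
            {noformat}
            --disable_abort_on_error
            ALTER TABLE t1 AUTO_INCREMENT=200;
            --enable_abort_on_error
            if ($mysql_errno)
            {
              --echo # ERROR: ALTER TABLE failed. Possible reasons: ALTER is not
              --echo # supported by the engine, or the table option is not supported
            }
            {noformat}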

            Additionally, some tests will check the value of the default index (see the configuration notes below), and if it is unset, certain tests will be skipped completely, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.


            h3. Problem 3: Different primitives

            Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for the CSV engine, because CSV does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.
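            For example, a FEDERATED table needs a {{CONNECTION}} option pointing at the remote table (the host and table details here are illustrative):
            {noformat}
            CREATE TABLE t (i INT NOT NULL) ENGINE=FEDERATED
              CONNECTION='mysql://user@remote_host:3306/test/t';
            {noformat}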

            In an even more complicated scenario, additional actions need to be performed in order to create a table properly; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine (a sketch of a typical overlay layout follows this list):

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever is supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if an engine requires non-standard procedures for table creation or modification, they can be tuned in the include files which the tests use to perform these actions (CREATE and ALTER TABLE are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled tests, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary tests can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.
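            Put together, a typical engine overlay might look like this (all file names are the ones mentioned in this description; which of them exist depends on the engine):
            {noformat}
            <basedir>/storage/ourengine/mysql-test/storage_engine/
                define_engine.inc    # engine name, default options, $support_* variables
                suite.opt            # server options (e.g. plugin loading)
                disabled.def         # tests to skip, with reasons
                cleanup_engine.inc   # drops whatever define_engine.inc created
                create_table.inc     # only if table creation is non-standard
                alter_table.inc      # only if table modification is non-standard
                <testname>.rdiff     # per-test expected result differences
                parts/               # overlay for the partitioning subsuite
                trx/                 # overlay for the transactional subsuite
            {noformat}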

            For more details, see the 'Tuning' section.

            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_


            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (as a whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which one produces the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.


            h3. Assumptions

            We presume that the tests are set up on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in the {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}).


            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least, set the ENGINE variable; check the other variables which you find there, and modify them as needed).

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{diff -u <basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}
            (a sample rdiff hunk is shown after these steps).

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any test requires specific non-standard server/engine options, create {{<testname>.opt}} files in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of the storage_engine suite, proceed to the subsuites. If you are interested in running the partitioning tests, create the folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create the files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}
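            For reference, an rdiff file is simply a stored unified diff of the expected result against the actual one. Reusing the OPTIMIZE TABLE example from Problem 1, a purely illustrative hunk could look like this (the test name and line numbers are made up):
            {noformat}
            --- suite/storage_engine/optimize_table.result
            +++ suite/storage_engine/optimize_table.reject
            @@ -5,3 +5,3 @@
             OPTIMIZE TABLE t1;
             Table   Op      Msg_type        Msg_text
            -test.t1 optimize        status  OK
            +test.t1 optimize        status  Table is already up to date
            {noformat}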

            h2. Examples

            <coming soon>

            h3. Easy level: MyISAM

            <coming soon>

            h3. Intermediate level: InnoDB plugin

            <coming soon>

            h3. Advanced level: MERGE

            <coming soon>

            elenst Elena Stepanova made changes -
            Assignee Elena Stepanova [ elenst ] Axel Schwenke [ axel ]
            elenst Elena Stepanova made changes -
            Description h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            h3. Problem 1: Varying result files


            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.


            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.


            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_


            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.


            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.


            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. The exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more "different" a storage engine is, the trickier the task.


            h3. Easy level: MyISAM

Let's see how to make the suite work for a relatively standard engine, one whose behavior is similar to that of the main MySQL engines.
We will create an overlay for MyISAM.
_Note: "overlay" is a term introduced by MDEV\-30; it basically means a test suite or set of suites adapted for a certain engine._

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These are all defaults (you can see them in {{define_engine.inc}}).
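
For illustration, the lines that matter in the edited {{define_engine.inc}} boil down to something like this (apart from {{$ENGINE}}, the variable names in the comments are hypothetical; check the real file for the actual ones):

{noformat}
# The engine under test -- the only variable that must be set for MyISAM:
let $ENGINE = MyISAM;
# Other knobs stay at their defaults, e.g. (illustrative names only):
# let $default_col_opts = NOT NULL;  # would be needed for CSV
# let $default_index = KEY;          # non-unique indexes are supported
{noformat}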

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

The first test passed. Okay, now we can run the whole suite. Some tests will fail; this is expected. We need to see the results so we can decide whether to accept the difference, disable the test, or patch the code.
So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}} to see everything at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

_Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast; for other engines it might take longer._

7 failing tests on the first iteration is extremely good; of course, it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit is that if your engine does not support everything, the tests are still usable; you'll just need to approve the difference in the results (by creating an rdiff file).

Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will let you use the rest of the logic, but will also produce a bunch of mismatches due to failing statements. The internal logic in the tests does its best to keep this clean, but still, some noise is expected.

So, with this knowledge, let's find the failures and go through them one by one.
Tip: if you saved the output to a file, failures can easily be found by searching for {{' fail '}} (without the quote marks).
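
For example, assuming the output of the run was saved to a file (the file name here is arbitrary):

{noformat}
perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam > myisam-run.log 2>&1
grep ' fail ' myisam-run.log
{noformat}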

The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But let's look at the output.

The mismatch shows that some expected output is missing, and that instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping the tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or that the behavior has not unexpectedly changed elsewhere.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

Next failing test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.
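
The command follows the same pattern as before:

{{diff -u suite/storage_engine/foreign_keys.result suite/storage_engine/foreign_keys.reject > ../storage/myisam/mysql-test/storage_engine/foreign_keys.rdiff}}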


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

Since MyISAM doesn't support the HASH index type, this is fine. Adding a diff.

            Next failing test: *show_engine*

This test is optimistic and expects that the SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output but expects it to contain a row. For MyISAM this is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

MyISAM does not use the table option INSERT_METHOD; it's a MERGE engine thing. But table creation does not normally fail on unsupported options; they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

SHOW CREATE TABLE does not show the option. This is fine; adding an rdiff. Exactly the same happens with UNION.
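
The commands, same pattern as before:

{noformat}
diff -u suite/storage_engine/tbl_opt_insert_method.result suite/storage_engine/tbl_opt_insert_method.reject > ../storage/myisam/mysql-test/storage_engine/tbl_opt_insert_method.rdiff
diff -u suite/storage_engine/tbl_opt_union.result suite/storage_engine/tbl_opt_union.reject > ../storage/myisam/mysql-test/storage_engine/tbl_opt_union.rdiff
{noformat}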

            These were all 7 failures.

Now it's time to take care of the subsuites. Currently there are two of them: {{parts}} (stands for 'partitions') and {{trx}} (stands for 'transactions').

MyISAM definitely supports partitioning, so let's try that subsuite first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets the {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

_Note: For now, it is a very basic suite; it only contains a few tests and takes literally a few seconds._

All good -- the tests passed, and nothing needs to be done here.

Finally, there is the transactions subsuite (it also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA, or snapshots, so the results won't be as pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to the maintainer whether to add diffs for all tests or just to ignore the whole thing, since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}
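
If you prefer, a small shell loop produces the same 13 files in one go (a sketch, assuming all the {{.reject}} files are sitting in {{suite/storage_engine/trx}} as above):

{noformat}
for f in suite/storage_engine/trx/*.reject; do
  t=$(basename "$f" .reject)
  diff -u "suite/storage_engine/trx/$t.result" "$f" \
    > "../storage/myisam/mysql-test/storage_engine/trx/$t.rdiff"
done
{noformat}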

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

That's all. Now just keep it free of new failures.




            h3. Intermediate level: InnoDB plugin

            <coming soon>

            h3. Advanced level: MERGE

            <coming soon>
            elenst Elena Stepanova made changes -
            Description
            - [Goal|#goal]
            - [Problems to solve|#problems_to_solve]
            -- [Problem 1: Varying result files|#problem_1]
            --- [Solution|#solution_1]
            -- [Problem 2: Unsupported features|#problem_2]
            --- [Solution|#solution_2]
            -- [Problem 3: Varying result files|#problem_3]
            --- [Solution|#solution_3]
            -- [Filed bugs|#bugs]
            - [Tuning|#tuning]
            -- [Assumptions|#assumptions]
            -- [Common tuning steps|#common_steps]
            -- [Examples|#examples]
            --- [MyISAM|#myisam]
            --- [InnoDB plugin|#innodb_plugin]
            --- [MERGE|#merge]

            {anchor:goal}
            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems_to_solve}
            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem_1}
            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution_1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem_2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution_2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem_3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution_3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common_steps}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. Actual exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier is the task.


            h3. Easy level: MyISAM

            Lets see how to make the suite work for a relatively standard engine, in terms of behavior similar to main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These all are defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail, this is expected; we need to see the results
            so we can decide whether we should accept the difference, or disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see all at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast, for other engines might take longer._

            7 failing tests on the first iteration is extremely good, of course it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will allow you to use the logic, but will also produce a bunch of mismatches due to failing statements. Internal logic in tests does its best to make it cleaner, but still, the noise is expected.

            So, with this knowledge, lets find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by {{' fail '}} search string (without quote marks).

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But lets look at the output.

            Mismatch says that some stuff is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or something unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

            Since MyISAM doesn't support HASH type, it is fine. Adding a diff.

            Next failing test: *show_engine*

            This test is optimistic and expects that SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects that it contains a row. For MyISAM it is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

            MyISAM does not use the table option INSERT_METHOD, it's the MERGE engine thing. But table creation does not normally fail on unsupported options, they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

            SHOW ENGINE does not show the option. This is fine, adding rdiff. Exactly the same for UNION.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

            MyISAM definitely supports partitioning, so lets try them first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

            No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

_Note: For now, it is a very basic suite; it only contains a few tests and takes just a few seconds to run_

            All good, tests passed, nothing needs to be done here.

Finally, there is the transactions subsuite (it also contains tests for XA and consistent snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA, or snapshots, so the results won't be as pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and for XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With a 3rd-party engine, it would be up to the maintainer whether to add diffs for all tests or to ignore the whole thing, since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I chose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}
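
Since the command only varies in the test name, a small shell loop can produce all 13 rdiffs at once (just a sketch, assuming every failing test left a {{.reject}} file next to its {{.result}} file):

{noformat}
for f in suite/storage_engine/trx/*.reject ; do
    # derive the test name from the reject file name
    t=`basename $f .reject`
    diff -u suite/storage_engine/trx/$t.result $f > ../storage/myisam/mysql-test/storage_engine/trx/$t.rdiff
done
{noformat}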

_Tip: If you decide to do the same, take the time to go through the reject files and check that your engine behaves as expected under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

That's all. Now just keep the suite free of failures.




            h3. Intermediate level: InnoDB plugin

            <coming soon>

            h3. Advanced level: MERGE

            <coming soon>
            [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
            [... Problem 3: Varying result files|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}
            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems}
            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}
            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. Actual exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier is the task.


            h3. Easy level: MyISAM

            Lets see how to make the suite work for a relatively standard engine, in terms of behavior similar to main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These all are defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail, this is expected; we need to see the results
            so we can decide whether we should accept the difference, or disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see all at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast, for other engines might take longer._

            7 failing tests on the first iteration is extremely good, of course it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will allow you to use the logic, but will also produce a bunch of mismatches due to failing statements. Internal logic in tests does its best to make it cleaner, but still, the noise is expected.

            So, with this knowledge, lets find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by {{' fail '}} search string (without quote marks).

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But lets look at the output.

            Mismatch says that some stuff is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or something unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

            Since MyISAM doesn't support HASH type, it is fine. Adding a diff.

            Next failing test: *show_engine*

            This test is optimistic and expects that SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects that it contains a row. For MyISAM it is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

            MyISAM does not use the table option INSERT_METHOD, it's the MERGE engine thing. But table creation does not normally fail on unsupported options, they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

            SHOW ENGINE does not show the option. This is fine, adding rdiff. Exactly the same for UNION.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

            MyISAM definitely supports partitioning, so lets try them first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

            No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

            _Note: For now, it is a very basic suite, only contains a few tests and literally takes several seconds_

            All good, tests passed, nothing needs to be done here.

            Finally, there is the transactions subsuite (also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA and snapshots, so results won't be this pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

            The results are mess, as expected. All differences start with an extra warning (sometimes more than one, as the same is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

            The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to a maintainer whether to add diffs for all tests or just ignore the whole thing since transactions are not supported. If you choose to ignore, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files, it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

            That's all. Now just stay out of failures.




            h3. Intermediate level: InnoDB plugin

            <coming soon>

            h3. Advanced level: MERGE

            <coming soon>
            elenst Elena Stepanova made changes -
            Description [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
            [... Problem 3: Varying result files|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}
            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems}
            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}
            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. Actual exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier is the task.


            h3. Easy level: MyISAM

            Lets see how to make the suite work for a relatively standard engine, in terms of behavior similar to main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These all are defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail, this is expected; we need to see the results
            so we can decide whether we should accept the difference, or disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see all at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast, for other engines might take longer._

            7 failing tests on the first iteration is extremely good, of course it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will allow you to use the logic, but will also produce a bunch of mismatches due to failing statements. Internal logic in tests does its best to make it cleaner, but still, the noise is expected.

            So, with this knowledge, lets find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by {{' fail '}} search string (without quote marks).

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But lets look at the output.

            Mismatch says that some stuff is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping the tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or that something has unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}
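
_As an optional sanity check (assuming no other tuning happened in between), re-running just this test should now report it as passing, since the difference has been approved via the rdiff:_

{noformat}
perl ./mtr --suite=storage_engine-myisam alter_tablespace
{noformat}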

Next failing test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.
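
_The command follows the same pattern as for the previous tests:_

{{diff -u suite/storage_engine/foreign_keys.result suite/storage_engine/foreign_keys.reject > ../storage/myisam/mysql-test/storage_engine/foreign_keys.rdiff}}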


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

Since MyISAM doesn't support the HASH index type, this is fine. Adding a diff.

            Next failing test: *show_engine*

This test is optimistic and expects that the SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects that it contains a row. For MyISAM this is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

MyISAM does not use the table option INSERT_METHOD; it's a MERGE engine thing. But table creation does not normally fail on unsupported options -- they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

SHOW CREATE TABLE does not show the option. This is fine; adding an rdiff. Exactly the same goes for UNION.

That covers all 7 failures.

Now it's time to take care of the subsuites. Currently there are two of them: {{parts}} (stands for 'partitions') and {{trx}} (stands for 'transactions').

MyISAM definitely supports partitioning, so let's try that one first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

This tells MTR that our engine is interested in the {{storage_engine/parts}} subsuite.
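
_By now the overlay directory should look roughly like this (a sketch; only some of the rdiff files are listed):_

{noformat}
../storage/myisam/mysql-test/storage_engine/
    define_engine.inc
    alter_tablespace.rdiff
    check_table.rdiff
    ...
    parts/
{noformat}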

No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets the {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

_Note: For now, it is a very basic suite; it only contains a few tests and takes literally several seconds to run._

            All good, tests passed, nothing needs to be done here.

Finally, there is the transactions subsuite (it also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA, or snapshots, so the results won't be as pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as a similar warning is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With third-party engines, it would be up to the maintainer whether to add diffs for all tests or to ignore the whole thing since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}
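
_A minimal shell sketch to produce all 13 files in one go (assuming, as in the commands above, that the {{.reject}} files are located next to the {{.result}} files):_

{noformat}
# Create one rdiff per reject file in the trx subsuite
for f in suite/storage_engine/trx/*.reject; do
  t=$(basename "$f" .reject)
  diff -u "suite/storage_engine/trx/$t.result" "$f" \
    > "../storage/myisam/mysql-test/storage_engine/trx/$t.rdiff"
done
{noformat}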

_Tip: If you decide to do the same, take time to go through the reject files and check that your engine works as expected under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

That's all. Now just steer clear of new failures.




            h3. Intermediate level: InnoDB plugin

            <coming soon>

            h3. Advanced level: MERGE

            <coming soon>
            [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
            [... Problem 3: Varying result files|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}

            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems}

            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}

            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. Actual exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier is the task.


            h3. Easy level: MyISAM

            Lets see how to make the suite work for a relatively standard engine, in terms of behavior similar to main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These all are defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail, this is expected; we need to see the results
            so we can decide whether we should accept the difference, or disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see all at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast, for other engines might take longer._

            7 failing tests on the first iteration is extremely good, of course it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will allow you to use the logic, but will also produce a bunch of mismatches due to failing statements. Internal logic in tests does its best to make it cleaner, but still, the noise is expected.

            So, with this knowledge, lets find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by {{' fail '}} search string (without quote marks).

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But lets look at the output.

            Mismatch says that some stuff is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or something unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

            Since MyISAM doesn't support HASH type, it is fine. Adding a diff.

            Next failing test: *show_engine*

            This test is optimistic and expects that SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects that it contains a row. For MyISAM it is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

            MyISAM does not use the table option INSERT_METHOD, it's the MERGE engine thing. But table creation does not normally fail on unsupported options, they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

            SHOW ENGINE does not show the option. This is fine, adding rdiff. Exactly the same for UNION.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

            MyISAM definitely supports partitioning, so lets try them first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

            No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

            _Note: For now, it is a very basic suite, only contains a few tests and literally takes several seconds_

            All good, tests passed, nothing needs to be done here.

            Finally, there is the transactions subsuite (also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA and snapshots, so results won't be this pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

            The results are mess, as expected. All differences start with an extra warning (sometimes more than one, as the same is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

            The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to a maintainer whether to add diffs for all tests or just ignore the whole thing since transactions are not supported. If you choose to ignore, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files, it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

            That's all. Now just stay out of failures.




            h3. Intermediate level: InnoDB plugin

            <coming soon>

            h3. Advanced level: MERGE

            <coming soon>
            elenst Elena Stepanova made changes -
            Description [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
            [... Problem 3: Varying result files|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}

            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems}

            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}

            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

{anchor:examples}
h2. Examples

Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. The actual steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more "different" a storage engine is, the trickier the task.


{anchor:myisam}
h3. Easy level: MyISAM

Let's see how to make the suite work for a relatively standard engine -- one whose behavior is similar to the main MySQL engines.
We will create an overlay for MyISAM.
_Note: "overlay" is a term introduced by MDEV\-30; it basically means a test suite or a set of suites adapted for a certain engine._

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes and the INT and CHAR types. These are all defaults (you can see them in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

The first test passed. Okay, now we can run the whole suite. Some tests will fail; this is expected. We need to see the results
so we can decide whether to accept the difference, disable the test, or patch the code.
So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}} to see everything at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

_Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast; for other engines it might take longer._

7 failing tests on the first iteration is extremely good; of course, it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but will instead try to proceed. The big benefit is that if your engine does not support everything, the tests are still usable; you'll just need to approve the difference in the results (by creating an rdiff file).

Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will let you use the rest of the logic, but it will also produce a bunch of mismatches due to the failing statements. Internal logic in the tests does its best to keep things clean, but still, some noise is expected.

So, with this knowledge, let's find the failures and go through them one by one.
Tip: if you saved the output to a file, failures can easily be found by searching for the string {{' fail '}} (without the quote marks).
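For example, assuming the output was redirected like this (the log path is just an example), the failures can be located with grep:

{noformat}
perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam > /tmp/se_myisam.log 2>&1
grep ' fail ' /tmp/se_myisam.log
{noformat}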

The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces in MyISAM. But let's look at the output.

The mismatch shows that some expected output is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or something unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

Since MyISAM doesn't support the HASH index type, this is fine. Adding a diff.

            Next failing test: *show_engine*

This test is optimistic and expects that the SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects it to contain a row. For MyISAM this is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

MyISAM does not use the table option INSERT_METHOD; it's a MERGE engine thing. But table creation does not normally fail on unsupported options -- they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

SHOW CREATE TABLE does not show the option. This is fine; adding an rdiff. Exactly the same happens for UNION.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

MyISAM definitely supports partitioning, so let's try that subsuite first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

No additional parameters or tricks should be needed for MyISAM to run the partition tests, since the suite already sets the {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

_Note: For now, this is a very basic subsuite; it only contains a few tests and takes literally a few seconds._

            All good, tests passed, nothing needs to be done here.

Finally, there is the transactions subsuite (it also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA, or snapshots, so the results won't be as pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to the maintainer whether to add diffs for all tests or just to ignore the whole thing, since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}
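If all the reject files from the failed run are still in place, a small shell loop can save some typing (a sketch, using the same paths as above):

{noformat}
for r in suite/storage_engine/trx/*.reject ; do
  t=$(basename $r .reject)
  diff -u suite/storage_engine/trx/$t.result $r > ../storage/myisam/mysql-test/storage_engine/trx/$t.rdiff
done
{noformat}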

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

That's all. Now just stay out of failures.

            {anchor:innodb}

            h3. Intermediate level: InnoDB plugin

A little bit more work is required to create an overlay for InnoDB. Let's do it for the InnoDB plugin (which is not loaded by default as of 5.5.25, but is built there).

            Again, start with creating the overlay directory:

            {{mkdir -p ../storage/innobase/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/innobase/mysql-test/storage_engine/}}
Edit {{../storage/innobase/mysql-test/storage_engine/define_engine.inc}}:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = InnoDB;
             #
             ################################
             #
            {noformat}

As with MyISAM, all the defaults are fine for InnoDB. But now we also need server startup options to run the server with the InnoDB plugin.

Create the file {{../storage/innobase/mysql-test/storage_engine/suite.opt}}:

            {noformat}
            --ignore-builtin-innodb
            --plugin-load=ha_innodb
            --innodb
            {noformat}

This should be enough for the base suite. Let's run the 1st test now:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase 1st

            ...

            storage_engine-innobase.1st [ pass ] 852
            {noformat}

            And then the whole suite:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase --max-test-fail=0 --force

            ...

            Spent 153.712 of 402 seconds executing testcases

            Completed: Failed 28/99 tests, 71.72% were successful.

            Failing test(s): storage_engine-innobase.alter_table_online storage_engine-innobase.alter_tablespace storage_engine-innobase.autoinc_secondary storage_engine-innobase.autoinc_vars storage_engine-innobase.cache_index storage_engine-innobase.checksum_table_live storage_engine-innobase.delete_low_prio storage_engine-innobase.fulltext_search storage_engine-innobase.index_enable_disable storage_engine-innobase.index_type_hash storage_engine-innobase.insert_delayed storage_engine-innobase.insert_high_prio storage_engine-innobase.insert_low_prio storage_engine-innobase.lock_concurrent storage_engine-innobase.optimize_table storage_engine-innobase.repair_table storage_engine-innobase.select_high_prio storage_engine-innobase.tbl_opt_ai storage_engine-innobase.tbl_opt_data_index_dir storage_engine-innobase.tbl_opt_insert_method storage_engine-innobase.tbl_opt_key_block_size storage_engine-innobase.tbl_opt_row_format storage_engine-innobase.tbl_opt_union storage_engine-innobase.type_char_indexes storage_engine-innobase.type_float_indexes storage_engine-innobase.type_spatial_indexes storage_engine-innobase.update_low_prio storage_engine-innobase.vcol
            {noformat}

Not as great as it was with MyISAM. Let's see the details.

Some mismatches are identical or similar to those in MyISAM and are caused by unsupported functionality (e.g. fulltext search, hash indexes, optimize_table, etc.). I won't go through them here; I'll just add rdiff files.

            But some deserve attention.

            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            +ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement ended with errno 1915, errname ER_CANT_DO_ONLINE (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_CANT_DO_ONLINE.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

It's hard to say whether all engines that support ALTER ONLINE should support it for the same set of changes; most likely not, and what we see here is just an InnoDB limitation. On the other hand, we know that MariaDB supports ALTER ONLINE, including renaming a column (see http://kb.askmonty.org/en/alter-table), and InnoDB supports at least some ALTER ONLINE operations (e.g. CHANGE COLUMN i i INT DEFAULT 1 works); so I think it's worth filing as a low-priority bug, at least to make sure it works as expected: https://mariadb.atlassian.net/browse/MDEV-397

For now, I will add the test to the {{../storage/innobase/mysql-test/storage_engine/disabled.def}} list (which we need to create, since it's the first test we disable for this engine):

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            {noformat}

If it later turns out to be expected behavior or a limitation, I will remove the line from {{disabled.def}} and will instead add an rdiff file.
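In that case, the rdiff would be created in the usual way:

{{diff -u suite/storage_engine/alter_table_online.result suite/storage_engine/alter_table_online.reject > ../storage/innobase/mysql-test/storage_engine/alter_table_online.rdiff}}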


            *alter_tablespace*:

            {noformat}
            +# ERROR: Statement ended with errno 1030, errname ER_GET_ERRNO (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_GET_ERRNO.
            +# Tablespace operations or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

Now, that seems unexpected. But then again, tablespace operations are only applicable when InnoDB works in {{innodb-file-per-table}} mode, which we did not set in our options. Unless we want to use it for all tests, let's set it for this one test only:

Create {{../storage/innobase/mysql-test/storage_engine/alter_tablespace.opt}}:
            {noformat}
            --innodb-file-per-table=1
            {noformat}
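MTR should pick up the {{.opt}} file automatically, so we can verify the fix by rerunning just this test:

{noformat}
perl ./mtr --suite=storage_engine-innobase alter_tablespace
{noformat}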

            *autoinc_vars*:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
             SELECT LAST_INSERT_ID();
             LAST_INSERT_ID()
            -850
            +1100
             SELECT * FROM t1;
             a b
             1 a
            +1100 g
            +1150 h
            +1200 i
             2 b
             200 d
             3 c
             500 e
             800 f
            -850 g
            -900 h
            -950 i
             DROP TABLE t1;
             SET auto_increment_increment = 500;
             SET auto_increment_offset = 300;
            {noformat}

This is weird. Now the real investigation starts -- there is a good reason to look at the reject file to see the continuous flow:

            {noformat}
            ...

            SET auto_increment_increment = 300;
            INSERT INTO t1 (a,b) VALUES (NULL,'d'),(NULL,'e'),(NULL,'f');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            200
            SELECT * FROM t1;
            a b
            1 a
            2 b
            200 d
            3 c
            500 e
            800 f
            SET auto_increment_increment = 50;
            INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            1100
            SELECT * FROM t1;
            a b
            1 a
            1100 g
            1150 h
            1200 i
            2 b
            200 d
            3 c
            500 e
            800 f
            DROP TABLE t1;
            {noformat}

The first insert works all right with {{auto_increment_increment = 300}}. Then we change it to {{50}}, but the following insert still uses {{300}} for the first value it inserts, and only then switches to {{50}}. Thus we get {{1100}} instead of {{850}}, and the following values differ accordingly. This smells like a bug, although not a very serious one. Since a brief check shows it is also reproducible on Oracle MySQL, we will file it on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65225 (I actually filed it some time ago, when I first tried to run the storage engine suite for InnoDB, which is why it's not brand new).
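A minimal standalone sketch of the behavior described above (the table and the exact values are ours, not from the test):

{noformat}
CREATE TABLE t (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
SET auto_increment_increment = 300;
INSERT INTO t VALUES (NULL),(NULL);
SELECT a FROM t;       # expect 1 and 301
SET auto_increment_increment = 50;
INSERT INTO t VALUES (NULL);
SELECT MAX(a) FROM t;  # per the bug: 601 (301+300) instead of 351
DROP TABLE t;
{noformat}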

            And we will also add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:
            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            {noformat}


            *delete_low_prio*, *insert_high_prio*, *insert_low_prio*, *select_high_prio*, *update_low_prio*:

            They all have similar fragments in their output:

            {noformat}
            +# Timeout in include/wait_show_condition.inc for = 'DELETE FROM t1'
            +# show_statement : SHOW PROCESSLIST
            +# field : Info
            +# condition : = 'DELETE FROM t1'
            +# max_run_time : 3
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with timeout in wait_show_condition.inc.
            +# DELETE or table locking or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

As the documentation says, the high|low priority functionality (e.g. DELETE LOW_PRIORITY) only works with table-level locking, and the whole test is based on this assumption. InnoDB uses row-level locking, so the entire flow does not work quite as expected. We could still add rdiff files, but, unlike most other tests, these take relatively long (probably over 10 seconds each). Besides, since locking works entirely differently here, the test results are likely to be unstable, as everything will depend on timing. So, it makes more sense to disable the tests by adding them to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            {noformat}
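
For reference, the table-lock pattern these tests rely on looks roughly like this (a simplified mysqltest sketch with illustrative connection names, not the actual test code):

{noformat}
--connect (con1,localhost,root,,test)
# Hold a table-level read lock on t1 for a few seconds
--send SELECT SLEEP(5) FROM t1
--connect (con2,localhost,root,,test)
--send DELETE LOW_PRIORITY FROM t1
# With table-level locking, the DELETE now waits for the read lock and
# shows up in SHOW PROCESSLIST -- which is what wait_show_condition.inc
# polls for. With row-level locking (InnoDB), the DELETE does not wait,
# the condition never matches, and the include file times out.
{noformat}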


            *tbl_opt_ai*:

            {noformat}
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             ALTER TABLE t1 AUTO_INCREMENT=100;
             SHOW CREATE TABLE t1;
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=100 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             DROP TABLE t1;
            {noformat}

We already looked at ignored table options in the MyISAM tests, but this one is different. Why would AUTO_INCREMENT be ignored, when InnoDB is supposed to support it? (A brief manual check confirms that it does.) Some digging shows, however, that in our case the option is _truly_ ignored. It is reproducible with Oracle MySQL, so we are filing a bug on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65901
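
Judging by the fragment above and by what the bug turned out to be, a minimal repro is presumably along these lines (a hypothetical sketch, not the suite's actual test code):

{noformat}
# The table has no auto-increment column yet, so InnoDB silently
# drops the AUTO_INCREMENT=10 option (it disappears from SHOW CREATE TABLE):
CREATE TABLE t1 (a INT) ENGINE=InnoDB AUTO_INCREMENT=10;
SHOW CREATE TABLE t1;
# After an auto-increment column is added, the counter starts from 1,
# not from the previously requested value:
ALTER TABLE t1 ADD COLUMN b INT AUTO_INCREMENT PRIMARY KEY;
INSERT INTO t1 (a) VALUES (1);
SELECT b FROM t1;
{noformat}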

            Adding the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            tbl_opt_ai : MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)
            {noformat}


            *tbl_opt_key_block_size*, *tbl_opt_row_format*:

            {noformat}
             CREATE TABLE t1 (a <INT_COLUMN>, b <CHAR_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> KEY_BLOCK_SIZE=8;
            +Warnings:
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_format > Antelope.
            +Warning 1478 InnoDB: ignoring KEY_BLOCK_SIZE=8.
            {noformat}

            Doing the same as we did for alter_tablespace, only now adding both {{innodb_file_per_table}} and {{innodb_file_format}}:

            {{../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt}}:

            {noformat}
            --innodb-file-per-table=1
            --innodb-file-format=Barracuda
            {noformat}
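
Presumably the row format test needs the same options, since its warnings are analogous; so the same two lines go into {{../storage/innobase/mysql-test/storage_engine/tbl_opt_row_format.opt}}:

{noformat}
--innodb-file-per-table=1
--innodb-file-format=Barracuda
{noformat}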


            *type_char_indexes*:

            {noformat}
             SET SESSION optimizer_switch = 'engine_condition_pushdown=on';
             EXPLAIN SELECT * FROM t1 WHERE c > 'a';
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range c_v c_v # # # Using index condition
            +# # # range c_v c_v # # # Using where
             SELECT * FROM t1 WHERE c > 'a';
             c c20 v16 v128
             b char3 varchar1a varchar1b
            @@ -135,7 +135,7 @@
             r3a
             EXPLAIN SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range # v16 # # # #
            +# # # ALL # NULL # # # #
             SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             c c20 v16 v128
             a char1 varchar1a varchar1b
            {noformat}

_Note: For now we assume that within one engine, statistics are stable enough to produce consistent results on each test run, which is why we show certain fields in EXPLAIN output and let you decide whether you are satisfied with them. If further experience shows that these tests routinely produce different results even for the same engine, and more often than not it is valid behavior, we might change this._

For now, I will consider these results acceptable and will add an rdiff.
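
As with the earlier tests, the rdiff goes into the engine overlay:

{{diff -u suite/storage_engine/type_char_indexes.result suite/storage_engine/type_char_indexes.reject > ../storage/innobase/mysql-test/storage_engine/type_char_indexes.rdiff}}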

As I said before, the rest of the failures do not deserve verbose analysis; they are pretty straightforward, so I just added an rdiff for each of them.


            Now working with {{storage_engine/parts}} and {{storage_engine/trx}}.

            {{mkdir ../storage/innobase/mysql-test/storage_engine/trx}}
            {{mkdir ../storage/innobase/mysql-test/storage_engine/parts}}

            Copy your previously created {{suite.opt}} file to each of the subfolders: as far as MTR is concerned, they are separate suites.

            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/trx/}}
            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/parts/}}

Maybe you'll want to add something else to those options. I, for one, will add {{\-\-innodb-lock-wait-timeout=1}} to {{../storage/innobase/mysql-test/storage_engine/trx/suite.opt}}. It probably should have been done for the other suites, too -- but it's never too late, if any timeout issues are observed.
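
With the base {{suite.opt}} created earlier for InnoDB, the trx copy then ends up as:

{noformat}
--ignore-builtin-innodb
--plugin-load=ha_innodb
--innodb
--innodb-lock-wait-timeout=1
{noformat}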

            When you add rdiff files for subsuites, don't forget to put them in the subfolders:

            {{diff -u suite/storage_engine/parts/checksum_table.result suite/storage_engine/parts/checksum_table.reject > ../storage/innobase/mysql-test/storage_engine/parts/checksum_table.rdiff}}
            etc.

Again, most failures are mismatches due to different output or unsupported functionality.
_Note: repair_table test results are also likely to differ, even if repair is supported, since the test tries to corrupt existing table files, which are different for each engine._

            *trx/cons_snapshot_serializable*:

            {noformat}
             # If consistent read works on this isolation level (SERIALIZABLE), the following SELECT should not return the value we inserted (1)
             SELECT * FROM t1;
             a
            +1
             COMMIT;
            {noformat}
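
The flow behind this fragment is roughly the following (a simplified sketch of the test logic, with illustrative connection names):

{noformat}
--connect (con1,localhost,root,,test)
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION WITH CONSISTENT SNAPSHOT;
--connect (con2,localhost,root,,test)
INSERT INTO t1 (a) VALUES (1);
--connection con1
# The snapshot was taken before the INSERT, so a working consistent
# read should return an empty result set here; instead we see the row:
SELECT * FROM t1;
COMMIT;
{noformat}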

It is a bug. Filing it as http://bugs.mysql.com/bug.php?id=65146 and adding the test to disabled.def (don't forget that it should be under the trx folder now):
            {{../storage/innobase/mysql-test/storage_engine/trx/disabled.def}}:

            {noformat}
            cons_snapshot_serializable : MySQL:65146 (CONSISTENT SNAPSHOT does not work with SERIALIZABLE)
            {noformat}

            Now, running the whole set:

            {noformat}
            perl ./mtr --suite=storage_engine-innobase,storage_engine/*-innobase

            ...

            Spent 300.715 of 364 seconds executing testcases

            Completed: All 111 tests were successful.
            {noformat}

Much slower than for MyISAM, but that's how it usually is.

            {anchor:merge}

            h3. Advanced level: MERGE

            <coming soon>
            elenst Elena Stepanova made changes -
            Description [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
            [... Problem 3: Varying result files|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}

            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems}

            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}

            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. Actual exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier is the task.


            h3. Easy level: MyISAM

            Lets see how to make the suite work for a relatively standard engine, in terms of behavior similar to main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These all are defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail, this is expected; we need to see the results
            so we can decide whether we should accept the difference, or disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see all at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast, for other engines might take longer._

            7 failing tests on the first iteration is extremely good, of course it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will allow you to use the logic, but will also produce a bunch of mismatches due to failing statements. Internal logic in tests does its best to make it cleaner, but still, the noise is expected.

            So, with this knowledge, lets find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by {{' fail '}} search string (without quote marks).

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But lets look at the output.

            Mismatch says that some stuff is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or something unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

            Since MyISAM doesn't support HASH type, it is fine. Adding a diff.

            Next failing test: *show_engine*

            This test is optimistic and expects that SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects that it contains a row. For MyISAM it is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

            MyISAM does not use the table option INSERT_METHOD, it's the MERGE engine thing. But table creation does not normally fail on unsupported options, they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

            SHOW ENGINE does not show the option. This is fine, adding rdiff. Exactly the same for UNION.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

            MyISAM definitely supports partitioning, so lets try them first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

            No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

            _Note: For now, it is a very basic suite, only contains a few tests and literally takes several seconds_

            All good, tests passed, nothing needs to be done here.

            Finally, there is the transactions subsuite (also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA and snapshots, so results won't be this pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

            The results are mess, as expected. All differences start with an extra warning (sometimes more than one, as the same is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

            The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to a maintainer whether to add diffs for all tests or just ignore the whole thing since transactions are not supported. If you choose to ignore, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files, it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

            That's all. Now just stay out of failures.

            {anchor:innodb}

            h3. Intermediate level: InnoDB plugin

            A little bit more work is required to create an overlay for InnoDB. Lets try to do it for InnoDB plugin (which is not loaded by default as of 5.5.25, but is built there).

            Again, start with creating the overlay directory:

            {{mkdir -p ../storage/innobase/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/innobase/mysql-test/storage_engine/}}
            Edit {{../storage/innobase/mysql-test/storage_engine/define_engine.inc}}

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = InnoDB;
             #
             ################################
             #
            {noformat}

            As for MyISAM, all defaults are fine for InnoDB. But now we also need to server startup options to run server with the InnoDB plugin.

            create the file {{../storage/innobase/mysql-test/storage_engine/suite.opt}}:

            {noformat}
            --ignore-builtin-innodb
            --plugin-load=ha_innodb
            --innodb
            {noformat}

            It should be enough for the base suite. Lets run the 1st test now:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase 1st

            ...

            storage_engine-innobase.1st [ pass ] 852
            {noformat}

            And then the whole suite:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase --max-test-fail=0 --force

            ...

            Spent 153.712 of 402 seconds executing testcases

            Completed: Failed 28/99 tests, 71.72% were successful.

            Failing test(s): storage_engine-innobase.alter_table_online storage_engine-innobase.alter_tablespace storage_engine-innobase.autoinc_secondary storage_engine-innobase.autoinc_vars storage_engine-innobase.cache_index storage_engine-innobase.checksum_table_live storage_engine-innobase.delete_low_prio storage_engine-innobase.fulltext_search storage_engine-innobase.index_enable_disable storage_engine-innobase.index_type_hash storage_engine-innobase.insert_delayed storage_engine-innobase.insert_high_prio storage_engine-innobase.insert_low_prio storage_engine-innobase.lock_concurrent storage_engine-innobase.optimize_table storage_engine-innobase.repair_table storage_engine-innobase.select_high_prio storage_engine-innobase.tbl_opt_ai storage_engine-innobase.tbl_opt_data_index_dir storage_engine-innobase.tbl_opt_insert_method storage_engine-innobase.tbl_opt_key_block_size storage_engine-innobase.tbl_opt_row_format storage_engine-innobase.tbl_opt_union storage_engine-innobase.type_char_indexes storage_engine-innobase.type_float_indexes storage_engine-innobase.type_spatial_indexes storage_engine-innobase.update_low_prio storage_engine-innobase.vcol
            {noformat}

            Not as great as it was with MyISAM. Lets see the details.

            Some mismatches are either identical or similar to those in MyISAM, and caused by unsupported functionality (e.g. fulltext search, hash indexes, optimize_table, etc.). I won't go through them here, will just add rdiff files.

            But some deserve attention.

            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            +ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement ended with errno 1915, errname ER_CANT_DO_ONLINE (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_CANT_DO_ONLINE.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            It's hard to say whether all engines that support ALTER ONLINE should support them for the same set of changes; most likely not, and what we see here is just an InnoDB limitation. On the other hand, we know that MariaDB supports ALTER ONLINE, and namely renaming a column (see http://kb.askmonty.org/en/alter-table), and InnoDB supports at least some ALTER ONLINE operations (e.g. CHANGE COLUMN i i INT DEFAULT 1 works); so I think it's worth filing it as a low-priority bug, at least to make sure it works as expected: https://mariadb.atlassian.net/browse/MDEV-397

            For now, I will add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}} list (need to create it, since it's the first test we disable for the engine):

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            {noformat}

            If later it turns out to be expected behavior or limitation, I will remove the line from {{disabled.def}}, and will instead add an rdiff file.


            *alter_tablespace*:

            {noformat}
            +# ERROR: Statement ended with errno 1030, errname ER_GET_ERRNO (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_GET_ERRNO.
            +# Tablespace operations or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Now, that seems unexpected. But then again, tablespace operations are only applicable when InnoDB works in {{innodb-file-per-table}} mode, which we did not set in our options. Unless we want to use it for all tests, lets set it for this one only:

            {{../storage/innobase/mysql-test/storage_engine/alter_tablespace.opt}}
            {noformat}
            --innodb-file-per-table=1
            {noformat}

            *autoinc_vars*:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
             SELECT LAST_INSERT_ID();
             LAST_INSERT_ID()
            -850
            +1100
             SELECT * FROM t1;
             a b
             1 a
            +1100 g
            +1150 h
            +1200 i
             2 b
             200 d
             3 c
             500 e
             800 f
            -850 g
            -900 h
            -950 i
             DROP TABLE t1;
             SET auto_increment_increment = 500;
             SET auto_increment_offset = 300;
            {noformat}

            This is weird. Now real investigation starts -- there is a good reason to look at the reject file to see the continuous flow:

            {noformat}
            ...

            SET auto_increment_increment = 300;
            INSERT INTO t1 (a,b) VALUES (NULL,'d'),(NULL,'e'),(NULL,'f');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            200
            SELECT * FROM t1;
            a b
            1 a
            2 b
            200 d
            3 c
            500 e
            800 f
            SET auto_increment_increment = 50;
            INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            1100
            SELECT * FROM t1;
            a b
            1 a
            1100 g
            1150 h
            1200 i
            2 b
            200 d
            3 c
            500 e
            800 f
            DROP TABLE t1;
            {noformat}

            The first insert works all right with {{auto_increment_increment = 300}}. Then we change it to {{50}}, but the following insert still uses {{300}} for the first value it inserts, and only then switches to {{50}}. Thus we get {{1100}} instead of {{850}}, and following values also differ. This smells like a bug, although not a very serious one. Since a brief check shows it's also reproducible on Oracle MySQL, we will file it on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65225 (I actually did it some time ago, when I tried to run the storage engine suite for InnoDB for the first time, that's why it's not brand new).

            And we will also add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:
            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            {noformat}


            *delete_low_prio*, *insert_high_prio*, *insert_low_prio*, *select_high_prio*, *update_low_prio*:

            They all have similar fragments in their output:

            {noformat}
            +# Timeout in include/wait_show_condition.inc for = 'DELETE FROM t1'
            +# show_statement : SHOW PROCESSLIST
            +# field : Info
            +# condition : = 'DELETE FROM t1'
            +# max_run_time : 3
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with timeout in wait_show_condition.inc.
            +# DELETE or table locking or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            As the documentation says, the high|low priority functionality (e.g. DELETE LOW_PRIORITY) only works for table-level locking, and the whole test is based on this assumption. InnoDB uses row-level locking, so the entire flow does not work quite as expected. We still can add rdiff files, but, unlike the most of other tests, these ones take relatively long (probably over 10 seconds each). Besides, since locking works entirely different here, the test results are likely to be unstable, as it will be all about timing. So, it makes more sense to disable the tests by adding them to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            {noformat}


            *tbl_opt_ai*:

            {noformat}
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             ALTER TABLE t1 AUTO_INCREMENT=100;
             SHOW CREATE TABLE t1;
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=100 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             DROP TABLE t1;
            {noformat}

            We already looked at ignored table options in MyISAM tests, but this one is different. Why would AUTO_INCREMENT be ignored, it should be supported all right by InnoDB? (Brief manual check confirms it). Some digging shows, however, that in our case it is _truly_ ignored. It is reproducible with Oracle MySQL, filing a bug on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65901

            Adding the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            tbl_opt_ai : MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)
            {noformat}


            *tbl_opt_key_block_size*, *tbl_opt_row_format*:

            {noformat}
             CREATE TABLE t1 (a <INT_COLUMN>, b <CHAR_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> KEY_BLOCK_SIZE=8;
            +Warnings:
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_format > Antelope.
            +Warning 1478 InnoDB: ignoring KEY_BLOCK_SIZE=8.
            {noformat}

            Doing the same as we did for alter_tablespace, only now adding both {{innodb_file_per_table}} and {{innodb_file_format}}:

            {{../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt}}:

            {noformat}
            --innodb-file-per-table=1
            --innodb-file-format=Barracuda
            {noformat}


            *type_char_indexes*:

            {noformat}
            [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
[... Problem 3: Different primitives|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}

            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
The suite is not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather to perform a relatively quick evaluation of the standard functionality expected from a storage engine.

            {anchor:problems}

            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}

            h3. Problem 1: Varying result files

Traditional MTR/mysqltest has very strict requirements with regard to test output. Since different storage engines are likely to produce slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

Until recently, the only solution was to copy the whole suite somewhere and recreate the result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
With this task, the original test results can be adapted to the storage engine by creating diff files (essentially, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes to the original test suite are required.
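
For illustration, a hypothetical rdiff for the OPTIMIZE TABLE difference shown above would be an ordinary unified diff (file names and hunk positions here are approximate):

{noformat}
--- suite/storage_engine/optimize_table.result
+++ suite/storage_engine/optimize_table.reject
@@ -4,3 +4,3 @@
 OPTIMIZE TABLE t1;
 Table	Op	Msg_type	Msg_text
-test.t1	optimize	status	Table is already up to date
+test.t1	optimize	status	OK
{noformat}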

            {anchor:problem2}
            h3. Problem 2: Unsupported features

Most tests use combinations of various features, even if not all of them are strictly necessary in the scope of the exact test. This means that if an engine does not support even one feature, whether by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if only briefly, for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

We are testing that a table option is supported by the engine (that it is merely accepted and stored); let's say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER, so we add an ALTER TABLE statement where the option is modified.

Now, if we try to run such a test with the FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for the FEDERATED engine can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to the functionality the engine supports.
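
As a rough sketch in the suite's own placeholder notation (in the real tests, CREATE and ALTER go through configurable include files rather than being written out literally like this):

{noformat}
CREATE TABLE t1 (a <INT_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> AUTO_INCREMENT=10;
# The option must not be lost:
SHOW CREATE TABLE t1;
# Fails on FEDERATED because ALTER is unsupported, but the test goes on:
ALTER TABLE t1 AUTO_INCREMENT=100;
SHOW CREATE TABLE t1;
DROP TABLE t1;
{noformat}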

The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including possible reasons why it could happen. Also, if a failure happened on table creation or at some other key moment, so that a part of the test becomes useless (if a table was not created, there is no point in trying to alter it, etc.), the test will skip a part of the flow and proceed to the next part. But this does not always happen; the checks are only performed when the probability of a failure is reasonably high.
Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of it and will be able to check why it happens.

Additionally, some tests will check the value of the default index (see the configuration notes below), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).
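
In mysqltest terms, the skip could look roughly like this (the exact variable name inside the suite may differ):

{noformat}
if (!$default_index)
{
  --skip The test requires an engine which supports indexes
}
{noformat}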

For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of these features and it is shown as unsupported in the table, the test will produce a warning. But unlike with indexes, it will not be skipped, because we assume that for a new engine the information in the ENGINES table might be inaccurate.
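
A sketch of how such a check can be written in mysqltest, assuming {{$ENGINE}} is set as described in the configuration notes (an illustration, not the suite's literal code):

{noformat}
--let $support = `SELECT TRANSACTIONS FROM INFORMATION_SCHEMA.ENGINES WHERE ENGINE = '$ENGINE'`
if ($support == 'NO')
{
  --echo # -- WARNING: According to I_S.ENGINES, $ENGINE does not support transactions.
  --echo # If it is true, the test will most likely fail; if transactions should be
  --echo # supported, check the data in Information Schema.
}
{noformat}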

            {anchor:problem3}
            h3. Problem 3: Different primitives

Some engines have peculiarities which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
{{CREATE TABLE t (i INT)}}
it will fail for the CSV engine, because CSV does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.
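
For example (the FEDERATED connection string here is just a placeholder):

{noformat}
# Fails for CSV -- all columns must be NOT NULL:
CREATE TABLE t (i INT) ENGINE=CSV;
# Works:
CREATE TABLE t (i INT NOT NULL) ENGINE=CSV;
# FEDERATED additionally needs a connection string:
CREATE TABLE t (i INT) ENGINE=FEDERATED
  CONNECTION='mysql://user@host:3306/db/t';
{noformat}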

In an even more complicated scenario, additional actions need to be performed in order to create a table properly; e.g. to create a functional MERGE table, we also need to create an underlying MyISAM table; and if we want to alter the MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

* Some variables can be set to configure the basic test behavior (a concrete sketch follows this list):
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

* if an engine requires non-standard procedures for table creation or modification, they can be defined in include files which the tests use to perform these actions (CREATE TABLE and ALTER TABLE are not run directly in the tests, only by calling the include files, so they are configurable);

* the engine can have its own set of disabled tests, so that there is no need to list only selected test names on the MTR command line -- the whole suite can be run, and unnecessary tests can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.
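
To make the first item concrete, here is a hypothetical configuration fragment for a CSV-like engine ({{$ENGINE}} and {{$default_tbl_opts}} appear later in this document; the other variable names are purely illustrative):

{noformat}
let $ENGINE = CSV;
# Appended to every column definition:
let $default_col_opts = NOT NULL;
# No extra table options needed:
let $default_tbl_opts = ;
# Empty: the engine supports no indexes, so index-only tests will be skipped:
let $default_index = ;
{noformat}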

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

Please note that the test suite is synthetic. This means that it contains tests for a set of features which, as a whole, is not supported by any known engine. The same is true for the result files -- they are taken from different engines, depending on which one produces the most sensible result, and in rare cases they are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

We presume that the tests are run on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine, OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

The storage_engine test suite is located in the {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}).

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least, set the ENGINE variable; check the other variables you find there, and modify them as needed).

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

9. If the difference is expected, create an rdiff file:
{{diff -u <basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

12. If any test requires specific non-standard server/engine options, create {{<testname>.opt}} files in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

14. When you are satisfied with the results of the storage_engine suite, proceed to the subsuites. If you are interested in running partitioning tests, create the folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

{anchor:examples}
h2. Examples

Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. The exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more "different" a storage engine is, the trickier the task.


{anchor:myisam}
h3. Easy level: MyISAM

Let's see how to make the suite work for a relatively standard engine, one whose behavior is similar to the main MySQL engines.
We will create an overlay for MyISAM.
_Note: "overlay" is a term introduced by MDEV\-30; it basically means a test suite or set of suites adapted for a certain engine._

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes and the INT and CHAR types. These are all defaults (you can see them in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

The first test passed. Okay, now we can run the whole suite. Some tests will fail; this is expected. We need to see the results
so we can decide whether to accept the difference, disable the test, or patch the code.
So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}} to see everything at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

_Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast; for other engines it might take longer._

7 failing tests on the first iteration is extremely good; of course, it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and let you exercise the remaining logic, but will also produce a bunch of mismatches due to failing statements. The internal logic in the tests does its best to keep things clean, but still, some noise is expected.

So, with this knowledge, let's find the failures and go through them one by one.
Tip: if you saved the output to a file, failures can easily be found by searching for {{' fail '}} (without the quote marks).

The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces in MyISAM. But let's look at the output.

The mismatch says that some expected output is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping the tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or that the behavior has not changed unexpectedly.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

Since MyISAM doesn't support the HASH index type, this is fine. Adding a diff.

            Next failing test: *show_engine*

This test is optimistic and expects that the SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output but expects it to contain a row. For MyISAM this is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

MyISAM does not use the table option INSERT_METHOD; it's a MERGE engine thing. But table creation does not normally fail on unsupported options -- they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

SHOW CREATE TABLE does not show the option. This is fine; adding an rdiff. Exactly the same happens with UNION in *tbl_opt_union*.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

MyISAM definitely supports partitioning, so let's try that subsuite first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

No additional parameters or tricks should be needed for MyISAM to run the partition tests, since the suite already sets the {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

_Note: For now, it is a very basic suite; it only contains a few tests and literally takes a few seconds._

            All good, tests passed, nothing needs to be done here.

Finally, there is the transactions subsuite (which also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA or snapshots, so the results won't be this pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to the maintainer whether to add diffs for all tests or just ignore the whole thing since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

That's all. Now we just need to steer clear of new failures.

            {anchor:innodb}

            h3. Intermediate level: InnoDB plugin

A little more work is required to create an overlay for InnoDB. Let's do it for the InnoDB plugin (which is not loaded by default as of 5.5.25, but is built there).

            Again, start with creating the overlay directory:

            {{mkdir -p ../storage/innobase/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/innobase/mysql-test/storage_engine/}}
            Edit {{../storage/innobase/mysql-test/storage_engine/define_engine.inc}}

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = InnoDB;
             #
             ################################
             #
            {noformat}

As with MyISAM, all defaults are fine for InnoDB. But now we also need server startup options to run the server with the InnoDB plugin.

Create the file {{../storage/innobase/mysql-test/storage_engine/suite.opt}}:

            {noformat}
            --ignore-builtin-innodb
            --plugin-load=ha_innodb
            --innodb
            {noformat}

That should be enough for the base suite. Let's run the 1st test now:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase 1st

            ...

            storage_engine-innobase.1st [ pass ] 852
            {noformat}

            And then the whole suite:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase --max-test-fail=0 --force

            ...

            Spent 153.712 of 402 seconds executing testcases

            Completed: Failed 28/99 tests, 71.72% were successful.

            Failing test(s): storage_engine-innobase.alter_table_online storage_engine-innobase.alter_tablespace storage_engine-innobase.autoinc_secondary storage_engine-innobase.autoinc_vars storage_engine-innobase.cache_index storage_engine-innobase.checksum_table_live storage_engine-innobase.delete_low_prio storage_engine-innobase.fulltext_search storage_engine-innobase.index_enable_disable storage_engine-innobase.index_type_hash storage_engine-innobase.insert_delayed storage_engine-innobase.insert_high_prio storage_engine-innobase.insert_low_prio storage_engine-innobase.lock_concurrent storage_engine-innobase.optimize_table storage_engine-innobase.repair_table storage_engine-innobase.select_high_prio storage_engine-innobase.tbl_opt_ai storage_engine-innobase.tbl_opt_data_index_dir storage_engine-innobase.tbl_opt_insert_method storage_engine-innobase.tbl_opt_key_block_size storage_engine-innobase.tbl_opt_row_format storage_engine-innobase.tbl_opt_union storage_engine-innobase.type_char_indexes storage_engine-innobase.type_float_indexes storage_engine-innobase.type_spatial_indexes storage_engine-innobase.update_low_prio storage_engine-innobase.vcol
            {noformat}

Not as great as it was with MyISAM. Let's see the details.

Some mismatches are either identical or similar to those in MyISAM, caused by unsupported functionality (e.g. fulltext search, hash indexes, optimize_table, etc.). I won't go through them here; I'll just add rdiff files.

            But some deserve attention.

            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            +ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement ended with errno 1915, errname ER_CANT_DO_ONLINE (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_CANT_DO_ONLINE.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

It's hard to say whether all engines that support ALTER ONLINE should support it for the same set of changes; most likely not, and what we see here is just an InnoDB limitation. On the other hand, we know that MariaDB supports ALTER ONLINE, including renaming a column (see http://kb.askmonty.org/en/alter-table), and InnoDB supports at least some ALTER ONLINE operations (e.g. CHANGE COLUMN i i INT DEFAULT 1 works); so I think it's worth filing a low-priority bug, at least to make sure it works as expected: https://mariadb.atlassian.net/browse/MDEV-397
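
To summarize what works and what doesn't in this test (distilled from the test output and the manual check above):

{noformat}
CREATE TABLE t1 (i INT) ENGINE=InnoDB;
# Works:
ALTER ONLINE TABLE t1 CHANGE COLUMN i i INT DEFAULT 1;
# Fails with ER_CANT_DO_ONLINE -- the column is renamed:
ALTER ONLINE TABLE t1 CHANGE COLUMN i j INT DEFAULT 1;
DROP TABLE t1;
{noformat}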

            For now, I will add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}} list (need to create it, since it's the first test we disable for the engine):

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            {noformat}

            If later it turns out to be expected behavior or limitation, I will remove the line from {{disabled.def}}, and will instead add an rdiff file.


            *alter_tablespace*:

            {noformat}
            +# ERROR: Statement ended with errno 1030, errname ER_GET_ERRNO (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_GET_ERRNO.
            +# Tablespace operations or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

Now, that seems unexpected. But then again, tablespace operations are only applicable when InnoDB works in {{innodb-file-per-table}} mode, which we did not set in our options. Unless we want to use it for all tests, let's set it for this one only:

            {{../storage/innobase/mysql-test/storage_engine/alter_tablespace.opt}}
            {noformat}
            --innodb-file-per-table=1
            {noformat}

            *autoinc_vars*:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
             SELECT LAST_INSERT_ID();
             LAST_INSERT_ID()
            -850
            +1100
             SELECT * FROM t1;
             a b
             1 a
            +1100 g
            +1150 h
            +1200 i
             2 b
             200 d
             3 c
             500 e
             800 f
            -850 g
            -900 h
            -950 i
             DROP TABLE t1;
             SET auto_increment_increment = 500;
             SET auto_increment_offset = 300;
            {noformat}

This is weird. Now the real investigation starts -- there is a good reason to look at the reject file to see the continuous flow:

            {noformat}
            ...

            SET auto_increment_increment = 300;
            INSERT INTO t1 (a,b) VALUES (NULL,'d'),(NULL,'e'),(NULL,'f');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            200
            SELECT * FROM t1;
            a b
            1 a
            2 b
            200 d
            3 c
            500 e
            800 f
            SET auto_increment_increment = 50;
            INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            1100
            SELECT * FROM t1;
            a b
            1 a
            1100 g
            1150 h
            1200 i
            2 b
            200 d
            3 c
            500 e
            800 f
            DROP TABLE t1;
            {noformat}

The first insert works all right with {{auto_increment_increment = 300}}. Then we change it to {{50}}, but the following insert still uses {{300}} for the first value it inserts, and only then switches to {{50}}. Thus we get {{1100}} instead of {{850}}, and the following values differ accordingly. This smells like a bug, although not a very serious one. Since a brief check shows it's also reproducible on Oracle MySQL, we will file it on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65225 (I actually did it some time ago, when I tried to run the storage engine suite on InnoDB for the first time, which is why it's not brand new).
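
A minimal standalone way to reproduce the suspected bug, distilled from the flow above (exact values depend on {{auto_increment_offset}}):

{noformat}
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
SET auto_increment_increment = 300;
INSERT INTO t1 VALUES (NULL),(NULL);
SET auto_increment_increment = 50;
# BUG: the first value of this insert is still generated with step 300,
# only the following ones use 50:
INSERT INTO t1 VALUES (NULL),(NULL);
SELECT a FROM t1;
DROP TABLE t1;
{noformat}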

            And we will also add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:
            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            {noformat}


            *delete_low_prio*, *insert_high_prio*, *insert_low_prio*, *select_high_prio*, *update_low_prio*:

            They all have similar fragments in their output:

            {noformat}
            +# Timeout in include/wait_show_condition.inc for = 'DELETE FROM t1'
            +# show_statement : SHOW PROCESSLIST
            +# field : Info
            +# condition : = 'DELETE FROM t1'
            +# max_run_time : 3
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with timeout in wait_show_condition.inc.
            +# DELETE or table locking or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

As the documentation says, the high|low priority functionality (e.g. DELETE LOW_PRIORITY) only works with table-level locking, and the whole test is based on this assumption. InnoDB uses row-level locking, so the entire flow does not work quite as expected. We could still add rdiff files, but, unlike most other tests, these ones take relatively long (probably over 10 seconds each). Besides, since locking works entirely differently here, the test results are likely to be unstable, as it will be all about timing. So, it makes more sense to disable the tests by adding them to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            {noformat}
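
For reference, the waiting in these tests goes through the standard {{include/wait_show_condition.inc}}; the parameters echoed in the failure above correspond to a call like this (a sketch, not the tests' literal code):

{noformat}
--let $show_statement = SHOW PROCESSLIST
--let $field = Info
--let $condition = = 'DELETE FROM t1'
--let $max_run_time = 3
--source include/wait_show_condition.inc
{noformat}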


            *tbl_opt_ai*:

            {noformat}
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             ALTER TABLE t1 AUTO_INCREMENT=100;
             SHOW CREATE TABLE t1;
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=100 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             DROP TABLE t1;
            {noformat}

We already looked at ignored table options in the MyISAM tests, but this one is different. Why would AUTO_INCREMENT be ignored? It should be supported by InnoDB (a brief manual check confirms it is). Some digging shows, however, that in our case it is _truly_ ignored. It is reproducible with Oracle MySQL, so we file a bug on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65901

            Adding the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            tbl_opt_ai : MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)
            {noformat}


            *tbl_opt_key_block_size*, *tbl_opt_row_format*:

            {noformat}
             CREATE TABLE t1 (a <INT_COLUMN>, b <CHAR_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> KEY_BLOCK_SIZE=8;
            +Warnings:
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_format > Antelope.
            +Warning 1478 InnoDB: ignoring KEY_BLOCK_SIZE=8.
            {noformat}

            Doing the same as we did for alter_tablespace, only now adding both {{innodb_file_per_table}} and {{innodb_file_format}}:

            {{../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt}}:

            {noformat}
            --innodb-file-per-table=1
            --innodb-file-format=Barracuda
            {noformat}


            *type_char_indexes*:

            {noformat}
             SET SESSION optimizer_switch = 'engine_condition_pushdown=on';
             EXPLAIN SELECT * FROM t1 WHERE c > 'a';
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range c_v c_v # # # Using index condition
            +# # # range c_v c_v # # # Using where
             SELECT * FROM t1 WHERE c > 'a';
             c c20 v16 v128
             b char3 varchar1a varchar1b
            @@ -135,7 +135,7 @@
             r3a
             EXPLAIN SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range # v16 # # # #
            +# # # ALL # NULL # # # #
             SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             c c20 v16 v128
             a char1 varchar1a varchar1b
            {noformat}

_Note: For now we assume that within one engine, statistics are stable enough to produce consistent results on each test run, which is why we show certain fields in EXPLAIN output and let you decide whether you are satisfied with them or not. If further experience shows that even for the same engine these tests routinely produce different results, and more often than not it's valid behavior, we might change it._

For now, I will consider these results acceptable, and will add an rdiff.

As I said before, the rest of the failures do not deserve verbose analysis; they are pretty straightforward, so I just added an rdiff for each of them.


            Now working with {{storage_engine/parts}} and {{storage_engine/trx}}.

            {{mkdir ../storage/innobase/mysql-test/storage_engine/trx}}
            {{mkdir ../storage/innobase/mysql-test/storage_engine/parts}}

            Copy your previously created {{suite.opt}} file to each of the subfolders: as far as MTR is concerned, they are separate suites.

            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/trx/}}
            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/parts/}}

Maybe you'll want to add something else to those options. I, for one, will add {{\-\-innodb-lock-wait-timeout=1}} to {{../storage/innobase/mysql-test/storage_engine/trx/suite.opt}}. Probably it should have been done for the other suites, too -- but it's never too late if any timeout issues are observed.

            When you add rdiff files for subsuites, don't forget to put them in the subfolders:

            {{diff -u suite/storage_engine/parts/checksum_table.result suite/storage_engine/parts/checksum_table.reject > ../storage/innobase/mysql-test/storage_engine/parts/checksum_table.rdiff}}
            etc.

Again, most failures are mismatches due to different output or unsupported functionality.
_Note: repair_table test results are likely to differ even if repair is supported, since the test tries to corrupt existing table files, which are different for each engine._

            *trx/cons_snapshot_serializable*:

            {noformat}
             # If consistent read works on this isolation level (SERIALIZABLE), the following SELECT should not return the value we inserted (1)
             SELECT * FROM t1;
             a
            +1
             COMMIT;
            {noformat}

It is a bug. Filing it as http://bugs.mysql.com/bug.php?id=65146 and adding the test to disabled.def (don't forget that it should be under the trx folder now):
            {{../storage/innobase/mysql-test/storage_engine/trx/disabled.def}}:

            {noformat}
            cons_snapshot_serializable : MySQL:65146 (CONSISTENT SNAPSHOT does not work with SERIALIZABLE)
            {noformat}

            Now, running the whole set:

            {noformat}
            perl ./mtr --suite=storage_engine-innobase,storage_engine/*-innobase

            ...

            Spent 300.715 of 364 seconds executing testcases

            Completed: All 111 tests were successful.
            {noformat}

Much slower than for MyISAM, but that's how it usually is.

            {anchor:merge}

            h3. Advanced level: MERGE


Yet more tricks are required to tune the same suite for the MERGE engine, because now we also have to think about how a table is created.
We can't just create a plain MERGE table and work with it: it needs at least one underlying table; and if we alter the MERGE table, the underlying tables need to be altered accordingly, otherwise the MERGE table becomes non-functional.
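
If you have not dealt with MERGE before, here is the anatomy in plain SQL (the table names are ad hoc):

{noformat}
# The underlying table must exist first:
CREATE TABLE t1_child (a INT) ENGINE=MyISAM;
# The MERGE table points at it:
CREATE TABLE t1 (a INT) ENGINE=MRG_MYISAM UNION(t1_child) INSERT_METHOD=LAST;
INSERT INTO t1 VALUES (1);
# Any change of the MERGE table definition must be mirrored in the
# underlying table, otherwise t1 becomes non-functional:
ALTER TABLE t1_child ADD COLUMN b CHAR(8);
ALTER TABLE t1 ADD COLUMN b CHAR(8);
DROP TABLE t1, t1_child;
{noformat}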

            Start the same way as we started for other engines, by creating the overlay folder:

            {{mkdir -p ../storage/myisammrg/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisammrg/mysql-test/storage_engine/}}

We know that we'll need INSERT_METHOD and UNION in our table options; in other circumstances, they would have been added to {{$default_tbl_opts}}; but we cannot set a global UNION, because each test table will need a different underlying table in it, and since we will be modifying the creation procedure anyway, there is no point in adding INSERT_METHOD here, either.

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MRG_MYISAM;
             #
             ################################
             #
            {noformat}

What happens if we now run the 1st test as we did before?

            {{perl ./mtr --suite=storage_engine-myisammrg 1st}}

            {noformat}
             SHOW COLUMNS IN t1;
             INSERT INTO t1 VALUES (1,'a');
            +ERROR HY000: Table 't1' is read only
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_OPEN_AS_READONLY.
            +# INSERT INTO .. VALUES or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

That's because we don't have underlying tables under the MERGE table. We need to modify the table creation procedure.
First, we need to decide how to do it. There can be many ways; I will choose what I think is a simple one:
- before each test, I will create a special {{mrg}} schema which will contain the underlying tables, so I don't need to remember all the names when it's time to clean up;
- at the end of the test, I will drop the {{mrg}} schema, and thus get rid of all additional objects at once;
- whenever a new test table has to be created, I will create a MyISAM table with the same name in the {{mrg}} schema, and point my test table at it;
- whenever a test table has to be altered, I will also alter the MyISAM table with the same name in the {{mrg}} schema.

In order to achieve this, we need to override 3 files and modify our already created {{../storage/myisammrg/mysql-test/storage_engine/define_engine.inc}}. Let's start with the latter.

{{define_engine.inc}} is the include file which is executed before each test, so it's the place to put the logic which precedes a test.
At the end of {{../storage/myisammrg/mysql-test/storage_engine/define_engine.inc}} I will add the {{mrg}} schema creation:

            {noformat}
            @@ -40,6 +40,10 @@
             # Here you can place your custom MTR code which needs to be executed before each test,
             # e.g. creation of an additional schema or table, etc.
             # The cleanup part should be defined in cleanup_engine.inc
            +--disable_warnings
            +DROP DATABASE IF EXISTS mrg;
            +--enable_warnings
            +CREATE DATABASE mrg;
            {noformat}

Now, it's time to override the 3 files:

            {{cp suite/storage_engine/cleanup_engine.inc ../storage/myisammrg/mysql-test/storage_engine/}}
            {{cp suite/storage_engine/create_table.inc ../storage/myisammrg/mysql-test/storage_engine/}}
            {{cp suite/storage_engine/alter_table.inc ../storage/myisammrg/mysql-test/storage_engine/}}
             
            {{cleanup_engine.inc}} is the file which is executed after each test; so, in {{../storage/myisammrg/mysql-test/storage_engine/cleanup_engine.inc}} I will be dropping my {{mrg}} schema:

            {noformat}
            @@ -8,4 +8,9 @@
             # Here you can add whatever is needed to cleanup
             # in case your define_engine.inc created any artefacts,
             # e.g. an additional schema and/or tables.
            +--disable_query_log
            +--disable_warnings
            +DROP DATABASE IF EXISTS mrg;
            +--enable_warnings
            +--enable_query_log
            {noformat}

Now, the actual table creation.
Tests do not run {{CREATE TABLE}} / {{ALTER TABLE}} statements directly; they always call {{create_table.inc}} or {{alter_table.inc}}, respectively. So, if we edit these files properly, it will affect all tests at once -- the gain is worth the effort.

Below I will show the changes I have made; in fact, there are many ways to achieve the same goal, probably some more efficient. Be creative when the time comes.

            {noformat}
            --- suite/storage_engine/create_table.inc 2012-07-15 17:46:03.638461728 +0400
            +++ ../storage/myisammrg/mysql-test/storage_engine/create_table.inc 2012-07-15 22:08:29.324511647 +0400
            @@ -54,6 +54,15 @@
               --let $table_name = t1
             }
             
            +# Child statement is a statement that will create an underlying table.
            +# From this point, it will deviate from the main statement, that's why
            +# we start creating it here in parallel with the main one.
            +# For underlying tables, we will create a table in mrg schema, e.g.
            +# for table t1 the underlying table will be mrg.t1, etc.
            +# Since we will only create one child here, it should be enough. If we want more,
            +# we can always add a suffix, e.g. mrg.t1_child1, mrg.t1_child2, etc.
            +
            +--let $child_statement = $create_statement mrg.$table_name
             --let $create_statement = $create_statement $table_name
             
             if (!$create_definition)
            @@ -70,6 +79,9 @@
             if ($create_definition)
             {
               --let $create_statement = $create_statement ($create_definition)
            + # Table definition for the underlying table should be the same
            + # as for the MERGE table
            + --let $child_statement = $child_statement ($create_definition)
             }
             
             # If $default_engine is set, we will rely on the default storage engine
            @@ -78,6 +90,12 @@
             {
               --let $create_statement = $create_statement ENGINE=$storage_engine
             }
            +# Engine for an underlying table differs
            +--let $child_statement = $child_statement ENGINE=MyISAM
            +
            +# Save default table options, we will want to restore them later
            +--let $default_tbl_opts_saved = $default_tbl_opts
            +--let $default_tbl_opts = $default_tbl_opts UNION(mrg.$table_name) INSERT_METHOD=LAST
             
             # Default table options from define_engine.inc
             --let $create_statement = $create_statement $default_tbl_opts
            @@ -86,6 +104,7 @@
             if ($table_options)
             {
               --let $create_statement = $create_statement $table_options
            + --let $child_statement = $child_statement $table_options
             }
             
             # The difference between $extra_tbl_opts and $table_options
            @@ -98,16 +117,19 @@
             if ($extra_tbl_opts)
             {
               --let $create_statement = $create_statement $extra_tbl_opts
            + --let $child_statement = $child_statement $extra_tbl_opts
             }
             
             if ($as_select)
             {
               --let $create_statement = $create_statement AS $as_select
            + --let $child_statement = $child_statement AS $as_select
             }
             
             if ($partition_options)
             {
               --let $create_statement = $create_statement $partition_options
            + --let $child_statement = $child_statement $partition_options
             }
             
             # We now have the complete CREATE statement in $create_statement.
            @@ -120,6 +142,12 @@
             # Surround it by --disable_query_log/--enable_query_log
             # if you don't want it to appear in the result output.
             #####################
            +--disable_warnings
            +--disable_query_log
            +eval DROP TABLE IF EXISTS mrg.$table_name;
            +eval $child_statement;
            +--enable_query_log
            +--enable_warnings
             
             if ($disable_query_log)
             {
            @@ -166,6 +194,10 @@
             --let $temporary = 0
             --let $disable_query_log = 0
             
            +# Restore default table options now
            +--let $default_tbl_opts = $default_tbl_opts_saved
            +
            +
             # Restore the error codes of the main statement
             --let $mysql_errno = $my_errno
             --let $mysql_errname = $my_errname
            {noformat}

            We know we also need to modify {{alter_table.inc}}, but first it's interesting to see whether our changes so far actually work.

            {noformat}

            perl ./mtr --suite=storage_engine-myisammrg 1st

            ...

            storage_engine-myisammrg.1st [ pass ] 26
            {noformat}


            Great. Let's now modify {{../storage/myisammrg/mysql-test/storage_engine/alter_table.inc}}:

            {noformat}
            @@ -20,9 +20,12 @@
             # --let $alter_definition = ADD COLUMN b $char_col DEFAULT ''
             #
             
            +--let $child_alter_definition = $alter_definition
            +
             if ($rename_to)
             {
               --let $alter_definition = RENAME TO $rename_to
            + --let $child_alter_definition = RENAME TO mrg.$rename_to
             }
             
             if (!$alter_definition)
            @@ -43,6 +46,9 @@
             }
             
             --let $alter_statement = $alter_statement TABLE $table_name $alter_definition
            +# We don't want to do ONLINE on underlying tables, we are not testing MyISAM
            +--let $child_statement = ALTER TABLE mrg.$table_name $child_alter_definition
            +
             
             
             # We now have the complete ALTER statement in $alter_statement.
            @@ -75,6 +81,20 @@
             # Surround it by --disable_query_log/--enable_query_log
             # if you don't want it to appear in the result output.
             #####################
            +--disable_query_log
            +--disable_warnings
            +
            +# We will only try to alter the underlying table if the main alter was successful
            +if (!$my_errno)
            +{
            + if ($rename_to)
            + {
            + eval ALTER TABLE $rename_to UNION(mrg.$rename_to);
            + }
            + eval $child_statement;
            +}
            +--enable_warnings
            +--enable_query_log
             
             # Unset the parameters, we don't want them to be accidentally reused later
             --let $alter_definition =
            {noformat}

            {quote}
            Note that in both create_table and alter_table we run our additional code with {{disable_query_log}} / {{disable_warnings}}. It's a tradeoff: this way we reduce the number of mismatches (because our additional code does not produce any output), but it will also make investigation more difficult, should a problem start somewhere in this code. It's up to the person who maintains the engine suite to decide what's best.

            Example:
            We have a MERGE table which points to an underlying table containing non-unique values. Normally, the test assumes that the table under test contains these values; but in our case they actually live in the underlying MyISAM table.
            Then, the test performs {{ALTER TABLE .. ADD UNIQUE INDEX ...}} and expects it to fail.
            In our case, the statement on the MERGE table will succeed, but the statement on the underlying table will fail quietly; if the test tries to do something else afterwards, it will reveal that the MERGE table and the underlying table have diverged, but it won't be clear from the test output why it happened.
            {quote}
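
            To make the tradeoff more tangible, here is a minimal sketch of the scenario described in the note above (just an illustration, not a fragment of the suite):

            {noformat}
            CREATE TABLE mrg.t1 (a INT NOT NULL) ENGINE=MyISAM;
            INSERT INTO mrg.t1 VALUES (1),(1);
            CREATE TABLE t1 (a INT NOT NULL) ENGINE=MRG_MYISAM UNION(mrg.t1) INSERT_METHOD=LAST;

            # The test expects this to fail on the duplicate values, but on the
            # MERGE table itself it succeeds -- only the definition changes:
            ALTER TABLE t1 ADD UNIQUE INDEX (a);

            # Our hidden child statement fails with a duplicate-key error,
            # and nothing about it appears in the test output:
            ALTER TABLE mrg.t1 ADD UNIQUE INDEX (a);

            # From now on the parent and the child are defined differently,
            # and further access to t1 fails with ER_WRONG_MRG_TABLE:
            SELECT * FROM t1;
            {noformat}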

            Now let's try to run the suite:

            {noformat}
            perl ./mtr --suite=storage_engine-myisammrg --force --max-test-fail=0

            Spent 34.141 of 80 seconds executing testcases

            Completed: Failed 41/98 tests, 58.16% were successful.

            {noformat}
            Not great, but not that bad either, all things considered. Let's look at the results.

            *alter_table* and some other tests produce the following mismatch on SHOW CREATE TABLE:

            {noformat}
            @@ -127,7 +127,7 @@
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL,
               `c` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8 INSERT_METHOD=LAST UNION=(`mrg`.`t1`)
             ALTER TABLE t1 DEFAULT CHARACTER SET = latin1 COLLATE latin1_general_ci;
            {noformat}

            Quite as expected, since we have additional options on our tables; this requires adding an rdiff.
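
            The command follows the same pattern as before, e.g. for alter_table:

            {{diff -u suite/storage_engine/alter_table.result suite/storage_engine/alter_table.reject > ../storage/myisammrg/mysql-test/storage_engine/alter_table.rdiff}}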


            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 MODIFY b <INT_COLUMN> DEFAULT 5;
            -ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement succeeded (expected results: ER_CANT_DO_ONLINE)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command succeeded unexpectedly.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            This is all right, I guess. It's good that online ALTER can be done, right?
            But this is bad:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            -ERROR HY000: Can't execute the given 'ALTER' command as online
            +ERROR HY000: Unable to open underlying table which is differently defined or of non-MyISAM type or doesn't exist
            +# ERROR: Statement ended with errno 1168, errname ER_WRONG_MRG_TABLE (expected results: ER_CANT_DO_ONLINE)
             ALTER ONLINE TABLE t1 COMMENT 'new comment';
            {noformat}

            Looking earlier in the test output, we find out that we are working with temporary tables here. And there is bug MySQL:57657 which says that altering a temporary MERGE table is broken in 5.5. Whether to add an rdiff or disable the test is a judgment call. I think I will disable it, after all, although it's a bit sad. You can choose to be smarter and, since you have your own {{alter_table.inc}} anyway, add some logic there checking whether a table is temporary or not.
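
            So, the test goes to {{../storage/myisammrg/mysql-test/storage_engine/disabled.def}} (the comment wording is arbitrary):

            {noformat}
            alter_table_online : MySQL:57657 (Altering a temporary MERGE table is broken in 5.5)
            {noformat}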


            *create_table*:

            {noformat}
             CREATE TABLE t1 ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> AS SELECT 1 UNION SELECT 2;
            -SHOW CREATE TABLE t1;
            -Table Create Table
            -t1 CREATE TABLE `t1` (
            - `1` bigint(20) NOT NULL DEFAULT '0'
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            -SELECT * FROM t1;
            -1
            -1
            -2
            -DROP TABLE t1;
            +ERROR HY000: 'test.t1' is not BASE TABLE
            +# ERROR: Statement ended with errno 1347, errname ER_WRONG_OBJECT (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_WRONG_OBJECT.
            +# CREATE TABLE .. AS SELECT or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            {{AS SELECT}} doesn't work with MERGE tables; we didn't take it into account in our simple changes to {{create_table.inc}}, because {{AS SELECT}} is only used a few times in the suite, so it seems easier to just accept this difference here. Although in general, it's up to the person who modifies the creation procedure.
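
            Accepting the difference, as usual, means creating an rdiff:

            {{diff -u suite/storage_engine/create_table.result suite/storage_engine/create_table.reject > ../storage/myisammrg/mysql-test/storage_engine/create_table.rdiff}}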


            *lock*:

            The test is quite messed up, because the merge children are locked through the parent tables, which the test of course does not expect. E.g. if it locks two tables and then drops them, it expects that nothing is locked any longer, which is not true for MERGE tables. Adding an rdiff; in any case, locking is very specific to MERGE tables and needs to be tested as an engine feature rather than as basic functionality.

            The rest are the usual mismatches due to unsupported functionality and the like.


            The MERGE engine doesn't support partitions or transactions, but again, let's see what happens, since it's nearly free:

            {{mkdir ../storage/myisammrg/mysql-test/storage_engine/parts}}
            {{mkdir ../storage/myisammrg/mysql-test/storage_engine/trx}}

            {noformat}
            perl ./mtr --suite=storage_engine/*-myisammrg --force --max-test-fail=0
            {noformat}

            All tests failed, of course.

            For all partitioned tables:

            {noformat}
            +ERROR HY000: Engine cannot be used in partitioned tables
            +# ERROR: Statement ended with errno 1572, errname ER_PARTITION_MERGE_ERROR (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ CREATE TABLE t1 (a INT(11) /*!*/ /*Custom column options*/) ENGINE=MRG_MYISAM /*!*/ /*Custom table options*/ UNION(mrg.t1) INSERT_METHOD=LAST PARTITION BY HASH(a) PARTITIONS 2 ]
            +# The statement|command finished with ER_PARTITION_MERGE_ERROR.
            +# Partitions or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Transactional tests run somehow, but of course the diffs are as extensive as they were for MyISAM. All this is expected, and can be solved either by removing the newly created {{trx}} and {{parts}} subdirs, or by adding rdiffs. It seems reasonable to remove {{parts}} and keep {{trx}}, but with the paranoid assumption that one day an attempt to create a partitioned MERGE table will crash the server, I will keep {{parts}} too; all together they take less than a second anyway (rejecting table creation and failing everything with "table doesn't exist" is fast). So, I will add rdiffs for each file.
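
            The commands follow the usual pattern, only with the subsuite folders added, e.g.:

            {noformat}
            diff -u suite/storage_engine/parts/<testname>.result suite/storage_engine/parts/<testname>.reject > ../storage/myisammrg/mysql-test/storage_engine/parts/<testname>.rdiff
            diff -u suite/storage_engine/trx/<testname>.result suite/storage_engine/trx/<testname>.reject > ../storage/myisammrg/mysql-test/storage_engine/trx/<testname>.rdiff
            {noformat}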

            Running all at once now:

            {noformat}
            perl ./mtr --suite=storage_engine-myisammrg,storage_engine/*-myisammrg

            Spent 46.994 of 70 seconds executing testcases

            Completed: All 119 tests were successful.
            {noformat}
            elenst Elena Stepanova made changes -
            Description [Goal|#goal]
            [Problems to solve|#problems]
            [... Problem 1: Varying result files|#problem1]
            [...... Solution|#solution1]
            [... Problem 2: Unsupported features|#problem2]
            [...... Solution|#solution2]
            [... Problem 3: Different primitives|#problem3]
            [...... Solution|#solution3]
            [... Filed bugs|#bugs]
            [Tuning|#tuning]
            [... Assumptions|#assumptions]
            [... Common tuning steps|#common]
            [... Examples|#examples]
            [...... MyISAM|#myisam]
            [...... InnoDB plugin|#innodb]
            [...... MERGE|#merge]

            {anchor:goal}

            h2. Goal

            The goal of this task is to create a set of tests which could be used for acceptance/conformance testing of a storage engine.
            The suite it not supposed to provide exhaustive engine testing, and certainly cannot test non-standard engine-specific features (due to its very nature of being agnostic to the engine under test), but rather perform a relatively quick evaluation of standard functionality expected from a storage engine.

            {anchor:problems}

            h2. Problems to solve

            Existing MTR tests are not very suitable for running on different storage engines.

            {anchor:problem1}

            h3. Problem 1: Varying result files

            Traditional MTR/mysqltest have very strict requirements in regard to test output. Since different storage engines are likely to produce a slightly different output (even if the difference is innocent), the tests start failing, which causes false positives.

            Example:

            {noformat}
            OPTIMIZE TABLE t1;

            # One engine might in certain situations say

            Table Op Msg_type Msg_text
            test.t1 optimize status Table is already up to date

            # Another always says

            Table Op Msg_type Msg_text
            test.t1 optimize status OK
            {noformat}

            Neither is wrong, and yet if we have one variant in the result file, for another engine the test will fail.

            Till recently the only solution was to copy the whole suite somewhere and recreate result files, thus introducing usually unwanted code duplication.

            {anchor:solution1}
            h4. Solution

            To solve this problem, we will use functionality developed in scope of MDEV-30.
            With this task, the original test results can be adapted to the storage engine by creating diff files (basically, patches). These patches are stored in the engine folder rather than in the MTR suite folder, so no changes in the original test suite is performed.

            {anchor:problem2}
            h3. Problem 2: Unsupported features

            Most of tests use combinations of various features, even if not all of them are strictly necessary in scope of this exact test. It means that if an engine does not support at least one feature, either by design or because it has not been implemented yet, the whole test becomes inapplicable. For example, if a traditional MTR test for ALTER TABLE uses at least one key, and keys are not supported, the test will fail -- not just produce a result mismatch which we could patch, but actually fail in the middle of execution, -- so we will have to stop running it, and ALTER TABLE will not be tested at all; and the same will happen to any test which uses keys, even if it's used briefly for an unimportant part of the test.

            This problem is even bigger than the first one, because not only does it currently require copying the test suite and maintaining a separate set of *result* files, but also modifying the test files themselves; and often modifications will be extensive enough to rule out even the possibility of future merges with the original test suite.

            {anchor:solution2}
            h4. Solution

            We will solve this problem by allowing all tests to run through intermediate failures to the end (unless the server crashes or a syntax error in the test itself occurs). Thus, the tests will also produce mismatches, and the engine maintainer will be able to decide whether the test is really inapplicable and needs to be disabled, or the partial failures are acceptable and can be stored in the result diff file.

            Example:

            We are testing that a table option is supported by the engine (merely accepted and stored); let's say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER, so we add an ALTER TABLE statement where the option is modified.

            Now, if we try to run such a test with the FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for the FEDERATED engine can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including possible reasons why it could happen. Also, if a failure happened on table creation, or at some other key moment, so that a part of the test becomes useless (if a table was not created, there is no point in trying to alter it, etc.), the test will skip that part of the flow and proceed to the next part. But this does not always happen; the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see the notes about configuration below), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine the information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which make even the simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for the CSV engine, because CSV does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for the FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.
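
            For illustration, here is roughly what happens with CSV (a sketch; the exact error text may vary between versions):

            {noformat}
            CREATE TABLE t (i INT) ENGINE=CSV;
            ERROR 42000: The storage engine for the table doesn't support nullable columns

            CREATE TABLE t (i INT NOT NULL) ENGINE=CSV;
            Query OK, 0 rows affected
            {noformat}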

            In an even more complicated scenario, additional actions need to be performed in order to create a table properly; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior (a small sketch follows at the end of this section):
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever is supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be adjusted in the include files which the tests use to perform these actions (CREATE and ALTER TABLE are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own list of disabled tests, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary tests can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.
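
            As a rough illustration, here is a fragment of what a tuned {{define_engine.inc}} might contain for a CSV-like engine ({{$ENGINE}} is a variable the suite really uses, as the examples below show; the column-options variable is named here only for illustration):

            {noformat}
            let $ENGINE = CSV;
            # Hypothetical: require NOT NULL on every column,
            # since the engine does not support nullable columns
            let $default_col_opts = /*!NOT NULL*/;
            {noformat}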

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (as a whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produces the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set up on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in the {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in the {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}).

            {anchor:common}
            h3. Common tuning steps

            1. Create the {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least, set the ENGINE variable; check the other variables you find there, and modify them as needed).

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{diff -u <basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests require specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of the storage_engine suite, proceed to the subsuites. If you are interested in running partitioning tests, create the folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            {anchor:examples}
            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. The exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier the task.


            {anchor:myisam}
            h3. Easy level: MyISAM

            Let's see how to make the suite work for a relatively standard engine, one whose behavior is similar to the main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These are all defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail; this is expected. We need to see the results
            so we can decide whether to accept the difference, disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see everything at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast; for other engines it might take longer._

            7 failing tests on the first iteration is extremely good; of course it won't look that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit is that if your engine does not support everything, the tests are still usable; you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will let you use the remaining logic, but will also produce a bunch of mismatches due to failing statements. The internal logic in the tests does its best to keep things cleaner, but still, the noise is expected.

            So, with this knowledge, let's find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by searching for {{' fail '}} (without the quote marks).
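
            For example, assuming you saved the output into a (hypothetical) {{/tmp/se-myisam.log}}:

            {{grep ' fail ' /tmp/se-myisam.log}}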

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces in MyISAM. But let's look at the output.

            The mismatch shows that some expected output is missing, and that the test instead produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping the tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, and that nothing has unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failing test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

            Since MyISAM doesn't support HASH type, it is fine. Adding a diff.

            Next failing test: *show_engine*

            This test is optimistic and expects that the SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects it to contain a row. For MyISAM this is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

            MyISAM does not use the table option INSERT_METHOD; it's a MERGE engine thing. But table creation does not normally fail on unsupported options -- they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

            SHOW CREATE TABLE does not show the option. This is fine, adding an rdiff. Exactly the same happens with UNION.

            That covers all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

            MyISAM definitely supports partitioning, so let's try that first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

            No additional parameters or tricks should be needed for MyISAM to run the partition tests, since the suite already sets the {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

            _Note: For now, it is a very basic suite; it only contains a few tests and literally takes several seconds_

            All good, tests passed, nothing needs to be done here.

            Finally, there is the transactions subsuite (which also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA or snapshots, so the results won't be as pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

            The results are a mess, as expected. All differences start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

            The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to the maintainer whether to add diffs for all tests or just ignore the whole thing, since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

            That's all. Now we just need to keep it free of new failures.

            {anchor:innodb}

            h3. Intermediate level: InnoDB plugin

            A little bit more work is required to create an overlay for InnoDB. Let's do it for the InnoDB plugin (which is not loaded by default as of 5.5.25, but is built there).

            Again, start with creating the overlay directory:

            {{mkdir -p ../storage/innobase/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/innobase/mysql-test/storage_engine/}}
            Edit {{../storage/innobase/mysql-test/storage_engine/define_engine.inc}}

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = InnoDB;
             #
             ################################
             #
            {noformat}

            As with MyISAM, all defaults are fine for InnoDB. But now we also need to set server startup options to run the server with the InnoDB plugin.

            Create the file {{../storage/innobase/mysql-test/storage_engine/suite.opt}}:

            {noformat}
            --ignore-builtin-innodb
            --plugin-load=ha_innodb
            --innodb
            {noformat}

            This should be enough for the base suite. Let's run the 1st test now:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase 1st

            ...

            storage_engine-innobase.1st [ pass ] 852
            {noformat}

            And then the whole suite:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase --max-test-fail=0 --force

            ...

            Spent 153.712 of 402 seconds executing testcases

            Completed: Failed 28/99 tests, 71.72% were successful.

            Failing test(s): storage_engine-innobase.alter_table_online storage_engine-innobase.alter_tablespace storage_engine-innobase.autoinc_secondary storage_engine-innobase.autoinc_vars storage_engine-innobase.cache_index storage_engine-innobase.checksum_table_live storage_engine-innobase.delete_low_prio storage_engine-innobase.fulltext_search storage_engine-innobase.index_enable_disable storage_engine-innobase.index_type_hash storage_engine-innobase.insert_delayed storage_engine-innobase.insert_high_prio storage_engine-innobase.insert_low_prio storage_engine-innobase.lock_concurrent storage_engine-innobase.optimize_table storage_engine-innobase.repair_table storage_engine-innobase.select_high_prio storage_engine-innobase.tbl_opt_ai storage_engine-innobase.tbl_opt_data_index_dir storage_engine-innobase.tbl_opt_insert_method storage_engine-innobase.tbl_opt_key_block_size storage_engine-innobase.tbl_opt_row_format storage_engine-innobase.tbl_opt_union storage_engine-innobase.type_char_indexes storage_engine-innobase.type_float_indexes storage_engine-innobase.type_spatial_indexes storage_engine-innobase.update_low_prio storage_engine-innobase.vcol
            {noformat}

            Not as great as it was with MyISAM. Let's see the details.

            Some mismatches are either identical or similar to those in MyISAM, and are caused by unsupported functionality (e.g. fulltext search, hash indexes, optimize_table, etc.). I won't go through them here; I'll just add rdiff files.

            But some deserve attention.

            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            +ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement ended with errno 1915, errname ER_CANT_DO_ONLINE (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_CANT_DO_ONLINE.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            It's hard to say whether all engines that support ALTER ONLINE should support it for the same set of changes; most likely not, and what we see here is just an InnoDB limitation. On the other hand, we know that MariaDB supports ALTER ONLINE, namely renaming a column (see http://kb.askmonty.org/en/alter-table), and InnoDB supports at least some ALTER ONLINE operations (e.g. CHANGE COLUMN i i INT DEFAULT 1 works); so I think it's worth filing a low-priority bug, at least to make sure it works as expected: https://mariadb.atlassian.net/browse/MDEV-397

            For now, I will add the test to the {{../storage/innobase/mysql-test/storage_engine/disabled.def}} list (the file needs to be created, since it's the first test we disable for this engine):

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            {noformat}

            If later it turns out to be expected behavior or a limitation, I will remove the line from {{disabled.def}} and add an rdiff file instead.


            *alter_tablespace*:

            {noformat}
            +# ERROR: Statement ended with errno 1030, errname ER_GET_ERRNO (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_GET_ERRNO.
            +# Tablespace operations or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Now, that seems unexpected. But then again, tablespace operations are only applicable when InnoDB works in {{innodb-file-per-table}} mode, which we did not set in our options. Unless we want to use it for all tests, let's set it for this one only:

            Create {{../storage/innobase/mysql-test/storage_engine/alter_tablespace.opt}}:
            {noformat}
            --innodb-file-per-table=1
            {noformat}

            *autoinc_vars*:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
             SELECT LAST_INSERT_ID();
             LAST_INSERT_ID()
            -850
            +1100
             SELECT * FROM t1;
             a b
             1 a
            +1100 g
            +1150 h
            +1200 i
             2 b
             200 d
             3 c
             500 e
             800 f
            -850 g
            -900 h
            -950 i
             DROP TABLE t1;
             SET auto_increment_increment = 500;
             SET auto_increment_offset = 300;
            {noformat}

            This is weird. Now the real investigation starts -- there is a good reason to look at the reject file to see the continuous flow:

            {noformat}
            ...

            SET auto_increment_increment = 300;
            INSERT INTO t1 (a,b) VALUES (NULL,'d'),(NULL,'e'),(NULL,'f');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            200
            SELECT * FROM t1;
            a b
            1 a
            2 b
            200 d
            3 c
            500 e
            800 f
            SET auto_increment_increment = 50;
            INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            1100
            SELECT * FROM t1;
            a b
            1 a
            1100 g
            1150 h
            1200 i
            2 b
            200 d
            3 c
            500 e
            800 f
            DROP TABLE t1;
            {noformat}

            The first insert works all right with {{auto_increment_increment = 300}}. Then we change it to {{50}}, but the following insert still uses {{300}} for the first value it inserts, and only then switches to {{50}}. Thus we get {{1100}} instead of {{850}}, and the following values also differ. This smells like a bug, although not a very serious one. Since a brief check shows it's also reproducible on Oracle MySQL, we will file it on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65225 (I actually did this some time ago, when I tried to run the storage engine suite for InnoDB for the first time, which is why the bug is not brand new).

            And we will also add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:
            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            {noformat}


            *delete_low_prio*, *insert_high_prio*, *insert_low_prio*, *select_high_prio*, *update_low_prio*:

            They all have similar fragments in their output:

            {noformat}
            +# Timeout in include/wait_show_condition.inc for = 'DELETE FROM t1'
            +# show_statement : SHOW PROCESSLIST
            +# field : Info
            +# condition : = 'DELETE FROM t1'
            +# max_run_time : 3
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with timeout in wait_show_condition.inc.
            +# DELETE or table locking or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            As the documentation says, the high|low priority functionality (e.g. DELETE LOW_PRIORITY) only works with table-level locking, and the whole test is based on this assumption. InnoDB uses row-level locking, so the entire flow does not work quite as expected. We could still add rdiff files, but, unlike most other tests, these take relatively long (probably over 10 seconds each). Besides, since locking works entirely differently here, the test results are likely to be unstable, as it will be all about timing. So, it makes more sense to disable the tests by adding them to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            {noformat}


            *tbl_opt_ai*:

            {noformat}
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             ALTER TABLE t1 AUTO_INCREMENT=100;
             SHOW CREATE TABLE t1;
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=100 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             DROP TABLE t1;
            {noformat}

We already looked at ignored table options in MyISAM tests, but this one is different. Why would AUTO_INCREMENT be ignored, when InnoDB supports it perfectly well? (A brief manual check confirms that it does.) Some digging shows, however, that in our case it is _truly_ ignored. Since it is also reproducible with Oracle MySQL, we file a bug on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65901
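
The brief manual check can be as simple as this (a hypothetical session, not part of the suite):

{noformat}
CREATE TABLE t1 (a INT AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB AUTO_INCREMENT=10;
INSERT INTO t1 VALUES (NULL);
SELECT a FROM t1;
# returns 10, so the option as such is honored
{noformat}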

            Adding the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            tbl_opt_ai : MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)
            {noformat}


            *tbl_opt_key_block_size*, *tbl_opt_row_format*:

            {noformat}
             CREATE TABLE t1 (a <INT_COLUMN>, b <CHAR_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> KEY_BLOCK_SIZE=8;
            +Warnings:
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_format > Antelope.
            +Warning 1478 InnoDB: ignoring KEY_BLOCK_SIZE=8.
            {noformat}

We do the same as we did for alter_tablespace, only now adding both {{innodb_file_per_table}} and {{innodb_file_format}}:

            {{../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt}}:

            {noformat}
            --innodb-file-per-table=1
            --innodb-file-format=Barracuda
            {noformat}
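
*tbl_opt_row_format* hits the same class of warnings (ROW_FORMAT=COMPRESSED also requires {{innodb_file_per_table}} and Barracuda), so presumably an identical {{../storage/innobase/mysql-test/storage_engine/tbl_opt_row_format.opt}} is needed:

{noformat}
--innodb-file-per-table=1
--innodb-file-format=Barracuda
{noformat}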


            *type_char_indexes*:

            {noformat}
             SET SESSION optimizer_switch = 'engine_condition_pushdown=on';
             EXPLAIN SELECT * FROM t1 WHERE c > 'a';
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range c_v c_v # # # Using index condition
            +# # # range c_v c_v # # # Using where
             SELECT * FROM t1 WHERE c > 'a';
             c c20 v16 v128
             b char3 varchar1a varchar1b
            @@ -135,7 +135,7 @@
             r3a
             EXPLAIN SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range # v16 # # # #
            +# # # ALL # NULL # # # #
             SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             c c20 v16 v128
             a char1 varchar1a varchar1b
            {noformat}

_Note: For now we assume that within one engine, statistics are stable enough to produce consistent results on each test run, which is why we show certain fields in EXPLAIN and let you decide whether you are satisfied with them. If further experience shows that these tests routinely produce different results even for the same engine, and that more often than not it's valid behavior, we might change this._

For now, I will consider these results acceptable, and will add an rdiff.
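
The rdiff is created the same way as before:

{{diff -u suite/storage_engine/type_char_indexes.result suite/storage_engine/type_char_indexes.reject > ../storage/innobase/mysql-test/storage_engine/type_char_indexes.rdiff}}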

As I said before, the rest of the failures do not deserve verbose analysis -- they are pretty straightforward, and I just added an rdiff for each of them.


            Now working with {{storage_engine/parts}} and {{storage_engine/trx}}.

            {{mkdir ../storage/innobase/mysql-test/storage_engine/trx}}
            {{mkdir ../storage/innobase/mysql-test/storage_engine/parts}}

            Copy your previously created {{suite.opt}} file to each of the subfolders: as far as MTR is concerned, they are separate suites.

            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/trx/}}
            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/parts/}}

Maybe you'll want to add something else to those options. I, for one, will add {{\-\-innodb-lock-wait-timeout=1}} to {{../storage/innobase/mysql-test/storage_engine/trx/suite.opt}}. Probably it should have been done for the other suites, too -- but it's never too late, if any timeout issues are observed.
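
For example, one way to append the option (run from {{<basedir>/mysql-test}}):

{{echo "\-\-innodb-lock-wait-timeout=1" >> ../storage/innobase/mysql-test/storage_engine/trx/suite.opt}}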

            When you add rdiff files for subsuites, don't forget to put them in the subfolders:

            {{diff -u suite/storage_engine/parts/checksum_table.result suite/storage_engine/parts/checksum_table.reject > ../storage/innobase/mysql-test/storage_engine/parts/checksum_table.rdiff}}
            etc.

Again, most failures are mismatches due to different output or unsupported functionality.
_Note: repair_table test results are also likely to differ, even if repair is supported, since the test tries to corrupt existing table files, which are different for each engine._

            *trx/cons_snapshot_serializable*:

            {noformat}
             # If consistent read works on this isolation level (SERIALIZABLE), the following SELECT should not return the value we inserted (1)
             SELECT * FROM t1;
             a
            +1
             COMMIT;
            {noformat}

It is a bug. Filing it as http://bugs.mysql.com/bug.php?id=65146 and adding it to disabled.def (don't forget that it should now be under the trx folder):
            {{../storage/innobase/mysql-test/storage_engine/trx/disabled.def}}:

            {noformat}
            cons_snapshot_serializable : MySQL:65146 (CONSISTENT SNAPSHOT does not work with SERIALIZABLE)
            {noformat}

            Now, running the whole set:

            {noformat}
            perl ./mtr --suite=storage_engine-innobase,storage_engine/*-innobase

            ...

            Spent 300.715 of 364 seconds executing testcases

            Completed: All 111 tests were successful.
            {noformat}

Much slower than for MyISAM, but that's how it usually is.

            {anchor:merge}

            h3. Advanced level: MERGE


Yet more tricks are required to tune the same suite for the MERGE engine, because now we also have to think about how a table is created.
We can't just create a plain MERGE table and work with it: it needs at least one underlying table; and if we alter the MERGE table, the underlying tables need to be altered accordingly, otherwise the MERGE table becomes non-functional.
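
For example, a minimal functional MERGE setup looks like this (table names are arbitrary):

{noformat}
CREATE TABLE t1_child (a INT, b CHAR(8)) ENGINE=MyISAM;
CREATE TABLE t1 (a INT, b CHAR(8)) ENGINE=MRG_MYISAM UNION(t1_child) INSERT_METHOD=LAST;
# Any structural change must be applied to both tables,
# otherwise t1 becomes unusable:
ALTER TABLE t1 ADD COLUMN c INT;
ALTER TABLE t1_child ADD COLUMN c INT;
{noformat}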

            Start the same way as we started for other engines, by creating the overlay folder:

            {{mkdir -p ../storage/myisammrg/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisammrg/mysql-test/storage_engine/}}

We know that we'll need INSERT_METHOD and UNION in our table options; under other circumstances, they would have been added to {{$default_tbl_opts}}; but we cannot set a global UNION, because each table will point to different underlying tables, and since we will be modifying the creation procedure anyway, there is no point in adding INSERT_METHOD here, either.

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MRG_MYISAM;
             #
             ################################
             #
            {noformat}

What happens if we now run the 1st test, as we did before?

            {{perl ./mtr --suite=storage_engine-myisammrg 1st}}

            {noformat}
             SHOW COLUMNS IN t1;
             INSERT INTO t1 VALUES (1,'a');
            +ERROR HY000: Table 't1' is read only
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_OPEN_AS_READONLY.
            +# INSERT INTO .. VALUES or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

That's because we don't have underlying tables under the MERGE table. We need to modify the table creation procedure.
First, we need to decide how to do it. There are many possible ways; I will choose one that seems simple to me:
- before each test, I will create a special {{mrg}} schema, which will contain the underlying tables, so I don't need to remember all their names when it's time to clean up;
- at the end of the test, I will drop the {{mrg}} schema, and thus get rid of all additional objects at once;
- whenever a new test table has to be created, I will create a MyISAM table with the same name in the {{mrg}} schema, and point my test table at it;
- whenever a test table has to be altered, I will also alter the MyISAM table with the same name in the {{mrg}} schema.
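
In SQL terms, the per-test lifecycle this plan produces for a test table {{t1}} looks roughly like this (a sketch; the actual statements are assembled by the include files modified below, and {{<columns>}} / {{<definition>}} stand for whatever the test requests):

{noformat}
# Before the test (define_engine.inc):
CREATE DATABASE mrg;
# Whenever the test creates a table t1 (create_table.inc):
CREATE TABLE mrg.t1 (<columns>) ENGINE=MyISAM;
CREATE TABLE t1 (<columns>) ENGINE=MRG_MYISAM UNION(mrg.t1) INSERT_METHOD=LAST;
# Whenever the test alters t1 (alter_table.inc):
ALTER TABLE t1 <definition>;
ALTER TABLE mrg.t1 <definition>;
# After the test (cleanup_engine.inc):
DROP DATABASE mrg;
{noformat}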

In order to achieve this, we need to override 3 files and modify the already created {{../storage/myisammrg/mysql-test/storage_engine/define_engine.inc}}. Let's start with the latter.

{{define_engine.inc}} is the include file which is executed before each test; so, it's the place for logic which must precede a test.
At the end of {{../storage/myisammrg/mysql-test/storage_engine/define_engine.inc}} I will add the {{mrg}} schema creation:

            {noformat}
            @@ -40,6 +40,10 @@
             # Here you can place your custom MTR code which needs to be executed before each test,
             # e.g. creation of an additional schema or table, etc.
             # The cleanup part should be defined in cleanup_engine.inc
            +--disable_warnings
            +DROP DATABASE IF EXISTS mrg;
            +--enable_warnings
            +CREATE DATABASE mrg;
            {noformat}

Now, on to the 3 files we need to override:

            {{cp suite/storage_engine/cleanup_engine.inc ../storage/myisammrg/mysql-test/storage_engine/}}
            {{cp suite/storage_engine/create_table.inc ../storage/myisammrg/mysql-test/storage_engine/}}
            {{cp suite/storage_engine/alter_table.inc ../storage/myisammrg/mysql-test/storage_engine/}}
             
            {{cleanup_engine.inc}} is the file which is executed after each test; so, in {{../storage/myisammrg/mysql-test/storage_engine/cleanup_engine.inc}} I will be dropping my {{mrg}} schema:

            {noformat}
            @@ -8,4 +8,9 @@
             # Here you can add whatever is needed to cleanup
             # in case your define_engine.inc created any artefacts,
             # e.g. an additional schema and/or tables.
            +--disable_query_log
            +--disable_warnings
            +DROP DATABASE IF EXISTS mrg;
            +--enable_warnings
            +--enable_query_log
            {noformat}

            Now, the actual table creation.
Tests do not run {{CREATE TABLE}} / {{ALTER TABLE}} statements directly; they always call {{create_table.inc}} or {{alter_table.inc}}, respectively. So, if we edit these files properly, it will affect all tests at once -- the gain is worth spending some effort.

Below I will show the changes I made; in fact, there are many ways to achieve the same goal, some of them probably more efficient. Be creative when the time comes.

            {noformat}
            --- suite/storage_engine/create_table.inc 2012-07-15 17:46:03.638461728 +0400
            +++ ../storage/myisammrg/mysql-test/storage_engine/create_table.inc 2012-07-15 22:08:29.324511647 +0400
            @@ -54,6 +54,15 @@
               --let $table_name = t1
             }
             
            +# Child statement is a statement that will create an underlying table.
            +# From this point, it will deviate from the main statement, that's why
            +# we start creating it here in parallel with the main one.
            +# For underlying tables, we will create a table in mrg schema, e.g.
            +# for table t1 the underlying table will be mrg.t1, etc.
            +# Since we will only create one child here, it should be enough. If we want more,
            +# we can always add a suffix, e.g. mrg.t1_child1, mrg.t1_child2, etc.
            +
            +--let $child_statement = $create_statement mrg.$table_name
             --let $create_statement = $create_statement $table_name
             
             if (!$create_definition)
            @@ -70,6 +79,9 @@
             if ($create_definition)
             {
               --let $create_statement = $create_statement ($create_definition)
            + # Table definition for the underlying table should be the same
            + # as for the MERGE table
            + --let $child_statement = $child_statement ($create_definition)
             }
             
             # If $default_engine is set, we will rely on the default storage engine
            @@ -78,6 +90,12 @@
             {
               --let $create_statement = $create_statement ENGINE=$storage_engine
             }
            +# Engine for an underlying table differs
            +--let $child_statement = $child_statement ENGINE=MyISAM
            +
            +# Save default table options, we will want to restore them later
            +--let $default_tbl_opts_saved = $default_tbl_opts
            +--let $default_tbl_opts = $default_tbl_opts UNION(mrg.$table_name) INSERT_METHOD=LAST
             
             # Default table options from define_engine.inc
             --let $create_statement = $create_statement $default_tbl_opts
            @@ -86,6 +104,7 @@
             if ($table_options)
             {
               --let $create_statement = $create_statement $table_options
            + --let $child_statement = $child_statement $table_options
             }
             
             # The difference between $extra_tbl_opts and $table_options
            @@ -98,16 +117,19 @@
             if ($extra_tbl_opts)
             {
               --let $create_statement = $create_statement $extra_tbl_opts
            + --let $child_statement = $child_statement $extra_tbl_opts
             }
             
             if ($as_select)
             {
               --let $create_statement = $create_statement AS $as_select
            + --let $child_statement = $child_statement AS $as_select
             }
             
             if ($partition_options)
             {
               --let $create_statement = $create_statement $partition_options
            + --let $child_statement = $child_statement $partition_options
             }
             
             # We now have the complete CREATE statement in $create_statement.
            @@ -120,6 +142,12 @@
             # Surround it by --disable_query_log/--enable_query_log
             # if you don't want it to appear in the result output.
             #####################
            +--disable_warnings
            +--disable_query_log
            +eval DROP TABLE IF EXISTS mrg.$table_name;
            +eval $child_statement;
            +--enable_query_log
            +--enable_warnings
             
             if ($disable_query_log)
             {
            @@ -166,6 +194,10 @@
             --let $temporary = 0
             --let $disable_query_log = 0
             
            +# Restore default table options now
            +--let $default_tbl_opts = $default_tbl_opts_saved
            +
            +
             # Restore the error codes of the main statement
             --let $mysql_errno = $my_errno
             --let $mysql_errname = $my_errname
            {noformat}
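
To make the effect concrete: for a test that creates {{t1 (a INT, b CHAR(8))}}, the modified include file now executes approximately the following (a sketch, ignoring custom column/table options):

{noformat}
DROP TABLE IF EXISTS mrg.t1;
CREATE TABLE mrg.t1 (a INT, b CHAR(8)) ENGINE=MyISAM;
CREATE TABLE t1 (a INT, b CHAR(8)) ENGINE=MRG_MYISAM UNION(mrg.t1) INSERT_METHOD=LAST;
{noformat}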

We know we also need to modify {{alter_table.inc}}, but first it's interesting to see whether the changes we have made so far actually work.

            {noformat}

            perl ./mtr --suite=storage_engine-myisammrg 1st

            ...

            storage_engine-myisammrg.1st [ pass ] 26
            {noformat}


Great. Let's now modify {{../storage/myisammrg/mysql-test/storage_engine/alter_table.inc}}:

            {noformat}
            @@ -20,9 +20,12 @@
             # --let $alter_definition = ADD COLUMN b $char_col DEFAULT ''
             #
             
            +--let $child_alter_definition = $alter_definition
            +
             if ($rename_to)
             {
               --let $alter_definition = RENAME TO $rename_to
            + --let $child_alter_definition = RENAME TO mrg.$rename_to
             }
             
             if (!$alter_definition)
            @@ -43,6 +46,9 @@
             }
             
             --let $alter_statement = $alter_statement TABLE $table_name $alter_definition
            +# We don't want to do ONLINE on underlying tables, we are not testing MyISAM
            +--let $child_statement = ALTER TABLE mrg.$table_name $child_alter_definition
            +
             
             
             # We now have the complete ALTER statement in $alter_statement.
            @@ -75,6 +81,20 @@
             # Surround it by --disable_query_log/--enable_query_log
             # if you don't want it to appear in the result output.
             #####################
            +--disable_query_log
            +--disable_warnings
            +
            +# We will only try to alter the underlying table if the main alter was successful
            +if (!$my_errno)
            +{
            + if ($rename_to)
            + {
            + eval ALTER TABLE $rename_to UNION(mrg.$rename_to);
            + }
            + eval $child_statement;
            +}
            +--enable_warnings
            +--enable_query_log
             
             # Unset the parameters, we don't want them to be accidentally reused later
             --let $alter_definition =
            {noformat}

            {quote}
Note that in both create_table and alter_table we run our additional code with {{disable_query_log}} / {{disable_warnings}}. It's a tradeoff: this way we reduce the number of mismatches (because our additional code does not produce any output), but it also makes investigation more difficult should a problem start somewhere in this code. It's up to the person who maintains the engine suite to decide what's best.

Example:
We have a MERGE table which points to an underlying table containing non-unique values. Normally, the test assumes that it is the table under test that contains these values; but in our case they actually live in the underlying MyISAM table.
Then, the test performs {{ALTER TABLE .. ADD UNIQUE INDEX ...}} and expects it to fail.
In our case, the statement on the MERGE table will succeed, while the statement on the underlying table will fail quietly; if the test tries to do something else afterwards, it may reveal that the MERGE table and the underlying table have diverged, but it won't be clear from the test output why that happened.
            {quote}
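
Spelled out in SQL, that scenario might look like this (a sketch; assume column {{a}} already contains duplicates in {{mrg.t1}}):

{noformat}
ALTER TABLE t1 ADD UNIQUE INDEX (a);
# succeeds on the MERGE table, which holds no data of its own
# alter_table.inc then quietly runs:
ALTER TABLE mrg.t1 ADD UNIQUE INDEX (a);
# fails on the duplicate values, but the error is suppressed
SELECT * FROM t1;
# the parent and child definitions now differ, so this
# fails with ER_WRONG_MRG_TABLE
{noformat}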

Now let's try to run the suite:

            {noformat}
            perl ./mtr --suite=storage_engine-myisammrg --force --max-test-fail=0

            Spent 34.141 of 80 seconds executing testcases

            Completed: Failed 41/98 tests, 58.16% were successful.

            {noformat}
Not great, but not that bad either, all things considered. Let's look at the results.

            *alter_table* and some other tests produce the following mismatch on SHOW CREATE TABLE:

            {noformat}
            @@ -127,7 +127,7 @@
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL,
               `c` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8 INSERT_METHOD=LAST UNION=(`mrg`.`t1`)
             ALTER TABLE t1 DEFAULT CHARACTER SET = latin1 COLLATE latin1_general_ci;
            {noformat}

Quite as expected, since we have additional options on our tables; it just requires adding an rdiff.


            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 MODIFY b <INT_COLUMN> DEFAULT 5;
            -ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement succeeded (expected results: ER_CANT_DO_ONLINE)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command succeeded unexpectedly.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

This is all right, I suppose -- it's good that online ALTER can be done, right?
But this is bad:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            -ERROR HY000: Can't execute the given 'ALTER' command as online
            +ERROR HY000: Unable to open underlying table which is differently defined or of non-MyISAM type or doesn't exist
            +# ERROR: Statement ended with errno 1168, errname ER_WRONG_MRG_TABLE (expected results: ER_CANT_DO_ONLINE)
             ALTER ONLINE TABLE t1 COMMENT 'new comment';
            {noformat}

Looking earlier in the test output, we find that we are working with temporary tables here. And there is bug MySQL:57657, which says that altering a temporary MERGE table is broken in 5.5. Whether to add an rdiff or disable the test is a judgment call; I think I will disable it, after all, although it's a bit sad. You can choose to be smarter: since you have your own {{alter_table.inc}} anyway, add some logic there to check whether the table is temporary, as sketched below.
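
For example, assuming the information about whether the table is temporary is available at that point (e.g. via a {{$temporary}} flag like the one {{create_table.inc}} uses -- treat this as an assumption, not a given), the extra code in {{alter_table.inc}} could be guarded like this (a sketch):

{noformat}
# Skip the child ALTER for temporary tables to avoid MySQL:57657
if (!$temporary)
{
  eval $child_statement;
}
{noformat}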


            *create_table*:

            {noformat}
             CREATE TABLE t1 ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> AS SELECT 1 UNION SELECT 2;
            -SHOW CREATE TABLE t1;
            -Table Create Table
            -t1 CREATE TABLE `t1` (
            - `1` bigint(20) NOT NULL DEFAULT '0'
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            -SELECT * FROM t1;
            -1
            -1
            -2
            -DROP TABLE t1;
            +ERROR HY000: 'test.t1' is not BASE TABLE
            +# ERROR: Statement ended with errno 1347, errname ER_WRONG_OBJECT (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_WRONG_OBJECT.
            +# CREATE TABLE .. AS SELECT or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

{{AS SELECT}} doesn't work with MERGE tables; we didn't account for it in our simple changes to {{create_table.inc}}, because {{AS SELECT}} is only used a few times in the suite, so it seems easier to just accept the difference here. In general, though, it's up to the person who modifies the creation procedure.


            *lock*:

The test is quite messed up, because MERGE children are locked through the parent tables, which the test of course does not expect. E.g. if it locks two tables and then drops them, it expects that nothing is locked any longer, which is not true for MERGE tables. Adding an rdiff anyway; locking is very specific to MERGE tables and needs to be tested as an engine feature rather than as basic functionality.

The rest are the usual mismatches due to unsupported functionality and the like.


The MERGE engine doesn't support partitioning or transactions, but again, let's see what happens, since it's nearly free:

            {{mkdir ../storage/myisammrg/mysql-test/storage_engine/parts}}
            {{mkdir ../storage/myisammrg/mysql-test/storage_engine/trx}}

            {noformat}
            perl ./mtr --suite=storage_engine/*-myisammrg --force --max-test-fail=0
            {noformat}

            All tests failed, of course.

            For all partitioned tables:

            {noformat}
            +ERROR HY000: Engine cannot be used in partitioned tables
            +# ERROR: Statement ended with errno 1572, errname ER_PARTITION_MERGE_ERROR (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ CREATE TABLE t1 (a INT(11) /*!*/ /*Custom column options*/) ENGINE=MRG_MYISAM /*!*/ /*Custom table options*/ UNION(mrg.t1) INSERT_METHOD=LAST PARTITION BY HASH(a) PARTITIONS 2 ]
            +# The statement|command finished with ER_PARTITION_MERGE_ERROR.
            +# Partitions or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

Transactional tests run after a fashion, but of course the diffs are as extensive as they were for MyISAM. All of this is expected, and can be handled either by removing the newly created {{trx}} and {{parts}} subdirs, or by adding rdiffs. It would seem reasonable to remove {{parts}} and keep {{trx}}; but on the paranoid assumption that one day an attempt to create a partitioned MERGE table will crash the server, I will keep {{parts}} too -- all together they take less than a second anyway (rejecting table creation and failing everything with "table doesn't exist" is fast). So, I will add rdiffs for each file.
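
Since that is a lot of near-identical commands, a small shell loop can generate them all (a sketch, run from {{<basedir>/mysql-test}}):

{noformat}
for f in suite/storage_engine/trx/*.reject suite/storage_engine/parts/*.reject ; do
  d=`dirname $f`
  t=`basename $f .reject`
  diff -u $d/$t.result $f > ../storage/myisammrg/mysql-test/storage_engine/`basename $d`/$t.rdiff
done
{noformat}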

            Running all at once now:

            {noformat}
            perl ./mtr --suite=storage_engine-myisammrg,storage_engine/*-myisammrg

            Spent 46.994 of 70 seconds executing testcases

            Completed: All 119 tests were successful.
            {noformat}

            Example:

            We are testing that a table option is supported by the engine (merely is accepted and stored); lets say AUTO_INCREMENT. We create a table with the option and check that it is not lost; but we also want to see that it works with ALTER; so, we add an alter table statement where the option is modified.

            Now, if we try to run such a test with FEDERATED engine, it will break on ALTER -- not because the option is unsupported, but because ALTER is. In the storage engine test suite, the test will continue to the end and will produce a mismatch (because the ALTER statement will cause an error). Then, someone who tunes the suite for FEDERATED engine, can see the difference, decide that it's reasonable, since ALTER is not supposed to work, and create an rdiff file. Thus, we do not skip the test for the table option entirely, but simply limit the testing to functionality the engine supports.

            The tests will try to be smart in their internal logic, but within reason. When a statement is likely to fail (e.g. a feature is often unsupported), the test will check the result of the statement and will produce a verbose error message, including versions why it could happen. Also, if a failure happened on table creation, or in some other key moment, so that a part of test becomes useless (if a table was not created, there is no point at trying to alter it, etc.), the test will ignore a part of the flow, and proceed to the next part. But it does not happen always, the checks are only performed when the probability of a failure is reasonably high.
            Of course, if a part of the test is bypassed, it will also cause a mismatch, so the maintainer will be aware of that and will be able to check why it happens.

            Additionally, some tests will check the value of the default index (see below notes about configuration), and if it is unset, certain tests will be completely skipped, because every part of them requires indexes (e.g. index.test).

            For the functionality described in the ENGINES table (transactions, savepoints, XA), if a test requires one of the features and it is shown as unsupported in the table, the test will produce a warning. But unlike with the indexes, it will not be skipped, because we assume that for a new engine information in the ENGINES table might be inaccurate.

            {anchor:problem3}
            h3. Problem 3: Different primitives

            Some engines have specifics which makes even simplest hardcoded statements inapplicable. For example, whenever a test contains
            {{CREATE TABLE t (i INT)}}
            it will fail for CSV engine, because it does not allow NULL-able columns. Or, an engine might require certain table options, like a connection string for FEDERATED engine. Since there is, obviously, no connection string in generic tests, every table creation for FEDERATED will fail, and nothing will be tested. Currently, there is no way to work around that apart from copying the tests and modifying them manually.

            In even more complicated scenario, in order to create a table properly, additional actions need to be performed; e.g. to create a functional MERGE table, we need to also create an underlying MyISAM table; and if we want to alter a MERGE table, we should alter the underlying table(s) accordingly.

            {anchor:solution3}
            h4. Solution

            We will provide the engine maintainer with several tools to tune the suite for their engine.

            * Some variables can be set to configure the basic test behavior:
            ** the engine name (to be used in {{CREATE TABLE}} and be masked in {{SHOW CREATE TABLE}});
            ** default column options (when any are required, e.g. NOT NULL for CSV);
            ** default table options (when any are required, e.g. connection for FEDERATED);
            ** default index (INDEX, UNIQUE INDEX, PRIMARY KEY, whichever supported, or a special index);
            ** default types (int type, char type, in case standard ones are not supported);

            * if any actions need to be performed before/after each test, they can be described in include files which are always executed (e.g. creation of a server for FEDERATED before a test, and dropping it after a test);

            * if a server requires non-standard procedures for table creation or modification, they can be tuned in include files which tests to use to perform these actions (CREATE and ALTER table are not run directly in the tests, only by calling the include files, so they are configurable);

            * the engine can have its own set of disabled files, so that there is no need to provide only selected test names on the MTR command line -- the whole suite can be run, and unnecessary files can be disabled through the list;

            * server options can be configured;

            * non-default combinations can be configured for the engine;

            * subsuites can be completely ignored for the engine, so that if there is no interest in running partitioning tests or transactional tests, they can be simply omitted without any effort.

            For more details, see the 'Tuning' section.

            {anchor:bugs}
            h3. Bugs filed while working on the suite:

            LP:973039 / MDEV-211 (Assertion `share->in_trans == 0' failed in maria_close on DROP TABLE under LOCK)
            LP:976104 / MDEV-216 (Assertion `0' failed in my_message_sql on UPDATE IGNORE, or unknown error on release build)
            LP:990187 / MDEV-237 (Assertion `share->reopen == 1' failed at maria_extra on ADD PARTITION)
            LP:994275 / MDEV-248 (Assertion `real->type() == Item::FIELD_ITEM' failed in add_not_null_conds(JOIN*))
            LP:994854 / MDEV-254 (Server hangs on updating an XtraDB table after FLUSH TABLES WITH READ LOCK AND DISABLE CHECKPOINT)
            LP:997397 (TRUNCATE on a partitioned Aria table does not reset AUTO_INCREMENT)
            MDEV-365 (Assertion `block->type == PAGECACHE_EMPTY_PAGE ...' failed in pagecache_read on ADD PARTITION)
            MDEV-366 (Assertion `share->reopen == 1' failed in maria_extra on DROP TABLE which is locked twice)
            MDEV-388 (Creating a federated table with a non-existing server returns a random error code)
            MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            MySQL:64888 (Inconsistent behavior of dynamic concurrent_insert values)
            MySQL:64892 (Session-level low_priority_updates does not work for INSERT)
            MySQL:65146 (WITH CONSISTENT SNAPSHOT does not work with isolation level SERIALIZABLE)
            MySQL:65225 (InnoDB miscalculates auto-increment after changing auto_increment_increment)
            MySQL:65429 / LP:1004910 (Assertion failure block->page.space == page_get_space_id(page_align(ptr)))
            MySQL:65431 / LP:1005052 (Error 1430 while selecting from a federated table with index on a bit column)
            MySQL:65846 (INSERT DELAYED on a BLACKHOLE table hangs forever)
            MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)

            _The list might be incomplete_

            {anchor:tuning}
            h2. Tuning

            Please note that the test suite is synthetic. It means that it contains tests for a set of features which (the whole set) is not supported by any known engine. The same is true for result files -- they are taken from different engines, depending on which produce the most sensible result, and in rare cases are even artificial. This means that there is no engine which would pass the whole suite without any tuning -- not even MyISAM, which contributed to the result files the most.

            {anchor:assumptions}
            h3. Assumptions

            We presume that the tests are set on a source tree build (which is reasonable, considering that the main goal is to help with storage engine development). We want to configure the tests for some engine OurENGINE.

            The storage engine code is located in {{<basedir>/storage/<ourengine>}} folder.

            The storage_engine test suite is located in {{<basedir>/mysql-test/suite/storage_engine}} folder and contains subsuites in subfolders of the corresponding names (currently {{<basedir>/mysql-test/suite/storage_engine/parts}} and {{<basedir>/mysql-test/suite/storage_engine/trx}}.

            {anchor:common}
            h3. Common tuning steps

            1. Create {{<basedir>/storage/ourengine/mysql-test/storage_engine}} folder.

            2. Copy {{<basedir>/mysql-test/suite/storage_engine/define_engine.inc}} to {{<basedir>/storage/ourengine/mysql-test/storage_engine}} and edit the file (at the very least set ENGINE variable; check other variables which you find there, and modify as needed.

            3. If the engine requires any additional server options (e.g. to load the engine, or to tune it, or both), create {{<basedir>/storage/ourengine/mysql-test/storage_engine/suite.opt}} file and add the options there.

            4. If you know in advance that the engine requires additional steps before a test, add them at the end of {{define_engine.inc}}.

            5. If you created any SQL objects in {{define_engine.inc}}, create file {{<basedir>/storage/ourengine/mysql-test/storage_engine/cleanup_engine.inc}}, or copy a stub from {{<basedir>/mysql-test/suite/storage_engine}}, and add the logic to drop the objects.

            6. If you know in advance that the engine requires non-standard table creation and/or modification procedure, copy {{<basedir>/mysql-test/suite/storage_engine/create_table.inc}} and {{<basedir>/mysql-test/suite/storage_engine/alter_table.inc}} into {{<basedir>/storage/ourengine/mysql-test/storage_engine/}} and modify them as needed.

            7. Try to run the 1st test:
            {{perl ./mtr --suite=storage_engine-ourengine 1st}}

            8. If the test produces a mismatch, analyze it and decide whether table creation or test options or server options require more tuning, or the difference is expected.

            9. If the difference is expected, create an rdiff file as
            {{ diff -u {{<basedir>/mysql-test/suite/storage_engine/1st.result <basedir>/mysql-test/suite/storage_engine/1st.reject > <basedir>/storage/ourengine/mysql-test/storage_engine/1st.rdiff}}

            10. When the 1st test passes, run the whole suite:

            {{perl ./mtr --suite=storage_engine-ourengine --force --max-test-fail=0}}

            11. Analyze failures, modify parameters or include files as needed, create rdiff files.

            12. If any tests requires specific non-standard server/engine options, create files {{<testname>.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine}}.

            13. If any tests have to be skipped, add them to {{<basedir>/storage/ourengine/mysql-test/storage_engine/disabled.def}}.

            14. When you are satisfied with the results of storage_engine suite, proceed to the subsuites. If you are interested in running partitions tests, create folder {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            15. Repeat step 3, if needed, only now create {{suite.opt}} in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}.

            16. Run the subsuite as {{perl ./mtr --suite=storage_engine/parts-ourengine --force --max-test-fail=0}}

            17. Repeat steps 11-13, only now create files in {{<basedir>/storage/ourengine/mysql-test/storage_engine/parts}}

            18. If you want to run transactional tests, repeat steps 14-17 for trx subsuite.

            19. To execute the whole set of tests, run {{perl ./mtr --suite=storage_engine-ourengine,storage_engine/*-ourengine}}

            h2. Examples

            Below are some real-life experiences of tuning the test suite for engines that currently come with MariaDB. Actual exact steps are always unique and depend on the storage engine (otherwise it wouldn't be tuning). In some cases only very basic actions are required; in others, it takes more work. In general, the more a storage engine is "different", the trickier is the task.


            h3. Easy level: MyISAM

            Lets see how to make the suite work for a relatively standard engine, in terms of behavior similar to main MySQL engines.
            We will create an overlay for MyISAM.
            _Note: "overlay" is a term introduced by MDEV\-30, and it basically means a test suite or set of suites adapted for a certain engine_

            {{cd <basedir>/mysql-test}}
            {{mkdir -p ../storage/myisam/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisam/mysql-test/storage_engine/}}

            Edit the copied version of {{define_engine.inc}} to set {{ENGINE}} to MyISAM:

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MyISAM;
             #
             ################################
             #
            {noformat}

            All other parameters look good: MyISAM does not require any specific table or column options, and it supports non-unique indexes, INT and CHAR types. These all are defaults (you can see it in {{define_engine.inc}}).

            Also, we do not need any additional server options to activate the engine, since MyISAM is always there.

            So, now we can try to run the 1st test:

            {noformat}
            perl ./mtr --suite=storage_engine-myisam 1st

            ...

            storage_engine-myisam.1st [ pass ] 20
            {noformat}

            The first test passed. Okay, now we can run the whole suite. Some tests will fail, this is expected; we need to see the results
            so we can decide whether we should accept the difference, or disable the test, or patch the code.
            So, we will run it with {{\-\-force}} and {{\-\-max-test-fail=0}}, to see all at once (you might also want to redirect the output to a file, because you will need it):

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine-myisam

            ...

            Spent 42.193 of 64 seconds executing testcases

            Completed: Failed 7/99 tests, 92.93% were successful.

            Failing test(s): storage_engine-myisam.alter_tablespace storage_engine-myisam.check_table storage_engine-myisam.foreign_keys storage_engine-myisam.index_type_hash storage_engine-myisam.show_engine storage_engine-myisam.tbl_opt_insert_method storage_engine-myisam.tbl_opt_union
            {noformat}

            _Note: Total execution time depends on the engine and on the machine. For MyISAM it's pretty fast, for other engines might take longer._

            7 failing tests on the first iteration is extremely good, of course it won't be that bright with other engines. But it's MyISAM, what did you expect...

            (Please keep in mind that the suite results are synthetic, meaning that they belong to different engines, and in rare cases are even artificial, e.g. when the only available engine supporting the functionality currently has a bug. So, no engine is expected to pass all tests without tuning. Not even MyISAM.)

            Now it's time to analyze results.

            Result handling in this suite is somewhat different from standard MTR tests. Unless you managed to crash the server or to hit a syntax error in the test itself, all results will be mismatches -- that is, a test will not abort on a failed statement, but instead will try to proceed. The big benefit of it is that if your engine does not support everything, the tests are still usable, you'll just need to approve the difference in the results (by creating an rdiff file).

            Of course, it also means that the tests produce more noise than usual -- e.g. if your engine does not support ALTER, a standard MTR test would break on the first ALTER statement, while ours will continue and will allow you to use the logic, but will also produce a bunch of mismatches due to failing statements. Internal logic in tests does its best to make it cleaner, but still, the noise is expected.

            So, with this knowledge, lets find the failures and go through them one by one.
            Tip: if you saved the output to a file, failures can easily be found by {{' fail '}} search string (without quote marks).

            The first failing test is *alter_tablespace*. Well, naturally -- no tablespaces for MyISAM. But lets look at the output.

            Mismatch says that some stuff is missing, and instead the test produces this:

            {noformat}
            +ERROR HY000: Table storage engine for 't1' doesn't have this option
            +# ERROR: Statement ended with errno 1031, errname ER_ILLEGAL_HA (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_ILLEGAL_HA.
            +# Tablespace operations or the syntax or the mix could be unsupported.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Since MyISAM is not supposed to support tablespaces, changing the engine code is not an option. You can decide whether to disable the test entirely, or to create an rdiff file for it. Unless you have really tough restrictions on suite execution time, I recommend keeping tests and creating rdiffs. Even if you don't care about the result, the test, running as a regression check, will at least show that this very first statement has not suddenly started crashing the server, or something unexpectedly changed in the behavior.

            Creating an rdiff file is simple:

            {{diff -u suite/storage_engine/alter_tablespace.result suite/storage_engine/alter_tablespace.reject > ../storage/myisam/mysql-test/storage_engine/alter_tablespace.rdiff}}

            Next failed test: *check_table*.

            Its difference is simple:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (6,'f');
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a,b) VALUES (7,'g');
             INSERT INTO t2 (a,b) VALUES (8,'h');
             CHECK TABLE t2, t1 MEDIUM;
            @@ -52,7 +52,7 @@
             INSERT INTO t1 (a) VALUES (17),(120),(132);
             CHECK TABLE t1 FAST;
             Table Op Msg_type Msg_text
            -test.t1 check status OK
            +test.t1 check status Table is already up to date
             INSERT INTO t1 (a) VALUES (801),(900),(7714);
             CHECK TABLE t1 MEDIUM;
             Table Op Msg_type Msg_text
            {noformat}

            No harm if the engine realizes that the table is up to date and says so; adding a diff.

            {{diff -u suite/storage_engine/check_table.result suite/storage_engine/check_table.reject > ../storage/myisam/mysql-test/storage_engine/check_table.rdiff}}


            Next failing test: *foreign_keys*

            Just like alter_tablespace, it produces a lot of messages about possibly unsupported functionality, which is natural, since MyISAM doesn't have foreign keys. Adding a diff.


            Next failing test: *index_type_hash*

            It produces mismatches where HASH type is replaced by BTREE type:

            {noformat}
             SHOW KEYS IN t1;
             Table Non_unique Key_name Seq_in_index Column_name Collation Cardinality Sub_part Packed Null Index_type Comment Index_comment
            -t1 1 a 1 a # # NULL NULL # HASH
            +t1 1 a 1 a # # NULL NULL # BTREE
            {noformat}

            Since MyISAM doesn't support HASH type, it is fine. Adding a diff.

            Next failing test: *show_engine*

            This test is optimistic and expects that SHOW ENGINE <engine name> STATUS command returns _something_ -- that is, the test obfuscates the output, but expects that it contains a row. For MyISAM it is not the case, hence the diff looks like a missing row:

            {noformat}
            @@ -4,7 +4,6 @@
             SHOW ENGINE <STORAGE_ENGINE> STATUS;
             Type Name Status
            -<STORAGE_ENGINE> ### Engine status, can be long and changeable ###
             # For SHOW MUTEX even the number of lines is volatile, so the result logging is disabled,
            {noformat}

            Adding a diff.


            Next failing tests: *tbl_opt_insert_method*, *tbl_opt_union*

            MyISAM does not use the table option INSERT_METHOD, it's the MERGE engine thing. But table creation does not normally fail on unsupported options, they are simply ignored. That's what we see here:

            {noformat}
            @@ -5,7 +5,7 @@
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1 INSERT_METHOD=FIRST
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            {noformat}

            SHOW ENGINE does not show the option. This is fine, adding rdiff. Exactly the same for UNION.

            These were all 7 failures.

            Now it's time to take care of subsuites. Currently there are two of them: {{parts}} (stands for 'partitions'), and {{trx}} (stands for 'transactions').

            MyISAM definitely supports partitioning, so lets try them first.

            (we are still in {{<basedir>/mysql-test}})

            {{mkdir ../storage/myisam/mysql-test/storage_engine/parts}}

            This will show MTR that our engine is interested in the {{storage_engine/parts}} subsuite.

            No additional parameters or tricks should be needed for MyISAM to run partition tests, since the suite already sets {{--partition}} option. So, we'll just run it:

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/parts-myisam

            ...

            Spent 1.168 of 5 seconds executing testcases

            Completed: All 8 tests were successful.
            {noformat}

            _Note: For now, it is a very basic suite, only contains a few tests and literally takes several seconds_

            All good, tests passed, nothing needs to be done here.

            Finally, there is the transactions subsuite (also contains tests for XA and snapshots). It is also very basic, but we know that MyISAM doesn't support transactions, XA and snapshots, so results won't be this pretty. Maybe we don't even need to run it, but why not try.

            {{mkdir ../storage/myisam/mysql-test/storage_engine/trx}}

            {noformat}

            perl ./mtr --force --max-test-fail=0 --suite=storage_engine/trx-myisam

            Spent 0.000 of 10 seconds executing testcases

            Completed: Failed 13/13 tests, 0.00% were successful.

            Failing test(s): storage_engine/trx-myisam.cons_snapshot_repeatable_read storage_engine/trx-myisam.cons_snapshot_serializable storage_engine/trx-myisam.delete storage_engine/trx-myisam.insert storage_engine/trx-myisam.level_read_committed storage_engine/trx-myisam.level_read_uncommitted storage_engine/trx-myisam.level_repeatable_read storage_engine/trx-myisam.level_serializable storage_engine/trx-myisam.select_for_update storage_engine/trx-myisam.select_lock_in_share_mode storage_engine/trx-myisam.update storage_engine/trx-myisam.xa storage_engine/trx-myisam.xa_recovery
            {noformat}

            The results are a mess, as expected. All diffs start with an extra warning (sometimes more than one, as the same warning is produced for snapshots and XA):

            {noformat}
            +# -- WARNING ----------------------------------------------------------------
            +# According to I_S.ENGINES, MyISAM does not support transactions.
            +# If it is true, the test will most likely fail; you can
            +# either create an rdiff file, or add the test to disabled.def.
            +# If transactions should be supported, check the data in Information Schema.
            +# ---------------------------------------------------------------------------
            {noformat}

            The rest is a mix of wrong results (missing rows, extra rows, different rows), warnings about rollback not working, etc. With 3rd-party engines, it would be up to the maintainer whether to add diffs for all tests or to ignore the whole thing, since transactions are not supported. If you choose to ignore it, simply delete the newly created {{../storage/<your_engine>/mysql-test/storage_engine/trx}}, and the subsuite won't be executed for the engine. I choose to keep it, so I'll add rdiffs for all 13 files; it's not that much work, after all.

            {noformat}
            diff -u suite/storage_engine/trx/cons_snapshot_repeatable_read.result suite/storage_engine/trx/cons_snapshot_repeatable_read.reject > ../storage/myisam/mysql-test/storage_engine/trx/cons_snapshot_repeatable_read.rdiff
            ...
            diff -u suite/storage_engine/trx/xa_recovery.result suite/storage_engine/trx/xa_recovery.reject > ../storage/myisam/mysql-test/storage_engine/trx/xa_recovery.rdiff
            {noformat}
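
            If typing 13 diff commands by hand feels tedious, a small shell loop can do it instead (a sketch, assuming all the reject files ended up in {{suite/storage_engine/trx}} as above):

            {noformat}
            for f in suite/storage_engine/trx/*.reject ; do
              t=`basename $f .reject`
              diff -u suite/storage_engine/trx/$t.result $f > ../storage/myisam/mysql-test/storage_engine/trx/$t.rdiff
            done
            {noformat}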

            _Tip: If you decide to do the same, take time to go through the reject files and see that your engine works as expected, under the circumstances; sometimes making things do what they are not normally expected to do reveals hidden problems._


            Now we can run everything together:

            {noformat}

            perl ./mtr --suite=storage_engine-myisam,storage_engine/*-myisam

            ...

            Spent 46.249 of 70 seconds executing testcases

            Completed: All 120 tests were successful.
            {noformat}

            That's all. Now we just need to keep it free of new failures.

            {anchor:innodb}

            h3. Intermediate level: InnoDB plugin

            A little bit more work is required to create an overlay for InnoDB. Let's do it for the InnoDB plugin (which is not loaded by default as of 5.5.25, but is built there).

            Again, start with creating the overlay directory:

            {{mkdir -p ../storage/innobase/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/innobase/mysql-test/storage_engine/}}
            Edit {{../storage/innobase/mysql-test/storage_engine/define_engine.inc}}

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = InnoDB;
             #
             ################################
             #
            {noformat}

            As with MyISAM, all defaults are fine for InnoDB. But now we also need server startup options to run the server with the InnoDB plugin.

            Create the file {{../storage/innobase/mysql-test/storage_engine/suite.opt}}:

            {noformat}
            --ignore-builtin-innodb
            --plugin-load=ha_innodb
            --innodb
            {noformat}

            That should be enough for the base suite. Let's run the 1st test now:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase 1st

            ...

            storage_engine-innobase.1st [ pass ] 852
            {noformat}

            And then the whole suite:

            {noformat}

            perl ./mtr --suite=storage_engine-innobase --max-test-fail=0 --force

            ...

            Spent 153.712 of 402 seconds executing testcases

            Completed: Failed 28/99 tests, 71.72% were successful.

            Failing test(s): storage_engine-innobase.alter_table_online storage_engine-innobase.alter_tablespace storage_engine-innobase.autoinc_secondary storage_engine-innobase.autoinc_vars storage_engine-innobase.cache_index storage_engine-innobase.checksum_table_live storage_engine-innobase.delete_low_prio storage_engine-innobase.fulltext_search storage_engine-innobase.index_enable_disable storage_engine-innobase.index_type_hash storage_engine-innobase.insert_delayed storage_engine-innobase.insert_high_prio storage_engine-innobase.insert_low_prio storage_engine-innobase.lock_concurrent storage_engine-innobase.optimize_table storage_engine-innobase.repair_table storage_engine-innobase.select_high_prio storage_engine-innobase.tbl_opt_ai storage_engine-innobase.tbl_opt_data_index_dir storage_engine-innobase.tbl_opt_insert_method storage_engine-innobase.tbl_opt_key_block_size storage_engine-innobase.tbl_opt_row_format storage_engine-innobase.tbl_opt_union storage_engine-innobase.type_char_indexes storage_engine-innobase.type_float_indexes storage_engine-innobase.type_spatial_indexes storage_engine-innobase.update_low_prio storage_engine-innobase.vcol
            {noformat}

            Not as great as with MyISAM. Let's look at the details.

            Some mismatches are either identical or similar to those in MyISAM, and are caused by unsupported functionality (e.g. fulltext search, hash indexes, optimize_table). I won't go through them here; I'll just add rdiff files.

            But some deserve attention.

            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            +ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement ended with errno 1915, errname ER_CANT_DO_ONLINE (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_CANT_DO_ONLINE.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            It's hard to say whether all engines that support ALTER ONLINE should support it for the same set of changes; most likely not, and what we see here is just an InnoDB limitation. On the other hand, we know that MariaDB supports ALTER ONLINE, including renaming a column (see http://kb.askmonty.org/en/alter-table), and InnoDB supports at least some ALTER ONLINE operations (e.g. CHANGE COLUMN i i INT DEFAULT 1 works); so I think it's worth filing it as a low-priority bug, at least to make sure it works as expected: https://mariadb.atlassian.net/browse/MDEV-397

            For now, I will add the test to the {{../storage/innobase/mysql-test/storage_engine/disabled.def}} list (we need to create the file, since it's the first test we disable for this engine):

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            {noformat}

            If it later turns out to be expected behavior or a limitation, I will remove the line from {{disabled.def}} and add an rdiff file instead.


            *alter_tablespace*:

            {noformat}
            +# ERROR: Statement ended with errno 1030, errname ER_GET_ERRNO (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ ALTER TABLE t1 DISCARD TABLESPACE ]
            +# The statement|command finished with ER_GET_ERRNO.
            +# Tablespace operations or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Now, that seems unexpected. But then again, tablespace operations are only applicable when InnoDB works in {{innodb-file-per-table}} mode, which we did not set in our options. Since we don't want to use it for all tests, let's set it for this one only:

            Create {{../storage/innobase/mysql-test/storage_engine/alter_tablespace.opt}}:
            {noformat}
            --innodb-file-per-table=1
            {noformat}
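
            With the option file in place, re-running just this test should confirm that the tablespace operations now succeed:

            {{perl ./mtr --suite=storage_engine-innobase alter_tablespace}}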

            *autoinc_vars*:

            {noformat}
             INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
             SELECT LAST_INSERT_ID();
             LAST_INSERT_ID()
            -850
            +1100
             SELECT * FROM t1;
             a b
             1 a
            +1100 g
            +1150 h
            +1200 i
             2 b
             200 d
             3 c
             500 e
             800 f
            -850 g
            -900 h
            -950 i
             DROP TABLE t1;
             SET auto_increment_increment = 500;
             SET auto_increment_offset = 300;
            {noformat}

            This is weird. Now the real investigation starts; this is a good reason to look at the reject file to see the continuous flow:

            {noformat}
            ...

            SET auto_increment_increment = 300;
            INSERT INTO t1 (a,b) VALUES (NULL,'d'),(NULL,'e'),(NULL,'f');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            200
            SELECT * FROM t1;
            a b
            1 a
            2 b
            200 d
            3 c
            500 e
            800 f
            SET auto_increment_increment = 50;
            INSERT INTO t1 (a,b) VALUES (NULL,'g'),(NULL,'h'),(NULL,'i');
            SELECT LAST_INSERT_ID();
            LAST_INSERT_ID()
            1100
            SELECT * FROM t1;
            a b
            1 a
            1100 g
            1150 h
            1200 i
            2 b
            200 d
            3 c
            500 e
            800 f
            DROP TABLE t1;
            {noformat}

            The first insert works all right with {{auto_increment_increment = 300}}. Then we change it to {{50}}, but the following insert still uses {{300}} for the first value it inserts, and only then switches to {{50}}. Thus we get {{1100}} instead of {{850}}, and the following values also differ. This smells like a bug, although not a very serious one. Since a brief check shows it's also reproducible on Oracle MySQL, we will file it on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65225 (I actually filed it some time ago, when I tried to run the storage engine suite for InnoDB for the first time; that's why it's not brand new).

            And we will also add the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:
            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            {noformat}


            *delete_low_prio*, *insert_high_prio*, *insert_low_prio*, *select_high_prio*, *update_low_prio*:

            They all have similar fragments in their output:

            {noformat}
            +# Timeout in include/wait_show_condition.inc for = 'DELETE FROM t1'
            +# show_statement : SHOW PROCESSLIST
            +# field : Info
            +# condition : = 'DELETE FROM t1'
            +# max_run_time : 3
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with timeout in wait_show_condition.inc.
            +# DELETE or table locking or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            As the documentation says, the high|low priority functionality (e.g. DELETE LOW_PRIORITY) only works with table-level locking, and the whole test is based on this assumption. InnoDB uses row-level locking, so the entire flow does not work quite as expected. We could still add rdiff files, but, unlike most other tests, these take relatively long (probably over 10 seconds each). Besides, since locking works entirely differently here, the test results are likely to be unstable, as it all comes down to timing. So, it makes more sense to disable the tests by adding them to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            {noformat}


            *tbl_opt_ai*:

            {noformat}
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=10 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             ALTER TABLE t1 AUTO_INCREMENT=100;
             SHOW CREATE TABLE t1;
             Table Create Table
             t1 CREATE TABLE `t1` (
               `a` int(11) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> AUTO_INCREMENT=100 DEFAULT CHARSET=latin1
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
             DROP TABLE t1;
            {noformat}

            We already looked at ignored table options in the MyISAM tests, but this one is different. Why would AUTO_INCREMENT be ignored when it should be supported by InnoDB? (A brief manual check confirms that it is.) Some digging shows, however, that in our case it is _truly_ ignored. It is also reproducible with Oracle MySQL, so I'm filing a bug on bugs.mysql.com: http://bugs.mysql.com/bug.php?id=65901

            Adding the test to {{../storage/innobase/mysql-test/storage_engine/disabled.def}}:

            {noformat}
            alter_table_online : MDEV-397 (Changing a column name via ALTER ONLINE does not work for InnoDB)
            autoinc_vars : MySQL:65225 (InnoDB miscalculates auto-increment)
            delete_low_prio : InnoDB does not use table-level locking
            insert_high_prio : InnoDB does not use table-level locking
            insert_low_prio : InnoDB does not use table-level locking
            select_high_prio : InnoDB does not use table-level locking
            update_low_prio : InnoDB does not use table-level locking
            tbl_opt_ai : MySQL:65901 (AUTO_INCREMENT option on InnoDB table is ignored if added before autoinc column)
            {noformat}


            *tbl_opt_key_block_size*, *tbl_opt_row_format*:

            {noformat}
             CREATE TABLE t1 (a <INT_COLUMN>, b <CHAR_COLUMN>) ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> KEY_BLOCK_SIZE=8;
            +Warnings:
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_per_table.
            +Warning 1478 InnoDB: KEY_BLOCK_SIZE requires innodb_file_format > Antelope.
            +Warning 1478 InnoDB: ignoring KEY_BLOCK_SIZE=8.
            {noformat}

            Doing the same as we did for alter_tablespace, only now adding both {{innodb_file_per_table}} and {{innodb_file_format}}:

            {{../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt}}:

            {noformat}
            --innodb-file-per-table=1
            --innodb-file-format=Barracuda
            {noformat}
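
            Presumably tbl_opt_row_format needs the same server options, so the .opt file can simply be copied (this is an assumption; check the warnings in its reject file to be sure):

            {{cp ../storage/innobase/mysql-test/storage_engine/tbl_opt_key_block_size.opt ../storage/innobase/mysql-test/storage_engine/tbl_opt_row_format.opt}}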


            *type_char_indexes*:

            {noformat}
             SET SESSION optimizer_switch = 'engine_condition_pushdown=on';
             EXPLAIN SELECT * FROM t1 WHERE c > 'a';
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range c_v c_v # # # Using index condition
            +# # # range c_v c_v # # # Using where
             SELECT * FROM t1 WHERE c > 'a';
             c c20 v16 v128
             b char3 varchar1a varchar1b
            @@ -135,7 +135,7 @@
             r3a
             EXPLAIN SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             id select_type table type possible_keys key key_len ref rows Extra
            -# # # range # v16 # # # #
            +# # # ALL # NULL # # # #
             SELECT * FROM t1 WHERE v16 = 'varchar1a' OR v16 = 'varchar3a' ORDER BY v16;
             c c20 v16 v128
             a char1 varchar1a varchar1b
            {noformat}

            _Note: For now we assume that inside one engine, statistics are stable enough to produce consistent results on each test run, which is why we show certain fields in EXPLAIN, to let you decide whether you are satisfied with them or not. If further experience shows that even for the same engine these tests routinely produce different results, and more often than not it's valid behavior, we might change it._

            For now, I will consider these results acceptable and add an rdiff.

            As I said before, the rest of the failures do not deserve verbose analysis; they are pretty straightforward, so I just added an rdiff for each of them.


            Now working with {{storage_engine/parts}} and {{storage_engine/trx}}.

            {{mkdir ../storage/innobase/mysql-test/storage_engine/trx}}
            {{mkdir ../storage/innobase/mysql-test/storage_engine/parts}}

            Copy your previously created {{suite.opt}} file to each of the subfolders: as far as MTR is concerned, they are separate suites.

            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/trx/}}
            {{cp ../storage/innobase/mysql-test/storage_engine/suite.opt ../storage/innobase/mysql-test/storage_engine/parts/}}

            Maybe you'll want to add something else to those options. I, for one, will add {{\-\-innodb-lock-wait-timeout=1}} to {{../storage/innobase/mysql-test/storage_engine/trx/suite.opt}}. Probably it should have been done for the other suites too, but it's never too late if any timeout issues are observed.
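
            So my {{../storage/innobase/mysql-test/storage_engine/trx/suite.opt}} ends up as:

            {noformat}
            --ignore-builtin-innodb
            --plugin-load=ha_innodb
            --innodb
            --innodb-lock-wait-timeout=1
            {noformat}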

            When you add rdiff files for subsuites, don't forget to put them in the subfolders:

            {{diff -u suite/storage_engine/parts/checksum_table.result suite/storage_engine/parts/checksum_table.reject > ../storage/innobase/mysql-test/storage_engine/parts/checksum_table.rdiff}}
            etc.

            Again, most failures are mismatches due to different output or unsupported functionality.
            _Note: repair_table test results are likely to differ even if repair is supported, since the test tries to corrupt existing table files, which are different for each engine._

            *trx/cons_snapshot_serializable*:

            {noformat}
             # If consistent read works on this isolation level (SERIALIZABLE), the following SELECT should not return the value we inserted (1)
             SELECT * FROM t1;
             a
            +1
             COMMIT;
            {noformat}

            It is a bug. Filing it as http://bugs.mysql.com/bug.php?id=65146 and adding the test to disabled.def (don't forget that it should now be under the trx folder):
            {{../storage/innobase/mysql-test/storage_engine/trx/disabled.def}}:

            {noformat}
            cons_snapshot_serializable : MySQL:65146 (CONSISTENT SNAPSHOT does not work with SERIALIZABLE)
            {noformat}

            Now, running the whole set:

            {noformat}
            perl ./mtr --suite=storage_engine-innobase,storage_engine/*-innobase

            ...

            Spent 300.715 of 364 seconds executing testcases

            Completed: All 111 tests were successful.
            {noformat}

            Much slower than for MyISAM, but that's how it usually is.

            {anchor:merge}

            h3. Advanced level: MERGE


            Yet more tricks are required to tune the same suite for the MERGE engine, because now we also have to think about how a table is created.
            We can't just create a plain MERGE table and work with it: it needs at least one underlying table; and if we alter the MERGE table, the underlying tables need to be altered accordingly, otherwise the MERGE table becomes non-functional.
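
            For illustration, a minimal functional MERGE setup looks something like this (the names are arbitrary):

            {noformat}
            CREATE TABLE t1_child (a INT, b CHAR(8)) ENGINE=MyISAM;
            CREATE TABLE t1 (a INT, b CHAR(8)) ENGINE=MRG_MYISAM UNION=(t1_child) INSERT_METHOD=LAST;
            # The row actually ends up in t1_child:
            INSERT INTO t1 (a,b) VALUES (1,'a');
            {noformat}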

            Start the same way as we started for other engines, by creating the overlay folder:

            {{mkdir -p ../storage/myisammrg/mysql-test/storage_engine}}
            {{cp suite/storage_engine/define_engine.inc ../storage/myisammrg/mysql-test/storage_engine/}}

            We know that we'll need INSERT_METHOD and UNION in our table options; in other circumstances they would have been added to {{$default_tbl_opts}}; but we cannot set a global UNION, because each table will point to different underlying tables, and since we will be modifying the creation procedure anyway, there is no point in adding INSERT_METHOD here either.

            {noformat}
            @@ -8,7 +8,7 @@
             # The name of the engine under test must be defined in $ENGINE variable.
             # You can set it either here (uncomment and edit) or in your environment.
             #
            -# let $ENGINE =;
            +let $ENGINE = MRG_MYISAM;
             #
             ################################
             #
            {noformat}

            What happens if we now run the 1st test as we did before?

            {{perl ./mtr --suite=storage_engine-myisammrg 1st}}

            {noformat}
             SHOW COLUMNS IN t1;
             INSERT INTO t1 VALUES (1,'a');
            +ERROR HY000: Table 't1' is read only
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_OPEN_AS_READONLY.
            +# INSERT INTO .. VALUES or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            That's because we don't have underlying tables under the MERGE table. We need to modify the table creation procedure.
            First, we need to decide how to do it. There are many possible ways; I will choose one that I think is simple:
            - before each test, I will create a special {{mrg}} schema, which will contain the underlying tables, so I don't need to remember all the names when it's time to clean up;
            - at the end of the test, I will drop the {{mrg}} schema, and thus will get rid of all additional objects at once;
            - whenever a new test table has to be created, I will create a MyISAM table with the same name in {{mrg}} schema, and will point my test table at it;
            - whenever a test table has to be altered, I will also alter the MyISAM table with the same name in {{mrg}} schema.

            In order to achieve this, we need to override 3 files and modify our already created {{../storage/myisammrg/mysql-test/storage_engine/define_engine.inc}}. Let's start with the latter.

            {{define_engine.inc}} is the include file which is executed before each test, so it's the place to put the logic which precedes a test.
            At the end of {{../storage/myisammrg/mysql-test/storage_engine/define_engine.inc}} I will add the {{mrg}} schema creation:

            {noformat}
            @@ -40,6 +40,10 @@
             # Here you can place your custom MTR code which needs to be executed before each test,
             # e.g. creation of an additional schema or table, etc.
             # The cleanup part should be defined in cleanup_engine.inc
            +--disable_warnings
            +DROP DATABASE IF EXISTS mrg;
            +--enable_warnings
            +CREATE DATABASE mrg;
            {noformat}

            Now it's time to copy the 3 files we will override:

            {{cp suite/storage_engine/cleanup_engine.inc ../storage/myisammrg/mysql-test/storage_engine/}}
            {{cp suite/storage_engine/create_table.inc ../storage/myisammrg/mysql-test/storage_engine/}}
            {{cp suite/storage_engine/alter_table.inc ../storage/myisammrg/mysql-test/storage_engine/}}
             
            {{cleanup_engine.inc}} is the file which is executed after each test; so, in {{../storage/myisammrg/mysql-test/storage_engine/cleanup_engine.inc}} I will be dropping my {{mrg}} schema:

            {noformat}
            @@ -8,4 +8,9 @@
             # Here you can add whatever is needed to cleanup
             # in case your define_engine.inc created any artefacts,
             # e.g. an additional schema and/or tables.
            +--disable_query_log
            +--disable_warnings
            +DROP DATABASE IF EXISTS mrg;
            +--enable_warnings
            +--enable_query_log
            {noformat}

            Now, the actual table creation.
            Tests do not run {{CREATE TABLE}} / {{ALTER TABLE}} statements directly; they always call {{create_table.inc}} or {{alter_table.inc}}, respectively. So, if we edit these files properly, it will affect all tests at once; the gain is worth the effort.
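
            To give an idea of the mechanism, a test creates a table roughly like this (a sketch; see the real test files for the exact invocation):

            {noformat}
            --let $create_definition = a $int_col, b $char_col
            --source create_table.inc
            {noformat}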

            Below I will show the changes I made; in fact, there are many ways to achieve the same goal, some probably more efficient. Be creative when the time comes.

            {noformat}
            --- suite/storage_engine/create_table.inc 2012-07-15 17:46:03.638461728 +0400
            +++ ../storage/myisammrg/mysql-test/storage_engine/create_table.inc 2012-07-15 22:08:29.324511647 +0400
            @@ -54,6 +54,15 @@
               --let $table_name = t1
             }
             
            +# Child statement is a statement that will create an underlying table.
            +# From this point, it will deviate from the main statement, that's why
            +# we start creating it here in parallel with the main one.
            +# For underlying tables, we will create a table in mrg schema, e.g.
            +# for table t1 the underlying table will be mrg.t1, etc.
            +# Since we will only create one child here, it should be enough. If we want more,
            +# we can always add a suffix, e.g. mrg.t1_child1, mrg.t1_child2, etc.
            +
            +--let $child_statement = $create_statement mrg.$table_name
             --let $create_statement = $create_statement $table_name
             
             if (!$create_definition)
            @@ -70,6 +79,9 @@
             if ($create_definition)
             {
               --let $create_statement = $create_statement ($create_definition)
            + # Table definition for the underlying table should be the same
            + # as for the MERGE table
            + --let $child_statement = $child_statement ($create_definition)
             }
             
             # If $default_engine is set, we will rely on the default storage engine
            @@ -78,6 +90,12 @@
             {
               --let $create_statement = $create_statement ENGINE=$storage_engine
             }
            +# Engine for an underlying table differs
            +--let $child_statement = $child_statement ENGINE=MyISAM
            +
            +# Save default table options, we will want to restore them later
            +--let $default_tbl_opts_saved = $default_tbl_opts
            +--let $default_tbl_opts = $default_tbl_opts UNION(mrg.$table_name) INSERT_METHOD=LAST
             
             # Default table options from define_engine.inc
             --let $create_statement = $create_statement $default_tbl_opts
            @@ -86,6 +104,7 @@
             if ($table_options)
             {
               --let $create_statement = $create_statement $table_options
            + --let $child_statement = $child_statement $table_options
             }
             
             # The difference between $extra_tbl_opts and $table_options
            @@ -98,16 +117,19 @@
             if ($extra_tbl_opts)
             {
               --let $create_statement = $create_statement $extra_tbl_opts
            + --let $child_statement = $child_statement $extra_tbl_opts
             }
             
             if ($as_select)
             {
               --let $create_statement = $create_statement AS $as_select
            + --let $child_statement = $child_statement AS $as_select
             }
             
             if ($partition_options)
             {
               --let $create_statement = $create_statement $partition_options
            + --let $child_statement = $child_statement $partition_options
             }
             
             # We now have the complete CREATE statement in $create_statement.
            @@ -120,6 +142,12 @@
             # Surround it by --disable_query_log/--enable_query_log
             # if you don't want it to appear in the result output.
             #####################
            +--disable_warnings
            +--disable_query_log
            +eval DROP TABLE IF EXISTS mrg.$table_name;
            +eval $child_statement;
            +--enable_query_log
            +--enable_warnings
             
             if ($disable_query_log)
             {
            @@ -166,6 +194,10 @@
             --let $temporary = 0
             --let $disable_query_log = 0
             
            +# Restore default table options now
            +--let $default_tbl_opts = $default_tbl_opts_saved
            +
            +
             # Restore the error codes of the main statement
             --let $mysql_errno = $my_errno
             --let $mysql_errname = $my_errname
            {noformat}

            We know we also need to modify {{alter_table.inc}}, but first let's see whether our changes so far actually work.

            {noformat}

            perl ./mtr --suite=storage_engine-myisammrg 1st

            ...

            storage_engine-myisammrg.1st [ pass ] 26
            {noformat}


            Great. Let's now modify {{../storage/myisammrg/mysql-test/storage_engine/alter_table.inc}}:

            {noformat}
            @@ -20,9 +20,12 @@
             # --let $alter_definition = ADD COLUMN b $char_col DEFAULT ''
             #
             
            +--let $child_alter_definition = $alter_definition
            +
             if ($rename_to)
             {
               --let $alter_definition = RENAME TO $rename_to
            + --let $child_alter_definition = RENAME TO mrg.$rename_to
             }
             
             if (!$alter_definition)
            @@ -43,6 +46,9 @@
             }
             
             --let $alter_statement = $alter_statement TABLE $table_name $alter_definition
            +# We don't want to do ONLINE on underlying tables, we are not testing MyISAM
            +--let $child_statement = ALTER TABLE mrg.$table_name $child_alter_definition
            +
             
             
             # We now have the complete ALTER statement in $alter_statement.
            @@ -75,6 +81,20 @@
             # Surround it by --disable_query_log/--enable_query_log
             # if you don't want it to appear in the result output.
             #####################
            +--disable_query_log
            +--disable_warnings
            +
            +# We will only try to alter the underlying table if the main alter was successful
            +if (!$my_errno)
            +{
            + if ($rename_to)
            + {
            + eval ALTER TABLE $rename_to UNION(mrg.$rename_to);
            + }
            + eval $child_statement;
            +}
            +--enable_warnings
            +--enable_query_log
             
             # Unset the parameters, we don't want them to be accidentally reused later
             --let $alter_definition =
            {noformat}

            {quote}
            Note that in both create_table and alter_table we run our additional code with {{disable_query_log}} / {{disable_warnings}}. It's a tradeoff: this way we reduce the number of mismatches (because our additional code does not produce any output), but it will also make investigation more difficult should a problem start somewhere in this code. It's up to the person who maintains the engine suite to decide what's best.

            Example:
            We have a MERGE table which points to an underlying table containing non-unique values. Normally the test assumes that the table under test contains these values; in our case they actually live in the underlying MyISAM table.
            Then the test performs {{ALTER TABLE .. ADD UNIQUE INDEX ...}} and expects it to fail.
            In our case, the statement on the MERGE table will succeed, but the statement on the underlying table will fail quietly; if the test tries to do something else afterwards, it may reveal that the MERGE table and the underlying table have diverged, but it won't be clear from the test output why it happened.
            {quote}

            Now let's try to run the suite:

            {noformat}
            perl ./mtr --suite=storage_engine-myisammrg --force --max-test-fail=0

            Spent 34.141 of 80 seconds executing testcases

            Completed: Failed 41/98 tests, 58.16% were successful.

            {noformat}
            Not great, but not that bad either, considering. Let's look at the results.

            *alter_table* and some other tests produce the following mismatch on SHOW CREATE TABLE:

            {noformat}
            @@ -127,7 +127,7 @@
               `a` int(11) DEFAULT NULL,
               `b` char(8) DEFAULT NULL,
               `c` char(8) DEFAULT NULL
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8
            +) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=utf8 INSERT_METHOD=LAST UNION=(`mrg`.`t1`)
             ALTER TABLE t1 DEFAULT CHARACTER SET = latin1 COLLATE latin1_general_ci;
            {noformat}

            Quite as expected, since we have additional options on our tables; this requires adding an rdiff.


            *alter_table_online*:

            {noformat}
             ALTER ONLINE TABLE t1 MODIFY b <INT_COLUMN> DEFAULT 5;
            -ERROR HY000: Can't execute the given 'ALTER' command as online
            +# ERROR: Statement succeeded (expected results: ER_CANT_DO_ONLINE)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command succeeded unexpectedly.
            +# Functionality or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            This is all right, I guess; it's good that online ALTER can be done, right?
            But this is bad:

            {noformat}
             ALTER ONLINE TABLE t1 CHANGE b new_name <INT_COLUMN>;
            -ERROR HY000: Can't execute the given 'ALTER' command as online
            +ERROR HY000: Unable to open underlying table which is differently defined or of non-MyISAM type or doesn't exist
            +# ERROR: Statement ended with errno 1168, errname ER_WRONG_MRG_TABLE (expected results: ER_CANT_DO_ONLINE)
             ALTER ONLINE TABLE t1 COMMENT 'new comment';
            {noformat}

            Looking earlier in the test output, we find that we are working with temporary tables here. And there is bug MySQL:57657, which says that altering a temporary MERGE table is broken in 5.5. Whether to add an rdiff or disable the test is a question; I think I will disable it, after all, although it's a bit sad. You can choose to be smarter and, since you have your own {{alter_table.inc}} anyway, add some logic there that checks whether the table is temporary.
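
            For reference, here is the entry I add to {{../storage/myisammrg/mysql-test/storage_engine/disabled.def}} (the comment wording is mine):

            {noformat}
            alter_table_online : MySQL:57657 (Altering a temporary MERGE table is broken in 5.5)
            {noformat}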


            *create_table*:

            {noformat}
             CREATE TABLE t1 ENGINE=<STORAGE_ENGINE> <CUSTOM_TABLE_OPTIONS> AS SELECT 1 UNION SELECT 2;
            -SHOW CREATE TABLE t1;
            -Table Create Table
            -t1 CREATE TABLE `t1` (
            - `1` bigint(20) NOT NULL DEFAULT '0'
            -) ENGINE=<STORAGE_ENGINE> DEFAULT CHARSET=latin1
            -SELECT * FROM t1;
            -1
            -1
            -2
            -DROP TABLE t1;
            +ERROR HY000: 'test.t1' is not BASE TABLE
            +# ERROR: Statement ended with errno 1347, errname ER_WRONG_OBJECT (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# The statement|command finished with ER_WRONG_OBJECT.
            +# CREATE TABLE .. AS SELECT or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            {{AS SELECT}} doesn't work with MERGE tables; we didn't consider it in our simple changes to {{create_table.inc}}, because we only use {{AS SELECT}} a few times in the suite, so it seems easier to just accept this difference here. In general, though, it's up to the person who modifies the creation procedure.
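
            The rdiff is created the same way as before:

            {{diff -u suite/storage_engine/create_table.result suite/storage_engine/create_table.reject > ../storage/myisammrg/mysql-test/storage_engine/create_table.rdiff}}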


            *lock*:

            The test is quite messed up, because MERGE children are locked through the parent tables, which the test of course does not expect. E.g. if it locks two tables and then drops them, it expects that nothing is locked any longer, which is not true for MERGE tables. I'm adding an rdiff; locking is very specific to MERGE tables anyway, and needs to be tested as an engine feature rather than as basic functionality.

            The rest are usual mismatches due to unsupported functionality etc.


            The MERGE engine doesn't support partitioning or transactions, but again, let's see what happens, since it's nearly free:

            {{mkdir ../storage/myisammrg/mysql-test/storage_engine/parts}}
            {{mkdir ../storage/myisammrg/mysql-test/storage_engine/trx}}

            {noformat}
            perl ./mtr --suite=storage_engine/*-myisammrg --force --max-test-fail=0
            {noformat}

            All tests failed, of course.

            For all partitioned tables:

            {noformat}
            +ERROR HY000: Engine cannot be used in partitioned tables
            +# ERROR: Statement ended with errno 1572, errname ER_PARTITION_MERGE_ERROR (expected to succeed)
            +# ------------ UNEXPECTED RESULT ------------
            +# [ CREATE TABLE t1 (a INT(11) /*!*/ /*Custom column options*/) ENGINE=MRG_MYISAM /*!*/ /*Custom table options*/ UNION(mrg.t1) INSERT_METHOD=LAST PARTITION BY HASH(a) PARTITIONS 2 ]
            +# The statement|command finished with ER_PARTITION_MERGE_ERROR.
            +# Partitions or the mix could be unsupported|malfunctioning, or the problem was caused by previous errors.
            +# You can change the engine code, or create an rdiff, or disable the test by adding it to disabled.def.
            +# Further in this test, the message might sometimes be suppressed; a part of the test might be skipped.
            +# Also, this problem may cause a chain effect (more errors of different kinds in the test).
            +# -------------------------------------------
            {noformat}

            Transaction tests run after a fashion, but of course the diffs are as extensive as they were for MyISAM. All this is expected, and can be solved either by removing the newly created {{trx}} and {{parts}} subdirs, or by adding rdiffs. It seems reasonable to remove {{parts}} and keep {{trx}}, but on the paranoid assumption that one day an attempt to create a partitioned MERGE table will crash the server, I will keep {{parts}} too; altogether they take less than a second anyway (rejecting table creation and failing everything with "table doesn't exist" is fast). So, I will add rdiffs for each file.

            Running all at once now:

            {noformat}
            perl ./mtr --suite=storage_engine-myisammrg,storage_engine/*-myisammrg

            Spent 46.994 of 70 seconds executing testcases

            Completed: All 119 tests were successful.
            {noformat}

            Since the suite proved to be very useful for testing Cassandra and especially LevelDB, it makes sense to make some further changes to make it far more universal.
            Most importantly, I need to generalize INSERT the same way it's done for CREATE / ALTER TABLE. Now CREATE can be redefined for an engine so that it's possible to add mandatory columns, indexes, etc.; but an INSERT is usually done without a list of fields, so if we added a column to the table, all inserts will fail due to a mismatched number of columns. And even the inserts that do define the field list will only work if our added fields can be populated automatically.
            Instead, INSERT should be extracted into an include file and be able to handle both cases – explicit and implicit list of fields.
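
            A minimal sketch of what such an {{insert.inc}} could look like (the variable names here are hypothetical, not actual suite code):

            {noformat}
            # Hypothetical insert.inc: $insert_values is mandatory,
            # $insert_fields (an explicit field list) is optional
            if ($insert_fields)
            {
              eval INSERT INTO $table_name ($insert_fields) VALUES $insert_values;
            }
            if (!$insert_fields)
            {
              eval INSERT INTO $table_name VALUES $insert_values;
            }
            # Unset the parameters so they are not accidentally reused
            --let $insert_fields =
            --let $insert_values =
            {noformat}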

            Additionally, there were some test cases among LevelDB bug reports that are generic enough and might be worth adding to the suite.

            I decided for now to go with a simple (and quick) solution: to modify the remaining INSERT statements to use an explicit set of columns. It was already partially done before, but many statements remained, especially in inc files. Now all INSERTs (except for those in insert.test and 1st.test that specifically test INSERT INTO <table name> VALUES) should provide the list of columns. It will help with engines like LevelDB (and possibly Cassandra), since now we can modify create_table.inc to add mandatory columns, in LevelDB's case a primary key column when it's missing, and also create a trigger which will populate it upon INSERT. Thus, nearly all test functionality should be applicable.
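
            As an illustration of the idea (not the actual committed code), an overridden {{create_table.inc}} for such an engine could append a key column and create a trigger to populate it, along these lines:

            {noformat}
            # Hypothetical fragment: add a mandatory pk column and auto-populate it on INSERT
            eval CREATE TABLE $table_name (pk INT PRIMARY KEY, $create_definition) ENGINE=$storage_engine;
            eval CREATE TRIGGER tr1 BEFORE INSERT ON $table_name FOR EACH ROW SET NEW.pk = (@pk := IFNULL(@pk,0) + 1);
            {noformat}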

            Pushed the described change into maria/5.5

