MDEV-27339 (MariaDB Server)

Upgrade 10.5 -> 10.7 fails: MariaDB tried to use the ... compression, but its provider plugin is not loaded

Details

    • Type: Bug
    • Status: Stalled
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 10.7.2
    • Fix Version/s: 10.11
    • Component/s: Plugins
    • Labels: None

    Description

      Workflow
      1. Bootstrap and start the DB server MariaDB 10.5.14
           origin/10.5 2776635cb98d35867447d375fdc04a44ef11a697 2021-12-16
      2. Create some initial data and run some DDL/DML in one session.
      3. Dump schemas and table data
      4. Send SIGTERM to the DB server and wait until it has finished
      5. Restart with 10.7.2
          origin/10.7 92a4e76a2c1c15fb44dc0cb05e06d5aa408a8e35 2021-12-14
          fails like
           2021-12-20 18:51:01 0 [Warning] mysqld: MariaDB tried to use the LZO compression, but its provider plugin is not loaded
           2021-12-20 18:51:01 0 [ERROR] InnoDB: Failed to read page 3 from file './test/t7.ibd': Table is compressed or encrypted but uncompress or decrypt failed.
           ...    <=== IMHO this alone is sufficiently bad.
           2021-12-20 18:51:02 0 [ERROR] Incorrect definition of table mysql.event: expected column 'definer' at position 3 to have type varchar(, found type char(141).
           2021-12-20 18:51:02 0 [ERROR] mysqld: Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler.
               <== This might happen for whatever good or bad reason.
      pluto:/data/results/1640025904/TBR-1313A/dev/shm/rqg/1640025904/62/1/rr
      _RR_TRACE_DIR="." rr replay --mark-stdio mysqld-1
                # Fate of the server started with 10.5 until the arrival of SIGTERM
      _RR_TRACE_DIR="." rr replay --mark-stdio mysqld-2
                # Fate of the server started with 10.7
       
      pluto:/data/results/1640025904/TBR-1313A/dev/shm/rqg/1640025904/62/1/data_orig/
                # Copy of the data directory before the restart attempt with 10.7
       
      Reproducing the problem with RQG is not difficult, and roughly any compression algorithm
      used within the test (Snappy, LZMA, LZO, ...) showed up somewhere.
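       
      A hedged minimal sketch of how such a page-compressed table might be created on 10.5
      (table and column names are illustrative, not taken from the RQG grammar; any of the
      listed algorithms can be substituted for 'lzo'):
       
          SET GLOBAL innodb_compression_algorithm = 'lzo';
          CREATE TABLE test.t_compressed (
            c1 INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            b  CHAR(200)
          ) ENGINE=InnoDB PAGE_COMPRESSED=1;
          INSERT INTO test.t_compressed (b) VALUES (REPEAT('Aa', 100));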
       
      RQG
      ====
      # git clone https://github.com/mleich1/rqg --branch experimental RQG
      #
      # GIT_SHOW: HEAD -> experimental, origin/experimental 62e549d1b378ee71215e439c95eb5b1519434137 2021-12-20T18:19:03+01:00
      # rqg.pl  : Version 4.0.4 (2021-12)
      #
      # $RQG_HOME/rqg.pl \
      # --grammar=conf/mariadb/innodb_compression_encryption.yy \
      # --gendata=conf/mariadb/innodb_compression_encryption.zz \
      # --max_gd_duration=1800 \
      # --mysqld=--loose-innodb_encryption_rotate_key_age=2 \
      # --mysqld=--loose-innodb_lock_schedule_algorithm=fcfs \
      # --mysqld=--loose-idle_write_transaction_timeout=0 \
      # --mysqld=--loose-idle_transaction_timeout=0 \
      # --mysqld=--loose-idle_readonly_transaction_timeout=0 \
      # --mysqld=--connect_timeout=60 \
      # --mysqld=--interactive_timeout=28800 \
      # --mysqld=--slave_net_timeout=60 \
      # --mysqld=--net_read_timeout=30 \
      # --mysqld=--net_write_timeout=60 \
      # --mysqld=--loose-table_lock_wait_timeout=50 \
      # --mysqld=--wait_timeout=28800 \
      # --mysqld=--lock-wait-timeout=86400 \
      # --mysqld=--innodb-lock-wait-timeout=50 \
      # --no-mask \
      # --queries=10000000 \
      # --seed=random \
      # --reporters=Backtrace \
      # --reporters=ErrorLog \
      # --reporters=Deadlock1 \
      # --reporters=Upgrade1 \
      # --validators=None \
      # --mysqld=--log_output=none \
      # --mysqld=--log_bin_trust_function_creators=1 \
      # --mysqld=--loose-debug_assert_on_not_freed_memory=0 \
      # --engine=InnoDB \
      # --restart_timeout=240 \
      # --upgrade-test \
      # --mysqld=--plugin-load-add=file_key_management.so \
      # --mysqld=--loose-file-key-management-filename=$RQG_HOME/conf/mariadb/encryption_keys.txt \
      # --duration=120 \
      # --mysqld=--loose-innodb_fatal_semaphore_wait_threshold=300 \
      # --mysqld=--loose-innodb_read_only_compressed=OFF \
      # --mysqld=--innodb_stats_persistent=on \
      # --mysqld=--innodb_adaptive_hash_index=off \
      # --mysqld=--log-bin \
      # --mysqld=--sync-binlog=1 \
      # --mysqld=--loose-innodb_evict_tables_on_commit_debug=off \
      # --mysqld=--loose-max-statement-time=30 \
      # --threads=9 \
      # --mysqld=--innodb-use-native-aio=0 \
      # --mysqld=--loose-gdb \
      # --mysqld=--loose-debug-gdb \
      # --rr=Extended \
      # --rr_options=--chaos --wait \
      # --mysqld=--innodb_undo_tablespaces=3 \
      # --mysqld=--innodb_undo_log_truncate=ON \
      # --vardir_type=fast \
      # --mysqld=--innodb_page_size=32K \
      # --mysqld=--innodb-buffer-pool-size=24M \
      # --mysqld=--loose-innodb_log_files_in_group=2 \
      # --no_mask \
      # <local settings>
      


          Activity

            elenst Elena Stepanova added a comment -

            mleich,

            But you really don't load the provider plugin on the upgraded server, do you? I don't see it anywhere in the test options.
            In this case, there is no need for rr or a stress test to repeat this. Create an LZO-compressed table on 10.5, shut down, start 10.7 without the LZO provider, and observe the error.
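
            A hedged sketch of checking that scenario on the upgraded 10.7 server started without the provider options (table name illustrative, matching the sketch in the description):

                SELECT PLUGIN_NAME, PLUGIN_STATUS
                  FROM information_schema.PLUGINS
                 WHERE PLUGIN_NAME LIKE 'provider%';      -- expect no LZO provider row
                SELECT COUNT(*) FROM test.t_compressed;   -- fails: the pages cannot be decompressed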

            serg Sergei Golubchik added a comment -

            And how is it a bug then?
            mleich Matthias Leich added a comment - - edited

            Assigning

                --mysqld=--plugin-load-add=provider_lzo.so --mysqld=--plugin-load-add=provider_bzip2.so --mysqld=--plugin-load-add=provider_lzma --mysqld=--plugin-load-add=provider_snappy --mysqld=--plugin-load-add=provider_lz4

            to both server starts (RQG just does that) has the following effects:
            1. The 10.5 server comes up, even though error messages like
                [ERROR] mysqld: Can't open shared library '/data/Server_bin/10.5D_asan/lib/plugin/provider_lzo.so' (errno: 2, cannot open shared object file: No such file or directory)
                appear in the error log. The message itself is correct, because lib/plugin/provider_lzo.so really does not exist.
            2. The effect described at the top, "MariaDB tried to use the LZO compression, but its provider plugin ...",
                disappears.
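
            A hedged side note: on the upgraded 10.7 server the providers can also be registered permanently from SQL, assuming the provider shared libraries actually exist under the server's plugin directory (they are separate plugins and, as the error message above shows, may simply not be installed):

                INSTALL SONAME 'provider_lzo';
                INSTALL SONAME 'provider_snappy';
                INSTALL SONAME 'provider_lzma';
                INSTALL SONAME 'provider_bzip2';
                INSTALL SONAME 'provider_lz4';
                -- once registered in mysql.plugin they are loaded again on later
                -- restarts, so --plugin-load-add is no longer required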

            marko Marko Mäkelä added a comment -

            MDEV-27887 is another attempt at explaining how this is a bug. Basically, as noted in MDEV-15912 and MDEV-23755, an upgrade after a normal shutdown is expected to work. Earlier, there had been instructions to execute a slow shutdown (innodb_fast_shutdown=0) to ensure that the history of all transactions will be purged.

            As noted in MDEV-27887, it is possible that some history of old transactions remains to be processed after the upgrade. If the compression libraries are unavailable, this would lead to an obscure error message, not on the first SQL-layer access to affected tables, but possibly before any table has been accessed from SQL.
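
            A hedged sketch of a pre-upgrade check on the old server, before the final shutdown: verify that no transaction history is left for purge to process (this only narrows the window, it is not the fix discussed here):

                SET GLOBAL innodb_fast_shutdown = 0;
                SHOW GLOBAL STATUS LIKE 'Innodb_history_list_length';
                -- ideally 0 before shutting down; an open read view (e.g.
                -- START TRANSACTION WITH CONSISTENT SNAPSHOT in another session,
                -- as in the test below) prevents it from reaching 0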

            marko Marko Mäkelä added a comment -

            Here is a test case change that repeats the problem:

            ./mtr innodb.innodb-page_compression_snappy
            

            10.7 9e314fcf6e666ce6fdbdd8ca1ae23d6c4b389b21

            At line 105: query 'SET GLOBAL innodb_fast_shutdown=0' failed: <Unknown> (2006): Server has gone away
            

            These changes also speed up the test by working around MDEV-24813 (no locks will be acquired on InnoDB temporary tables) and by actually triggering a log checkpoint via page flushing.

            diff --git a/mysql-test/suite/innodb/include/innodb-page-compression.inc b/mysql-test/suite/innodb/include/innodb-page-compression.inc
            index b16edcf2a28..df93fe4cfd0 100644
            --- a/mysql-test/suite/innodb/include/innodb-page-compression.inc
            +++ b/mysql-test/suite/innodb/include/innodb-page-compression.inc
            @@ -4,7 +4,7 @@
             # This may be triggered on a slow system or one that lacks native AIO.
             call mtr.add_suppression("InnoDB: Trying to delete tablespace.*pending operations");
             --enable_query_log
            -create table innodb_normal (c1 int not null auto_increment primary key, b char(200)) engine=innodb;
            +create temporary table innodb_normal (c1 int not null auto_increment primary key, b char(200)) engine=innodb;
             create table innodb_page_compressed1 (c1 int not null auto_increment primary key, b char(200)) engine=innodb page_compressed=1 page_compression_level=1;
             create table innodb_page_compressed2 (c1 int not null auto_increment primary key, b char(200)) engine=innodb page_compressed=1 page_compression_level=2;
             create table innodb_page_compressed3 (c1 int not null auto_increment primary key, b char(200)) engine=innodb page_compressed=1 page_compression_level=3;
            @@ -15,6 +15,10 @@ create table innodb_page_compressed7 (c1 int not null auto_increment primary key
             create table innodb_page_compressed8 (c1 int not null auto_increment primary key, b char(200)) engine=innodb page_compressed=1 page_compression_level=8;
             create table innodb_page_compressed9 (c1 int not null auto_increment primary key, b char(200)) engine=innodb page_compressed=1 page_compression_level=9;
             
            +connect (prevent_purge,localhost,root);
            +START TRANSACTION WITH CONSISTENT SNAPSHOT;
            +connection default;
            +
             --disable_query_log
             begin;
             let $i = 2000;
            @@ -40,20 +44,15 @@ insert into innodb_page_compressed9 select * from innodb_normal;
             commit;
             --enable_query_log
             
            -select count(*) from innodb_page_compressed1;
            -select count(*) from innodb_page_compressed3;
            -select count(*) from innodb_page_compressed4;
            -select count(*) from innodb_page_compressed5;
            -select count(*) from innodb_page_compressed6;
            -select count(*) from innodb_page_compressed6;
            -select count(*) from innodb_page_compressed7;
            -select count(*) from innodb_page_compressed8;
            -select count(*) from innodb_page_compressed9;
            -
             #
             # Wait until pages are really compressed
             #
            -let $wait_condition= select variable_value > 0 from information_schema.global_status where variable_name = 'INNODB_NUM_PAGES_PAGE_COMPRESSED';
            +SET GLOBAL innodb_max_dirty_pages_pct=0.0;
            +
            +let $wait_condition =
            +SELECT variable_value = 0
            +FROM information_schema.global_status
            +WHERE variable_name = 'INNODB_BUFFER_POOL_PAGES_DIRTY';
             --source include/wait_condition.inc
             
             --let $MYSQLD_DATADIR=`select @@datadir`
            @@ -62,12 +61,7 @@ let $wait_condition= select variable_value > 0 from information_schema.global_st
             
             --source include/shutdown_mysqld.inc
             
            ---let t1_IBD = $MYSQLD_DATADIR/test/innodb_normal.ibd
            ---let SEARCH_RANGE = 10000000
             --let SEARCH_PATTERN=AaAaAaAa
            ---echo # innodb_normal expected FOUND
            --- let SEARCH_FILE=$t1_IBD
            --- source include/search_pattern_in_file.inc
             --let t1_IBD = $MYSQLD_DATADIR/test/innodb_page_compressed1.ibd
             --echo # innodb_page_compressed1 page compressed expected NOT FOUND
             -- let SEARCH_FILE=$t1_IBD
            @@ -105,7 +99,11 @@ let $wait_condition= select variable_value > 0 from information_schema.global_st
             -- let SEARCH_FILE=$t1_IBD
             -- source include/search_pattern_in_file.inc
             
            +let $restart_parameters = --disable-provider-snappy
             -- source include/start_mysqld.inc
            +disconnect prevent_purge;
            +SET GLOBAL innodb_fast_shutdown=0;
            +-- source include/restart_mysqld.inc
             
             select count(*) from innodb_page_compressed1;
             select count(*) from innodb_page_compressed3;
            @@ -120,7 +118,6 @@ select count(*) from innodb_page_compressed9;
             let $wait_condition= select variable_value > 0 from information_schema.global_status where variable_name = 'INNODB_NUM_PAGES_PAGE_DECOMPRESSED';
             --source include/wait_condition.inc
             
            -drop table innodb_normal;
             drop table innodb_page_compressed1;
             drop table innodb_page_compressed2;
             drop table innodb_page_compressed3;
            

            If you remove the $restart_parameters, the test should not crash, but the SELECT statements after the restart should report errors.

            Side note: The size of this test could be reduced further, by writing just one record to each table, now that the test should no longer depend on a small buffer pool size. I will do the test case cleanup separately.


            marko Marko Mäkelä added a comment -

            Sorry, I made a mistake in my test: the relevant snippet should have been

            let $restart_parameters = --disable-provider-snappy;
            -- source include/start_mysqld.inc
            disconnect prevent_purge;
            SET GLOBAL innodb_fast_shutdown=0;
            -- source include/restart_mysqld.inc
            

            A semicolon was missing at the end of the let line, so the SQL statement was being submitted while no server was running.

            In a corrected form, this test does show a form of corruption even though it does not lead to a server crash:

            let $restart_parameters = --disable-provider-snappy;
            -- source include/start_mysqld.inc
            disconnect prevent_purge;
            SET GLOBAL innodb_fast_shutdown=0;
            let $restart_parameters =;
            -- source include/restart_mysqld.inc
             
            select count(*) from innodb_page_compressed1;
            

            During the slow shutdown, the compressed tables are inaccessible, and a number of errors like the following will be emitted by purge:

            10.7 9e314fcf6e666ce6fdbdd8ca1ae23d6c4b389b21

            2022-04-25  9:04:09 0 [ERROR] InnoDB: Failed to read page 2 from file './test/innodb_page_compressed9.ibd': Table is compressed or encrypted but uncompress or decrypt failed.
            2022-04-25  9:04:09 0 [ERROR] InnoDB: Failed to read page 3 from file './test/innodb_page_compressed9.ibd': Table is compressed or encrypted but uncompress or decrypt failed.
            2022-04-25  9:04:09 0 [ERROR] InnoDB: Failed to read page 69 from file './test/innodb_page_compressed9.ibd': Table is compressed or encrypted but uncompress or decrypt failed.
            2022-04-25  9:04:09 0 [ERROR] InnoDB: Failed to read page 122 from file './test/innodb_page_compressed9.ibd': Table is compressed or encrypted but uncompress or decrypt failed.
            

            After the restart, the first leaf page that will be accessed by the SELECT (for me: page number 4) shows that the history had not been reset during the slow shutdown:

            00000060: 0200 1b69 6e66 696d 756d 0003 000b 0000  ...infimum......
            00000070: 7375 7072 656d 756d 0000 0010 00df 8000  supremum........
            00000080: 0001 0000 0000 0036 9400 0001 4701 1041  .......6....G..A
            00000090: 6141 6141 6141 6141 6141 6141 6141 6141  aAaAaAaAaAaAaAaA
            

            The next-record pointer at 0x61 points to 0x63+0x1b = 0x7e. At that offset we can find (c1,DB_TRX_ID,DB_ROLL_PTR,b)=(1,0x36,0x94…,'AaAa…'). If the history had been purged as expected, the DB_TRX_ID and DB_ROLL_PTR would be 0 and 2⁵⁵.

            I think that a proper fix of this bug would be to make InnoDB refuse to start up if any of the undo log records traversed during trx_lists_init_at_db_start() point to compressed tables whose algorithm has not been loaded. Such a fix would only be possible if the .frm file stores the required compression algorithm. InnoDB itself only stores it in individual data pages, not in its data dictionary.

            marko Marko Mäkelä added a comment -

            I introduced a smaller, non-restarting test innodb.innodb_page_compressed that will replace the file that the above patch is for.

            People

              serg Sergei Golubchik
              mleich Matthias Leich
              Votes: 0
              Watchers: 4
