MariaDB Server: MDEV-36433

file_key_management.so: .frm key storage items

Details

    Description

      $ cat keyfile 
      1;d2826644f7ce1fadbb11534b062618e100a3e8a3eea671faaffd7cb20196829d
      2;a1ecfbe7b6118151200e28d10c496fae6f8b48665c5ae040f82242b4cd66f280
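
For reference, a keyfile in this format (one key_id;hex_key pair per line, with 32-byte keys) can be generated with OpenSSL:

```shell
# Write two 256-bit keys (IDs 1 and 2) in file_key_management format:
# <key_id>;<hex_encoded_key>
for id in 1 2; do
  printf '%s;%s\n' "$id" "$(openssl rand -hex 32)"
done > keyfile
```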
      

      Then start a server with a fresh data dir with:

      # Update path as needed
      --plugin-load-add=file_key_management.so --file-key-management-filename="${PWD}/keyfile"
      

      Then execute:

      SET SESSION innodb_default_encryption_key_id=1;
      SET GLOBAL innodb_encrypt_tables=ON;
      CREATE TABLE t1 (c INT) ENGINE=InnoDB;
       
      SET SESSION innodb_default_encryption_key_id=2;
      SET GLOBAL innodb_encrypt_tables=OFF;
      SELECT @@GLOBAL.innodb_encrypt_tables;  # Confirmed off (and there is no @@SESSION.innodb_encrypt_tables var)
      CREATE TABLE t2 (c INT) ENGINE=InnoDB;
       
      SELECT NAME, ENCRYPTION_SCHEME, CURRENT_KEY_ID FROM information_schema.INNODB_TABLESPACES_ENCRYPTION WHERE NAME LIKE 't%';
      

      Stop the server and observe that ENCRYPTION_KEY_ID 2 was used:

      $ grep --binary-files=text -o 'ENCRYPTION_KEY_ID...' data/test/t2.frm
      ENCRYPTION_KEY_ID2
      

      This should not be the case as innodb_encrypt_tables was set to OFF.

      Additionally, the SELECT query reports only t1 as using encryption:

      CS 11.8.1 33e0796e7a154e02a5e53c55cefc5d6feb4f5e6d (Optimized) Build 15/02/2025

      11.8.1-opt>SELECT NAME, ENCRYPTION_SCHEME, CURRENT_KEY_ID FROM information_schema.INNODB_TABLESPACES_ENCRYPTION WHERE NAME LIKE 't%';
      +---------+-------------------+----------------+
      | NAME    | ENCRYPTION_SCHEME | CURRENT_KEY_ID |
      +---------+-------------------+----------------+
      | test/t1 |                 1 |              1 |
      +---------+-------------------+----------------+
      

      Present in 10.5-12.0

Activity

            The following test shows that t1 is not encrypted even after setting innodb-encryption-threads to 4 (i.e. item #6 above):

            --source include/have_innodb.inc
            --source include/have_file_key_management.inc
             
            SET SESSION innodb_default_encryption_key_id=1;
            SET GLOBAL innodb_encrypt_tables=ON;
            CREATE TABLE t1 (c INT) ENGINE=InnoDB;
             
            SET SESSION innodb_default_encryption_key_id=2;
            SET GLOBAL innodb_encrypt_tables=OFF;
            CREATE TABLE t2 (c INT) ENGINE=InnoDB;
             
            --let $restart_parameters=--innodb-encryption-threads=4
            --source include/restart_mysqld.inc
             
            SET GLOBAL innodb_encrypt_tables=ON;
            

            Run as:

            # Update path as needed
            ./mtr --mysqld=--plugin-load-add=file_key_management.so --mysqld=--file-key-management-filename="${PWD}/../keyfile" test   # assuming main/test.test
            

            Then check the outcome by inspecting the contents of t1.ibd (not encrypted) and t2.ibd (encrypted). Checking towards the end of each file makes it immediately clear.
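
One way to automate that inspection is a rough heuristic (an assumption, not an InnoDB API): the tail of a freshly created, unencrypted .ibd file is dominated by zero bytes, while encrypted page images look close to uniformly random:

```python
def looks_encrypted(path: str, tail: int = 16384) -> bool:
    """Heuristic: read the last `tail` bytes of the file and call it
    encrypted when fewer than half of them are zero bytes."""
    with open(path, "rb") as f:
        f.seek(0, 2)                       # move to end of file
        f.seek(max(0, f.tell() - tail))    # back up by `tail` bytes
        data = f.read()
    return data.count(0) / max(1, len(data)) < 0.5
```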

            Roel Van de Paar added a comment.

            I ran the following test, placed in the mysql-test/suite/encryption/t/ directory:

            --source include/have_innodb.inc
            --source include/have_file_key_management_plugin.inc
             
            SET SESSION innodb_default_encryption_key_id=1;
            SET GLOBAL innodb_encrypt_tables=ON;
            CREATE TABLE t1 (c INT) ENGINE=InnoDB;
             
            SET SESSION innodb_default_encryption_key_id=2;
            SET GLOBAL innodb_encrypt_tables=OFF;
            CREATE TABLE t2 (c INT) ENGINE=InnoDB;
             
            --let $restart_parameters=--innodb-encryption-threads=4
            SET GLOBAL innodb_fast_shutdown=0;
            --source include/restart_mysqld.inc
             
            SET GLOBAL innodb_encrypt_tables=ON;
            

            When I run it with ./mtr --rr and check rr replay var/log/mysqld.1.rr/latest-trace, the buf_pool.flush_list.count will grow from 0 to 181, all due to the "encryption threads" that do not encrypt anything but actually only mark some buffer pool pages as dirty. During shutdown, these pages would be written back to the data files. With some GDB commands we can check the page identifiers:

            set $b=buf_pool.flush_list.start
            while ($b)
            print/x $b->id_
            set $b=$b->list.next
            end
            

            We can see that most of these pages are for the InnoDB system tablespace. For any other tablespaces, the only dirtied pages are the first pages, in my case, of tablespaces 1, 2, 3, and 6. These pages are not encrypted.

            If I add a SELECT SLEEP(10) to the test, then the encryption threads would have more time to mark pages as dirty, but we’d still end up with 181 dirty pages before innobase_end() is invoked.

            I was curious to see which pages would be loaded into the buffer pool during the server restart. For test/t2.ibd (tablespace 6), we actually get several pages loaded via buf_dump_load_task() because I did not disable the buffer pool dump/reload function. The "encryption threads" do not seem to be interested in those pages. The tablespace test/t1.ibd was not being accessed at all.

            If I run the test with skip_innodb_buffer_pool_load_at_startup, then I can observe the following:

            1. pages 0,1,2,3 (and possibly beyond) of tablespaces 2 (mysql/innodb_index_stats.ibd), 1 (mysql/innodb_table_stats.ibd) and 3 (mysql/transaction_registry.ibd) will be loaded by the encryption threads
            2. pages 0,1,2,3 of tablespace 6 (test/t2.ibd) will be loaded by the encryption threads
            3. tablespace 5 (test/t1.ibd) will not be accessed at all
            4. the buf_pool.flush_list.count still only grows to 181 by the time innobase_end() is called. There are some writes before that; for example, page 3 of tablespace 6 would have been marked dirty by fil_crypt_rotate_page() and written back by buf_flush_page_cleaner().

            All tablespace metadata is being added to fil_system in dict_check_tablespaces_and_store_max_id(), including that of test/t1.ibd. That file was actually opened during the execution of the SET GLOBAL statement:

            10.11 669f719cc21286020c95eec11f0d09b74f96639e

            #0  0x00005598d46d585f in fil_node_open_file_low (node=node@entry=0x5598dfcb7be8, page=page@entry=0x0, no_lsn=<optimized out>) at /mariadb/10.11/storage/innobase/fil/fil0fil.cc:355
            #1  0x00005598d46d36f2 in fil_node_open_file (node=node@entry=0x5598dfcb7be8, page=page@entry=0x0, no_lsn=0x0) at /mariadb/10.11/storage/innobase/fil/fil0fil.cc:461
            #2  0x00005598d3d3366d in fil_space_t::prepare_acquired (this=0x5598dfcb7a78) at /mariadb/10.11/storage/innobase/fil/fil0fil.cc:671
            #3  0x00005598d46e3847 in fil_space_t::acquire_if_not_stopped (this=0x5598dfcb7a78) at /mariadb/10.11/storage/innobase/fil/fil0crypt.cc:1348
            #4  fil_crypt_default_encrypt_tables_fill () at /mariadb/10.11/storage/innobase/fil/fil0crypt.cc:2155
            #5  0x00005598d46e3b97 in fil_crypt_set_encrypt_tables (val=0x1) at /mariadb/10.11/storage/innobase/fil/fil0crypt.cc:2252
            #6  0x00005598d4443226 in innodb_encrypt_tables_update (save=0x7fe7a40187b8) at /mariadb/10.11/storage/innobase/handler/ha_innodb.cc:18751
            

            serg, I understand that you must have been involved in the design of innodb_encrypt_tables when it was developed for MariaDB Server 10.1. Can you clarify what the intended behaviour is regarding which tablespaces are marked as dirty, and why such a fuzzy asynchronous interface was chosen in the first place?

            Marko Mäkelä added a comment.

            Why was this reopened? The ENCRYPTION_KEY_ID-in-the-.frm behaviour doesn't seem to be a bug.

            The .frm corruption likely is one, but it must be reported separately.

            Background re-encryption is also off-topic here.

            Sergei Golubchik added a comment.
            Roel Van de Paar added a comment (edited):

            (As mentioned, leaving out background encryption, an incorrect or unknown key being used for encryption after restart, and .frm-corruption-related items)

            If all of the following are correct:
            (Note: for items 1, 2 and 3 it is assumed that no ENCRYPTION_KEY_ID is specified in the CREATE TABLE statement)
            1. If innodb_default_encryption_key_id is set to (or left at) the default (1), then no ENCRYPTION_KEY_ID should be written to the .frm file
            2. However, in all other cases (i.e. innodb_default_encryption_key_id is set to a value >1), ENCRYPTION_KEY_ID should be written to the .frm file
            3. Even if innodb_encrypt_tables=OFF (and innodb_default_encryption_key_id>1), ENCRYPTION_KEY_ID should still be written to the .frm file
            4. If ENCRYPTION_KEY_ID is manually specified in CREATE TABLE, it should be written to the .frm file, even if it is 1 (in contrast to item #1)
            5. Similarly to item #4, if ALTER TABLE ... ENCRYPTION_KEY_ID=1 is used on an ENCRYPTION_KEY_ID=2 table, then ENCRYPTION_KEY_ID 1 should be written to the .frm file (in contrast to item #1)

            Then it would be ok to close this ticket, as this is the current status of things.


            Yes, this is the common behavior for all engine-defined attributes.

            Sergei Golubchik added a comment.

            People

              Sergei Golubchik
              Roel Van de Paar