MariaDB Server / MDEV-12933

sort out the compression library chaos

Details

    Description

      As MariaDB gains more storage engines, and as those engines gain more features, MariaDB can optionally use more and more compression libraries for various purposes.

      InnoDB, TokuDB, RocksDB — they can all use different sets of compression libraries. Compiling them all in would result in a lot of run-time/rpm/deb dependencies, most of which will never be used by most users. Not compiling them in would result in requests to compile them in. While most users don't use all of these libraries, many users use some of them.

      A solution could be to load these libraries on request, without creating a packaging dependency. There are different ways to do it:

      • hide all compression libraries behind a single unified compression API — either develop our own or use something like Squash. This would require changing all engines to use this API
      • use the same approach as in server services — create a service per compression library; a service implementation would just return an error code for any function invocation if the corresponding library is not installed. This way, maybe, we could avoid modifying all affected storage engines
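The first option could look something like the following sketch: a single vtable that every engine calls, with one adapter per library registered behind it. All names here are hypothetical — neither this struct nor the registry exists in the server — and a trivial "copy" backend stands in for a real library adapter.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical unified compression API (option 1), in the spirit of Squash:
   engines see one vtable, each backing library contributes an adapter. */
struct compression_api {
    const char *name;
    int (*compress)(const void *src, size_t n, void *dst, size_t *dn);
    int (*decompress)(const void *src, size_t n, void *dst, size_t *dn);
};

/* Trivial "copy" backend standing in for a real library adapter. */
static int copy_xform(const void *src, size_t n, void *dst, size_t *dn)
{
    memcpy(dst, src, n);
    *dn = n;
    return 0;
}

static const struct compression_api registry[] = {
    { "copy", copy_xform, copy_xform },
};

/* Engines look algorithms up by name; NULL means "not supported". */
static const struct compression_api *compression_lookup(const char *name)
{
    for (size_t i = 0; i < sizeof registry / sizeof registry[0]; i++)
        if (strcmp(registry[i].name, name) == 0)
            return &registry[i];
    return NULL;
}
```

The cost of this option, as noted above, is that every engine would have to be ported from its library-specific calls to this one API.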


          Activity

            robertbindar Robert Bindar added a comment -

            This was a GSoC project in 2020 where Kartik Soneji made significant progress. There is a PR in the server which is currently in review.

            marko Marko Mäkelä added a comment -

            I think that before we can advance on this, we must demonstrate the usefulness of each compression implementation that we intend to support in the packages that we distribute.

            Enabling support for some compression library will effectively extend our file format. If we later remove support, there will be complaints from those who would have to convert their files to a supported format.

            Enabling support for every conceivable compression library would unnecessarily bloat the code.

            Because InnoDB is a widely used storage engine, I think that this task is blocked by MDEV-11068.
            robertbindar Robert Bindar added a comment -

            Hey marko! I love that you still keep an eye on this, thanks a lot! I'm probably looking at this from the wrong perspective, because I still don't understand how this refactoring job enables support for compression libraries other than the ones we already support.

            In my understanding, assuming we merge this project into the server, it's hard to say that the project will enable support for a new compression library XYZ (one that we don't currently support). To consider such a library supported, we would first have to implement a service for XYZ, then add code to a storage engine where we want to use that compression method; only after these two steps are done can users install libXYZ on their system, launch the server with --use-compression=XYZ and configure whatever variables make a storage engine compress with XYZ.

            Let me know if I'm wrong. If you can explain in a bit more detail so that my silly brain understands it too, I would appreciate it a lot.

            I do agree though that this task should be blocked by MDEV-11068; solving it first might mean less work for this task.

            serg Sergei Golubchik added a comment -

            No, I don't think you should wait for MDEV-11068. Let's start pushing this at last. Move at least one library into a run-time dependency, push. Rinse and repeat for all the other compression libraries. When you get to bzip2 — then you'll get to MDEV-11068.

            marko Marko Mäkelä added a comment -

            The preliminary results that I have seen for MDEV-11068 suggest that LZ4 could be a reasonably fast alternative to zlib, but its compression is worse than zlib even at innodb_compression_level=1. Snappy did not seem any better.
            serg Sergei Golubchik added a comment - - edited

            to do:

            • static plugin builds
            • prevent unloading
            • debian
            danblack Daniel Black added a comment -

            pushed bb-10.7-danielblack-mdev-12933-fixup - tested locally on clang-12

            As a fix for this compile error:

            https://buildbot.mariadb.org/#/builders/168/builds/5190/steps/7/logs/stdio

            [ 76%] Building C object storage/maria/CMakeFiles/aria_chk.dir/aria_chk.c.o
            /buildbot/amd64-ubuntu-1804-clang10-asan/build/storage/innobase/row/row0import.cc:3508:36: error: operator '?:' has lower precedence than '+'; '+' will be evaluated first [-Werror,-Wparentheses]
                            (provider_service_lzo->is_loaded)?
                            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
            /buildbot/amd64-ubuntu-1804-clang10-asan/build/storage/innobase/row/row0import.cc:3508:36: note: place parentheses around the '+' expression to silence this warning
                            (provider_service_lzo->is_loaded)?
                                                             ^
                                                             )
            /buildbot/amd64-ubuntu-1804-clang10-asan/build/storage/innobase/row/row0import.cc:3508:36: note: place parentheses around the '?:' expression to evaluate it first
                            (provider_service_lzo->is_loaded)?
                                                             ^
                            (
            1 error generated.
            storage/innobase/CMakeFiles/innobase.dir/build.make:1958: recipe for target 'storage/innobase/CMakeFiles/innobase.dir/row/row0import.cc.o' failed
            make[2]: *** [storage/innobase/CMakeFiles/innobase.dir/row/row0import.cc.o] Error 1
            make[2]: *** Waiting for unfinished jobs....
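The warning boils down to C operator precedence: `?:` binds more loosely than `+`, so `len + cond ? a : b` parses as `(len + cond) ? a : b` rather than `len + (cond ? a : b)`. A minimal illustration (the function and variable names are hypothetical, not the actual row0import.cc code):

```c
/* `+` binds tighter than `?:`, so without parentheses the whole sum
   becomes the condition. Hypothetical stand-in for the row0import.cc line. */
static int buggy_parse(int base, int loaded)
{
    return base + loaded ? 100 : 200;     /* == (base + loaded) ? 100 : 200 */
}

static int intended_parse(int base, int loaded)
{
    return base + (loaded ? 100 : 200);   /* what the code actually meant */
}
```

With base = 10 and loaded = 0, the buggy parse yields 100 (the sum 10 is truthy), while the intended parse yields 210 — hence the clang -Wparentheses diagnostic.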
            

            danblack Daniel Black added a comment -

            Second embedded compilation fix added to the same branch. Please review/cherry-pick as needed.

            serg Sergei Golubchik added a comment -

            marko, could you please review the InnoDB changes in this patch?
            marko Marko Mäkelä added a comment - - edited

            I am sorry, but MDEV-24258 changed the code in dict_table_open_on_name(). The changes to that function will have to be rebased.

            In fil_node_open_file_low() I would suggest invoking sql_print_error() directly and following the common formatting rules (no TABs, and { on a separate line).

            In innodb_init_params() and innodb_compression_algorithm_validate() we could use switch and page_compression_algorithms to avoid code duplication.

            Otherwise the InnoDB changes look fine to me.

            serg Sergei Golubchik added a comment -

            Rebased, fixed formatting, removed the code duplication.

            I didn't change fil_node_open_file_low() to use sql_print_error() because this function uses ib:: just a few lines above, so it'd look rather inconsistent.
            serg Sergei Golubchik added a comment -

            A description of what was done:

            bzip2/lz4/lzma/lzo/snappy compression is now provided via services.

            They're almost like normal services, but they live in include/providers/ and they're supposed to provide exactly the same interface as the original compression libraries (not everything, only enough of it for the code to compile).

            The services are implemented via dummy functions that return the corresponding error values (LZMA_PROG_ERROR, LZO_E_INTERNAL_ERROR, etc).

            The actual compression libraries are linked into the corresponding provider plugins. Providers are daemon plugins that, when loaded, replace the service pointers to point to the actual compression functions.

            That is, the run-time dependency on compression libraries is now on the plugins; the server doesn't need any compression libraries to run, but will automatically support the compression when a plugin is loaded.

            InnoDB and Mroonga use compression plugins now. RocksDB doesn't, because it comes with standalone utility binaries that cannot load plugins.

            In other words, InnoDB (and Mroonga) support all compression algorithms now. There is no need for a special build to support, for example, snappy; one only needs to install the corresponding plugin. The server package (RPM or DEB) itself no longer depends on any compression libraries (besides zlib, and except libraries that other libraries might need indirectly). There is one DEB/RPM package per provider plugin, and it depends on the corresponding compression library. When it is installed, the server gains the ability to use that compression. If it is not installed, using the compression will result in an error.
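A rough sketch of the mechanism described above, with hypothetical names (the real service definitions live under include/providers/ and differ in detail): the server links a stub vtable whose functions just return an error code, and a provider plugin's init hook swaps in the real library functions.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative error codes; LZMA_PROG_ERROR mirrors liblzma's constant. */
#define LZMA_OK 0
#define LZMA_PROG_ERROR 11

/* Hypothetical service vtable covering just enough of the lzma API. */
struct provider_service_lzma {
    int is_loaded;
    int (*compress)(const void *src, size_t n, void *dst, size_t *dn);
};

/* Dummy stub linked into the server: always fails until a provider loads. */
static int lzma_compress_stub(const void *src, size_t n, void *dst, size_t *dn)
{
    (void)src; (void)n; (void)dst; (void)dn;
    return LZMA_PROG_ERROR;
}

static struct provider_service_lzma lzma_service = { 0, lzma_compress_stub };

/* Placeholder for the real liblzma-backed function inside the plugin
   (an identity copy stands in for actual compression here). */
static int lzma_compress_real(const void *src, size_t n, void *dst, size_t *dn)
{
    memcpy(dst, src, n);
    *dn = n;
    return LZMA_OK;
}

/* The provider (daemon) plugin's init hook redirects the service pointers. */
static void provider_lzma_init(void)
{
    lzma_service.compress = lzma_compress_real;
    lzma_service.is_loaded = 1;
}
```

Until provider_lzma_init() runs, every call through the service fails with the library's own error code, which is exactly what an engine sees when the plugin package is not installed.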

            elenst Elena Stepanova added a comment -

            There is an obvious backward compatibility problem here. I assume it is expected, as it is inevitable with this solution, and I don't see a way around it.

            Due to that very library chaos that we have had so far, current releases have different subsets of compression algorithms on different systems. For example, deb-based ones have lz4 enabled, rpms have lzma and more.
            Since they have been enabled for a long time, it is quite likely that there are user tables created with these compression algorithms.

            Now, when an upgrade to 10.7 is performed, all of them will become disabled, and to enable them again users need to install extra packages.
            However, users can hardly guess in advance to install extra packages upon upgrade. So, after they have upgraded, the tables will be unreadable, mysql_upgrade will fail, etc.

            And even if they do think to check whether extra packages may be needed, it looks like there isn't an easy way to determine which compression algorithms are de facto used in a given instance. Or, if there is, I couldn't find it so far.

            marko Marko Mäkelä added a comment -

            elenst, you are spot on. Before the introduction of innodb_checksum_algorithm=full_crc32 and MDEV-18644, each page in an InnoDB tablespace could be compressed with a different algorithm, depending on the value of the global variable innodb_compression_algorithm when the page was written. The default value of the checksum algorithm was changed in MDEV-19534, and it only affects newly created files. If a file was created in the full_crc32 format, it will stay in that format. The full_crc32 format can be detected by a bitwise AND between the value 16 and FSP_SPACE_FLAGS, which we expose via INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES.FLAG. If the full_crc32 format is being used, then all pages in the file will be either uncompressed or compressed with the algorithm identified by (flags>>5)&7. If flags&16 is 0, then users are out of luck: basically any algorithm could have been used.
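The bit arithmetic in the comment above can be sketched directly; the constants (16 for the full_crc32 flag, bits 5..7 for the algorithm) come from the comment itself, while the helper names are mine:

```c
#include <stdint.h>

/* Bit 4 (value 16) of FSP_SPACE_FLAGS marks the full_crc32 format. */
static int is_full_crc32(uint32_t flags)
{
    return (flags & 16) != 0;
}

/* In the full_crc32 format, bits 5..7 of the flags identify the page
   compression algorithm; 0 means the pages are uncompressed. */
static unsigned page_compression_id(uint32_t flags)
{
    return (flags >> 5) & 7;
}
```

So a FLAG value of 16 means full_crc32 with uncompressed pages, while 16 | (2 << 5) = 80 means full_crc32 with algorithm number 2; for files where bit 4 is clear, the flags say nothing about which algorithm each page used.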
            elenst Elena Stepanova added a comment - - edited

            Other intermediate notes:

            • RPM builders are still missing some libraries, e.g. CentOS/RHEL only builds 2 providers, Fedora builds 3, SUSE builds 4. Apparently we need to install the missing ones, if the idea is to build all providers on all systems.
            • since RocksDB doesn't use the new mechanism, as a side effect of the change in the build environment the plugin and the tools get new dependencies, e.g.

               === /usr/lib/mysql/plugin/ha_rocksdb.so
              +	libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 
              +	libzstd.so.1 => /usr/lib/x86_64-linux-gnu/libzstd.so.1 
              

              etc, and new supported algorithms

              -rocksdb_supported_compression_types	   Snappy,Zlib,LZ4,LZ4HC
              +rocksdb_supported_compression_types   Snappy,Zlib,BZip2,LZ4,LZ4HC,ZSTDNotFinal
              

              Extra dependencies are not a crime since it's a new major release, but it may be worth checking with upstream whether the new "supported compression types" are really supported. For example, ZSTDNotFinal looks disturbing.

            • no provider plugins are currently built on Windows. But there are no supported compression algorithms in 10.6, either, so it's consistent.

            wlad Vladislav Vaintroub added a comment -

            This is, by the way, a good way to identify whether anyone uses InnoDB compression with any non-standard algorithm. Maybe the number of such people is zero, and we will never hear a complaint about backward compatibility. I did not notice too many bugs from snappy users.
            greenman Ian Gilfillan added a comment -

            For those wanting to try it out, this is now available as a preview release. See https://mariadb.org/10-7-preview-feature-provider-plugins/ and https://mariadb.com/kb/en/compression-plugins/.

            elenst Elena Stepanova added a comment - - edited

            As far as I can tell for now, the functionality works as planned and thus can be pushed into 10.7 and released with 10.7.1.

            "As planned" also implies that we are knowingly breaking compatiblity to some extent, hopefully for the greater good in future. We can't know how many users will be affected, it depends on how much the non-zlib compression is currently used.
            Thus every effort should be made to document it and make it clear and visible to users.
            Here are some notes, they are far from complete. To be updated (serg, marko, feel free to edit as you deem fit).

            General server considerations

            If the server upgrade is performed in a usual manner (by replacing existing packages with new ones), all tables compressed with non-zlib compression algorithms will inevitably become unreadable.
            If the user knows in advance which algorithms are in use, the corresponding provider_xxxx packages should be installed right away.
            In any case, after the upgrade is performed, mysql_upgrade must be run – it must be run in any case, but this time it is highly recommended to run it manually, even with the --force option if it claims it has already been done, and to inspect the output – the exit code cannot be relied upon. Alternatively, mysqlcheck --all-databases can be run.
            If there is a problem with compression algorithms, it will manifest as something like

            Warning  : MariaDB tried to use the LZMA compression, but its provider plugin is not loaded
            Error    : Table 'test.t' doesn't exist in engine
            status   : Operation failed
            

            or

            Error    : Table test/t is compressed with lzma, which is not currently loaded. Please load the lzma provider plugin to open the table
            error    : Corrupt
            

            for each affected table. The user needs to pay attention to the mentioned algorithms and install all corresponding provider_xxxx packages.
            After plugin installation, the server will need to be restarted.
            Naturally, until the tables are brought back to order, all incoming traffic must be disabled.

            Uninstallation of providers at runtime should be done with caution. Algorithms remain available until server restart, which can create a false impression that the tables remain functional (not just for users, but for tools like MariaBackup or mysqldump).

            Config considerations

            If the server config has a non-default value of innodb_compression_algorithm, the corresponding provider needs to be installed, preferably simultaneously. Otherwise the upgrade will happen, but the server won't start afterwards.
            Even if the corresponding provider is installed simultaneously with the new server package, the installation can throw intermediate errors, particularly with deb packages:

            Installing new version of config file /etc/mysql/mariadb.conf.d/50-server.cnf ...
            mariadb-extra.socket is a disabled or a static unit, not starting it.
            mariadb-extra.socket is a disabled or a static unit, not starting it.
            Job for mariadb.service failed because the control process exited with error code.
            See "systemctl status mariadb.service" and "journalctl -xe" for details.
            

            It seems to be harmless; the upgrade still continues and eventually succeeds.
            Alternatively, the innodb_compression_algorithm setting can be (at least temporarily) disabled before the upgrade.

            MariaBackup

            MariaBackup in general is only expected to work with a matching version of the server. That is particularly important in this case, because old versions of mariabackup won't be able to deal with the provider libraries.
            With the latest commits in the feature tree I didn't come up with specific faulty scenarios involving MariaBackup, but I expect them to be possible, particularly involving runtime uninstallation of providers.

            Replication

            With upgrade through replication, when the replica is upgraded first, it is important not to enable replication until the compression libraries are sorted out, so that binlog events aren't attempted on currently inaccessible tables. Other than that, since the compression algorithm isn't passed through the binary log, the considerations are the same as for the general server – only the tables which already exist on the replica matter.

            Galera

            Rolling upgrade, or adding a 10.7 node to an older-version-based cluster, can be tricky with physical methods of SST.
            Judging by the new node alone, it is impossible to say in advance which algorithms may be needed, so it is likely that not all necessary providers will be installed in advance.
            When the SST is performed, e.g. via MariaBackup, and the libraries are missing, it will throw some errors, but the SST will still succeed, which means that the node will join the cluster and start processing queries, including queries against the tables which it cannot yet handle. At best (if the new nodes are a minority) this will make the node leave the cluster; if too many nodes are upgraded/added at once, it can probably cause an entire cluster failure. Maybe Galera experts can offer advice on how this is best handled on the user side.

            elenst Elena Stepanova added a comment - - edited As far as I can tell for now, the functionality works as planned and thus can be pushed into 10.7 and released with 10.7.1. "As planned" also implies that we are knowingly breaking compatiblity to some extent, hopefully for the greater good in future. We can't know how many users will be affected, it depends on how much the non-zlib compression is currently used. Thus every effort should be made to document it and make it clear and visible to users. Here are some notes, they are far from complete. To be updated ( serg , marko , feel free to edit as you deem fit). General server considerations If the server upgrade is performed in a usual manner (by replacing existing packages with new ones), all tables compressed with non-zlib compression algorithms will inevitably become unreadable. If the user knows in advance which algorithms are in use, the corresponding provider_xxxx packages should be installed right away. In any case, after the upgrade is performed, mysql_upgrade must be run – it must be run in any case, but this time it is highly recommended to run it manually, even with --force option if it claims it has already been done, and inspect the output – the exit code cannot be relied upon. Alternatively, mysqlcheck --all-databases can be run. If there is a problem with compression algorithms, it will demonstrate as something like Warning : MariaDB tried to use the LZMA compression, but its provider plugin is not loaded Error : Table 'test.t' doesn't exist in engine status : Operation failed or Error : Table test/t is compressed with lzma, which is not currently loaded. Please load the lzma provider plugin to open the table error : Corrupt for each affected table. The user needs to pay attention to the mentioned algorithms and install all corresponding provider_xxxx packages. After plugin installation, the server will need to be restarted. 
Naturally until the tables are brought back to order, all incoming traffic must be disabled . Uninstallation of providers at runtime should be done with caution. Algorithms are still available till server restart, which can create false impression that the tables remain functional (not just to users, but to tools like MariaBackup or mysqldump). Config considerations If the server config has a non-default value of innodb_compression_algorithm, the corresponding provider needs to be installed, preferably simultaneously. Otherwise the upgrade will happen, but the server won't start afterwards. Even if the corresponding provider is installed simultaneously with the new server package, the installation can throw intermediate errors, particularly with deb packages Installing new version of config file /etc/mysql/mariadb.conf.d/50-server.cnf ... mariadb-extra.socket is a disabled or a static unit, not starting it. mariadb-extra.socket is a disabled or a static unit, not starting it. Job for mariadb.service failed because the control process exited with error code. See "systemctl status mariadb.service" and "journalctl -xe" for details. It seems to be harmless, the upgrade still continues and eventually succeeds. Alternatively, innodb_compression_algorithm setting can be (at least temporarily) disabled before the upgrade. MariaBackup MariaBackup in general is only expected to work with a matching version of the server. It is particularly important in this case, because old versions of mariabackup won't be able to deal with the provider libraries. With the latest commits in the feature tree I didn't come up with specifiic faulty scenarios involving MariaBackup, but I expect them to be possible, particularly involving runtime uninstallation of providers. 
            Replication

            With an upgrade through replication, when the replica is upgraded first, it is important not to enable replication until the compression libraries are sorted out, so that binlog events aren't attempted on currently inaccessible tables. Other than that, since the compression algorithm isn't passed through the binary log, the considerations are the same as for the general server case: only tables which already exist on the replica matter.

            Galera

            A rolling upgrade, or adding a 10.7 node to an older-version-based cluster, can be tricky with physical methods of SST. Judging by the new node alone, it is impossible to say in advance which algorithms may be needed, so it is likely that not all necessary providers will be installed in advance. When the SST is performed, e.g. via MariaBackup, and the libraries are missing, it will throw some errors, but the SST will still succeed, which means that the node will join the cluster and start processing queries, including queries against tables which it cannot yet handle. At best (if the new nodes are a minority) this will make the node leave the cluster; if too many nodes are upgraded or added at once, it can probably cause an entire-cluster failure. Maybe Galera experts can offer advice on how this is best handled on the user side.
            mg MG added a comment - - edited

            serg, while this bug did mention the worrisome ZSTDNotFinal stub name used in RocksDB, the zstd GitHub page says that it is "used continuously to compress large amounts of data in multiple formats and use cases. Zstandard is considered safe for production environments."

            I was hoping we would see zstd as an additional MariaDB compression library via this now closed bug. Would it make sense to have this feature request in a new MDEV?


            serg Sergei Golubchik added a comment -

            mg, this issue didn't touch RocksDB at all, only InnoDB and Mroonga, because currently it can only remove dependencies from server plugins, not from external utility executables, and RocksDB has two of them.

            But anyway, what do you mean by "see zstd as an additional MariaDB compression library"? See it where: in RocksDB? In InnoDB? In the protocol?
            mg MG added a comment - - edited

            serg, I did mean for InnoDB and meant to use the phrasing from the blog post accompanying this feature in release notes.


            People

              Assignee: serg Sergei Golubchik
              Reporter: serg Sergei Golubchik
              Votes: 5
              Watchers: 22
