Details

    • Type: Technical task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version: 10.3 (EOL)
    • N/A
    • Component: Tests
    • Labels: None
    • 10.3.1-2

    Description

      Development tasks: MDEV-11371, MDEV-11381
      Development tree (as of July 18th): bb-10.3-svoj
      Tentative patch (as of July 18th): https://github.com/MariaDB/server/commit/79e055f407d34f195e3fde20401f39033dfce51d

      Request before the second round of review (received by email):

      On Thu, Jun 29, 2017 at 12:22:15PM +0400, Sergey Vojtovich wrote:

      Elena: I added a decent test for this feature, but it would be great if you could
      extend it; replication testing in particular is missing.

      Note: Since there is no documentation for the feature, it needs to be explored first.
      Note: Make sure it's documented before the release.
      Note: Have 'innodb' removed from the MDEV-11371 subject.

      Initial exploration:

      • server restart
      • base for virt col
      • storage for dyncol
      • 2nd part of index
      • change column, alter column
      • views
      • analyze, optimize, check
      • create table .. select with an unsupported engine (haven't found an unsupported engine yet)
      • partitions, partition by compressed col
      • timestamps, sets, enums (N/A)
      • zerofill, unsigned (N/A)
      • null/not null
      • default
      • charsets
      • compressed column + row_format=compressed
      • + table compressed
      • + encryption
      • aria, tokudb, rocksdb, connect, heap, federated
      • binary log, replication
      • mysqldump


      A mark in the list above does not mean "tested"; it only means the case appears to be supported and does not fail right away.
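Some of the cases from the exploration list can be sketched in SQL roughly as follows (an illustrative sketch only; table and column names are made up, and the exact syntax follows the tentative patch):

```sql
-- compressed columns with charset and NULL/DEFAULT interaction
CREATE TABLE t1 (
  a BLOB COMPRESSED,
  b VARCHAR(1000) COMPRESSED CHARACTER SET utf8 NOT NULL DEFAULT 'x'
);

-- change column / alter column
ALTER TABLE t1 MODIFY a TEXT COMPRESSED;

-- views over a compressed column
CREATE VIEW v1 AS SELECT a, b FROM t1;

-- column compression combined with table-level compression
-- (ROW_FORMAT=COMPRESSED needs the usual InnoDB file-per-table setup)
CREATE TABLE t2 (a BLOB COMPRESSED) ROW_FORMAT=COMPRESSED;
```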

      Extra MTR tests needed:

      • with partitions
        • PARTITION BY KEY + SELECT .. WHERE col = 'something' etc. – crashes
      • with binlog in row format (easy to check with replication)
        • some strange '\x00foo' shows up, length increases
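A minimal MTR-style replication check for the row-format case might look like this (a sketch only; it uses the standard rpl include files, and the final SELECT is where the spurious '\x00' prefix / length increase should show up):

```sql
--source include/have_binlog_format_row.inc
--source include/master-slave.inc

CREATE TABLE t1 (a VARCHAR(100) COMPRESSED);
INSERT INTO t1 VALUES ('foo');
--sync_slave_with_master
# on the slave: watch for a '\x00foo'-style prefix or a changed length
SELECT a, LENGTH(a) FROM t1;

--source include/rpl_end.inc
```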

          Activity

            Alice Sherepa added a comment (edited)

            It works for 254, but not for 255:

            connection slave;
            CREATE TABLE t1  (a tinyblob COMPRESSED);
            connection master;
            CREATE TABLE IF NOT EXISTS t1 (a tinyblob COMPRESSED);
            INSERT INTO t1(a)  VALUES(REPEAT('a',255));
            main.1dd 'mix'                           [ fail ]
                    Test ended at 2017-08-29 12:45:34
             
            CURRENT_TEST: main.1dd
            mysqltest: At line 15: query 'INSERT INTO t1(a)  VALUES(REPEAT('a',255))' failed: 1406: Data too long for column 'a' at row 1
            

            It indeed does not show an error with blobs while ALL_NON_LOSSY is set;
            with varchars there is an error. In the case of varchar(1000) -> varchar(1000) compressed:

            Last_Errno	1677
            Last_Error	Column 0 of table 'test.t2' cannot be converted from type 'varchar(10001) compressed' to type 'varchar(9999) /*!100301 COMPRESS' 
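
            For context (editor's note, a sketch): the lossy/non-lossy replication behaviour mentioned above is controlled by the slave_type_conversions variable on the slave, so both modes can be exercised like this:

            ```sql
            -- on the slave: allow non-lossy type conversions
            -- (the blob mismatch is then silently accepted)
            SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';

            -- on the slave: disallow conversions, so type mismatches
            -- fail with errors like the 1677 shown above
            SET GLOBAL slave_type_conversions = '';
            ```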
            


            Sergey Vojtovich added a comment

            It happens on master, which is kind of expected. The maximum data length for compressed blobs is one byte shorter than for regular blobs. There's probably a way to fix it, but it was decided not to bother with this in the first implementation.
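
            To illustrate the explanation above (editor's sketch; the one-byte difference is the compression header described in the comment, and the error code is the 1406 from the test log):

            ```sql
            CREATE TABLE t1 (a TINYBLOB COMPRESSED);
            INSERT INTO t1 VALUES (REPEAT('a', 254));  -- fits: 254 data bytes + 1 header byte = 255
            INSERT INTO t1 VALUES (REPEAT('a', 255));  -- fails with ER_DATA_TOO_LONG (1406)
            ```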

            Sergey Vojtovich added a comment

            If you want to try a 255-byte compressed blob, you should do something like this:

            SET column_compression_threshold=255;
            INSERT INTO t1(a)  VALUES(REPEAT('a',254));
            


            Sergey Vojtovich added a comment

            Are you testing the recent bb-10.3-MDEV-11371? VARCHAR should have been fixed there.
            Alice Sherepa added a comment, attaching: column_compression_parts.result, column_compression_parts.test, column_compression_rpl.inc, column_compression_rpl.result, column_compression_rpl.test

            People

              Alice Sherepa
              Elena Stepanova
              Votes: 0
              Watchers: 3

