Details

    • Technical task
    • Status: Closed
    • Major
    • Resolution: Fixed
    • 10.5.2
    • Optimizer

    Description

      This task is a part of MDEV-6915, where we would like to pack the values of the sort key inside the sort buffer for each record.

      EDIT: this is the spec of what got implemented:

      Contents:

      1. Background
      1.1 Implementation details
      1.1.1 Why fields and items are treated differently
      2. Solution : Packed Sort Keys
      2.1 Packed key format
      2.2 Which format to use
      3. Special cases
      3.1 Handling very long strings
      3.2 Handling for long binary strings
      3.3 Handling very long strings with Packed sort keys
      4. Sort key columns in addon_fields

      1. Background

      Before this MDEV, filesort() sorted the data using mem-comparable keys.

      That is, if we wanted to sort by

        ORDER BY col1, col2, ... colN
      

      then for each row the filesort code would generate one "Sort Key", and then sort the rows by their Sort Keys.

      The Sort Keys are mem-comparable (that is, are compared by memcmp()) and they are fixed size: the sort key has the same length regardless of what value it represents. This causes inefficient memory usage.
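      As a toy illustration of that overhead (not MariaDB code; all names here are made up for the example): with a fixed-size mem-comparable key, a VARCHAR(100) column always occupies its maximum key length in the sort buffer, no matter how short the actual value is, whereas a packed key pays only a small length prefix plus the actual bytes.

```cpp
#include <cstddef>

// Toy model of sort-buffer usage per key part; names are illustrative.

// A fixed-size mem-comparable key always occupies the declared maximum.
static size_t fixed_key_bytes(size_t declared_max, size_t /*actual_len*/)
{
  return declared_max;
}

// A packed key stores a small length prefix plus only the actual bytes.
static size_t packed_key_bytes(size_t actual_len)
{
  return 2 /* length prefix */ + actual_len;
}
```

      For a 3-character value in a VARCHAR(100) column this is 100 bytes versus 5 (assuming a single-byte charset); the gap widens further for multi-byte collations, whose weight strings can be several times longer than the value itself.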

      1.1 Implementation details

      filesort.cc: make_sortkey() is the function that produces a sort key from a record.

      The function treats Field and Item objects differently.

      class Field has

        void make_sort_key(uchar *buff, uint length);
        virtual void sort_string(uchar *buff,uint length)=0;
      

      sort_string produces a mem-comparable image of the field value; each datatype provides its own implementation. make_sort_key is a non-virtual function that handles the encoding of SQL NULL values.

      For Items, Type_handler has a virtual function:

        virtual void make_sort_key(uchar *to, Item *item,
                                   const SORT_FIELD_ATTR *sort_field,
                                   Sort_param *param) const= 0;
      

      which the various datatype handlers override.

      1.1.1 Why fields and items are treated differently

      My (Sergey P) guess is as follows: if we use fields we get more compact sort keys. For example:

      create table t7(a int);
      select * from t7 order by a; -- Q1
      select * from t7 order by a+1; -- Q2
      

      Q1 uses a Field. It's a Field_int, so the sort key is 4 bytes long.
      Q2 uses an Item. Its type handler is Type_handler_int_result, and the sort key is 8 bytes long.

      2. Solution : Packed Sort Keys

      Note that one can have mem-comparable keys that are not fixed-size. MyRocks, for example, uses such an encoding.

      However for this MDEV it was decided to store the original (non-mem-comparable) values instead, and use a datatype-aware key comparison function. (See the issue comments for the reasoning)

      2.1 Packed key format

      The keys are stored in a new variable-size data format called "packed".

      The format is as follows:

        sort_key_length packed_value_1 packed_value_2 ...
      

      sort_key_length is the length of the whole key.
      Each packed value is encoded as follows:

        <null_byte=0>  // This is an SQL NULL
        [<null_byte=1>] <packed_value>  // this is a non-NULL value
      

      null_byte is present if the field/item is NULLable.
      SQL NULL is encoded as just one NULL-indicator byte. The value itself is omitted.

      The format of the packed_value depends on the datatype. For "non-packable" datatypes it is just their mem-comparable form, as before.

      The "packable" datatypes are currently variable-length strings and the packed format for them is (for binary blobs, see a note below):

      <length> <string>
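      A minimal sketch of this layout (illustrative only — the server's actual encoding lives in filesort.cc and uses different helpers and widths): a total-length prefix, then per nullable component a NULL-indicator byte and, for non-NULL values, a length followed by the string bytes.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical sketch of the packed key layout described above:
//   <sort_key_length> <null_byte>[<length><string>] ...
// Field widths (4-byte prefix, 2-byte lengths) are assumptions for the demo.

// Append one nullable string component in packed form.
static void pack_string_component(std::vector<uint8_t> &key,
                                  const std::string *val)
{
  if (!val)
  {
    key.push_back(0);             // null_byte = 0: SQL NULL, value omitted
    return;
  }
  key.push_back(1);               // null_byte = 1: a non-NULL value follows
  uint16_t len= (uint16_t) val->size();
  key.push_back((uint8_t)(len & 0xff));   // 2-byte little-endian length
  key.push_back((uint8_t)(len >> 8));
  key.insert(key.end(), val->begin(), val->end());
}

// Build a whole key: length prefix first, then each packed component.
static std::vector<uint8_t>
pack_sort_key(const std::vector<const std::string*> &cols)
{
  std::vector<uint8_t> key(4, 0);         // reserve room for the prefix
  for (const std::string *c : cols)
    pack_string_component(key, c);
  uint32_t total= (uint32_t) key.size();
  memcpy(key.data(), &total, 4);          // sort_key_length
  return key;
}
```

      A NULL component costs a single byte; a non-NULL 3-byte string costs 1 + 2 + 3 = 6 bytes, regardless of the column's declared maximum length.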
      

      2.2 Which format to use

      The advantage of Packed Key Format is potential space savings for variable-length fields.
      The disadvantages are:

      • it may actually take more space, because of sort_key_length and length fields.
      • The comparison function is more expensive.

      Currently the logic is: use the Packed Key Format if it would save 20 or more bytes, measured by constructing a sort key in which every packable component is an empty string.
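      The decision rule can be sketched as follows (the 20-byte threshold is from the text; the function name and signature are hypothetical):

```cpp
#include <cstddef>

// Sketch of the stated heuristic: prefer the packed format only when a
// key built with empty strings for every packable component would be at
// least 20 bytes shorter than the fixed-size key.
static bool use_packed_key_format(size_t fixed_key_len,
                                  size_t packed_empty_key_len)
{
  const size_t SAVINGS_THRESHOLD= 20;   // bytes saved in the worst case
  return fixed_key_len >= packed_empty_key_len + SAVINGS_THRESHOLD;
}
```

      This guards against the disadvantages above: when the guaranteed saving is small, the per-value length fields and the more expensive comparison are not worth it.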

      3. Special cases

      3.1 Handling very long strings

      Before this MDEV, the size of the sort key was limited by the @@max_sort_length variable.
      It is defined as:

      The number of bytes to use when sorting data values. The server uses only the first max_sort_length bytes of each value and ignores the rest.

      3.2 Handling for long binary strings

      Long binary strings receive special treatment. A sort key for a long binary string is truncated at max_sort_length bytes as described above, but then a "suffix" is appended which contains the total length of the value before truncation.

      3.3 Handling very long strings with Packed sort keys

      Truncating a multi-byte string at N bytes is not safe, because the cut can fall in the middle of a character.
      One is tempted to solve this by discarding the partial character, but that is also not a good idea: in some collations multiple characters may produce one weight (this is called "contraction").

      This combination of circumstances:

      • The string value is very long, so truncation is necessary
      • The collation is "complex", so truncation is dangerous

      is deemed to be relatively rare so it was decided to just use the non-packed sort keys in this case.

      4. Sort key columns in addon_fields

      Currently, each sort key column is actually stored twice:
      1. as part of the sort key
      2. in the addon_fields
      This made total sense when the sort key stored the mem-comparable image (from which one cannot restore the original value in the general case). But since we now store the original value, we could also remove it from the addon_fields and save further space.
      This is a good idea but it is outside of the scope of this MDEV.


          Activity

            varun Varun Gupta (Inactive) added a comment - - edited

            Implementation

            For each record the sort key would look like

            <sort_key_length><null_bytes><field1_length><field1_data><field2_length><field2_data>......<fieldN_length><fieldN_data>
            

            For fixed-length columns there is no need to store the length, but they will be packed too if the value IS NULL (same as packed addon fields)

            A new compare function needs to be added that compares two dynamic-length sort keys and returns the comparison result accordingly.

            varun Varun Gupta (Inactive) added a comment - - edited

            Structure to hold SORT ITEMS

            This is currently done in SORT_FIELD; we may need to add additional parameters.
            The parameters that may be needed are:

            • null_bit : to know if the field is NULL;
            • null_offset : offset from the start of the sort key (after sort length)
            • length_bytes: bytes required to store the length of a part of the sort key

            max_sort_length [user controllable variable]

            The number of bytes to use when sorting data values. The server uses only the first max_sort_length bytes of each value and ignores the rest

            Range: [2^2, 2^22]
            Representing lengths in this range needs 22 bits, so at most 4 length bytes would be needed for any type.

            For VARCHAR and CHAR, length bytes of 1 and 2 would be fine;
            the higher values are needed for BLOB and TEXT columns.

            New compare function to compare sort keys

            In the new compare function we need to take into account the dynamic nature of the sort keys due to packing.
            Comparison therefore happens column by column within the sort key: for each column we memcmp() up to the minimum of the two value lengths. If that comparison is decisive, we return its result; if the compared prefixes are equal, the shorter value sorts first; only when both the bytes and the lengths are equal do we move on to the next column.
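            The per-column step can be sketched like this (a binary-only toy; the real code dispatches to collation-aware comparison such as strnncollsp() per datatype):

```cpp
#include <cstddef>
#include <cstring>

// Sketch of comparing one packed string column from two sort keys.
// Compare up to the shorter length with memcmp(); on an equal prefix,
// the shorter value sorts first. Names are illustrative.
static int compare_packed_strings(const unsigned char *a, size_t a_len,
                                  const unsigned char *b, size_t b_len)
{
  size_t min_len= a_len < b_len ? a_len : b_len;
  int cmp= memcmp(a, b, min_len);
  if (cmp != 0)
    return cmp;
  // Equal prefix: shorter string sorts first; equal lengths tie,
  // and a tie means the caller moves on to the next column.
  return (a_len < b_len) ? -1 : (a_len > b_len ? 1 : 0);
}
```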

            varun Varun Gupta (Inactive) added a comment - - edited

            A suggestion made by psergey was that we could have the format for sort keys as:

            <sort_key_length><field1_is_null><field1_length><field1_data><field2_is_null><field2_length><field2_data>......<fieldN_is_null><fieldN_length><fieldN_data>
            

            Here the NULL byte is stored together with its field, rather than in a common block at the start of the key.

            varun Varun Gupta (Inactive) added a comment - - edited

            Here are some discussion points which I had with bar

            He still suggests storing the original values, for these reasons:

            In the case of the Unicode collation algorithm, original values use much less space than their mem-comparable keys.
            In the case of multi-level collations, original values use far less space than their keys.

            Case 1:
            MariaDB [test]> SELECT hex(WEIGHT_STRING(_utf8'a' COLLATE utf8_unicode_ci));
            +------------------------------------------------------+
            | hex(WEIGHT_STRING(_utf8'a' COLLATE utf8_unicode_ci)) |
            +------------------------------------------------------+
            | 0E33                                                 |
            +------------------------------------------------------+
             
            One original byte produces two weight bytes.
             
             
            Case 2:
             
            MariaDB [test]> SELECT hex(WEIGHT_STRING(_utf8'a' COLLATE utf8_thai_520_w2));
            +-------------------------------------------------------+
            | hex(WEIGHT_STRING(_utf8'a' COLLATE utf8_thai_520_w2)) |
            +-------------------------------------------------------+
            | 120F0020                                              |
            +-------------------------------------------------------+
            One original value byte produces 4 weight bytes.
             
            Case 3:
             
            MariaDB [test]> SELECT hex(WEIGHT_STRING(_utf8'ß' COLLATE utf8_thai_520_w2));
            +--------------------------------------------------------+
            | hex(WEIGHT_STRING(_utf8'ß' COLLATE utf8_thai_520_w2))  |
            +--------------------------------------------------------+
            | 14101410002001590020                                   |
            +--------------------------------------------------------+
             
            Case 4:
             
            MariaDB [test]> SELECT length(WEIGHT_STRING(_ucs2 0x321D COLLATE ucs2_thai_520_w2));
            +--------------------------------------------------------------+
            | length(WEIGHT_STRING(_ucs2 0x321D COLLATE ucs2_thai_520_w2)) |
            +--------------------------------------------------------------+
            |                                                           16 |
            +--------------------------------------------------------------+
             
            Two original value bytes produce 16 weight bytes.
            

            When using packed sort keys, the compare function is strnncollsp().
            strnncollsp() is slower than memcmp(), but it is much faster than flushing the disk cache. Storing original values requires less space than the strnxfrm() form, so we would do fewer disk writes. In the case of ASCII values, twice as much data will fit into the disk cache when using original values!


            varun Varun Gupta (Inactive) added a comment

            The branch where the code is rebased on 10.5 is 10.5-mdev6915-ext

            Also the patch can be found here:
            http://lists.askmonty.org/pipermail/commits/2020-February/014160.html

            psergei Sergei Petrunia added a comment

            It turns out this code changes the comparison function even for fixed-size columns.

            I'm debugging this example:

            create table t5 (a bigint, b bigint, c bigint); 
            insert into t5 select seq, seq, seq  from seq_1_to_10000;
            analyze select * from t5 order by a,b,c;
            

            and on stock 10.5 I see

              #0  my_qsort2 (base_ptr=0x7fff9c0921c8, count=668, size=8, cmp=0x555556a22cf3 <native_compare>, cmp_argument=0x7ffff007bd40) at /home/psergey/dev-git/10.5/mysys/mf_qsort.c:106
              #1  0x000055555634e3e0 in Filesort_buffer::sort_buffer (this=0x7fff9c07fd40, param=0x7ffff007c0b0, count=668) at /home/psergey/dev-git/10.5/sql/filesort_utils.cc:187
            

            That is, sorting uses memcmp for key comparison:

            static int native_compare(size_t *length, unsigned char **a, unsigned char **b)
            {
              return memcmp(*a, *b, *length);
            }
            

            Debugging the same example on this patch I see it is using
            compare_packed_keys() which does per-key-part comparison:

              #1  0x00005555560ff4c4 in Field::compare_packed_keys (this=0x7fffa001f120, a=0x7fffa00a7b22 "\001\200", a_len=0x7ffff02646f0, b=0x7fffa009d6a6 "\001\200", b_len=0x7ffff02646f8, sortorder=0x7fffa0098c00) at /home/psergey/dev-git2/10.5-packed-sort-keys/sql/field.cc:1199
              #2  0x0000555556136230 in compare_packed_keys (sort_param=0x7ffff0265040, a_ptr=0x7fffa00ab140, b_ptr=0x7fffa00a9a90) at /home/psergey/dev-git2/10.5-packed-sort-keys/sql/filesort.cc:2968
              #3  0x0000555556a79690 in my_qsort2 (base_ptr=0x7fffa00a96c8, count=969, size=8, cmp=0x555556136177 <compare_packed_keys(void*, unsigned char**, unsigned char**)>, cmp_argument=0x7ffff0265040) at /home/psergey/dev-git2/10.5-packed-sort-keys/mysys/mf_qsort.c:148
              #4  0x00005555563726e7 in Filesort_buffer::sort_buffer (this=0x7fffa0068c40, param=0x7ffff0265040, count=969) at /home/psergey/dev-git2/10.5-packed-sort-keys/sql/filesort_utils.cc:192
            


            varun Varun Gupta (Inactive) added a comment

            The latest tree is 10.5-mdev6915-ext

            igor Igor Babaev (Inactive) added a comment

            Varun,
            1. How do we handle comparison for the strings mentioned in 3.3 (see the description) in 10.4?
            2. Why is it hard to remove sorted fields from addon fields?

            varun Varun Gupta (Inactive) added a comment

            igor

            Ans1: In 10.4 we use the strxfrm form (mem-comparable sort keys are formed), so the weights can be cut at any byte: the mem-comparable form ensures that byte-wise ordering between two keys is maintained.

            Ans2: It should not be hard to remove sorted fields from addon fields, but we need to make sure that the entire value is in the sort_field, that is, that there is no truncation (because of max_sort_length).

            igor Igor Babaev (Inactive) added a comment

            Varun,
            I tried the following example:

            set storage_engine=myisam;
            source include/dbt3_s001.inc;
            alter table customer change c_comment c_comment varchar(255);
            analyze table customer, orders;
            select c_comment from orders, customer where c_custkey=o_custkey order by c_comment;
            set sort_buffer_size=16192;
            select c_comment from orders, customer where c_custkey=o_custkey order by c_comment;
            set sort_buffer_size=default;
            select c_comment from orders, customer where c_custkey=o_custkey order by c_comment;
            

            For the above test case:
            After the first execution of the query I got a correct result set.
            After the second execution of the query I got an empty set.
            After the third execution of the query I got a segmentation fault.


            psergei Sergei Petrunia added a comment

            This is not reproducible with the current code.

            psergei Sergei Petrunia added a comment

            The latest patch is Ok to push.

            varun Varun Gupta (Inactive) added a comment

            Pushed to 10.5

            commit b753ac066bc26acda9deb707a31c112f1bbf9ec2
            Author: Varun Gupta <varun.gupta@mariadb.com>
            Date:   Tue Mar 10 04:56:38 2020 +0530
             
                MDEV-21580: Allow packed sort keys in sort buffer
                
                This task deals with packing the sort key inside the sort buffer, which  would
                lead to efficient usage of the memory allocated for the sort buffer.
            


            People

              varun Varun Gupta (Inactive)
              varun Varun Gupta (Inactive)