  MariaDB Server / MDEV-11846

ERROR 1114 (HY000) table full when performing GROUP BY

Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.1.17, 10.0.27, 5.5 (EOL), 10.0 (EOL), 10.1 (EOL), 10.2 (EOL)
    • Fix Version/s: 10.1.27
    • Component/s: None
    • Environment: GNU/Linux
    • Sprint: 10.2.6-3, 10.3.1-1

    Description

      GROUP BY queries from MyISAM tables consistently fail with 'TABLE FULL' for some lengths of the fields being grouped, but succeed for longer lengths. This does not appear to be hitting any operating system limits.

      For example, take a source MyISAM table 'test.input' of 10m records, where field a is an int(11) auto-increment primary key (values 1-10,000,000) and field b is a randomly filled varchar(128) not null. The following statements fail, while any other length (1-127) runs OK:

        create table test.test as select a, RIGHT(b,62) as b, count(1) as x from test.input group by 1, 2;
        create table test.test as select a, RIGHT(b,63) as b, count(1) as x from test.input group by 1, 2;
        create table test.test as select a, RIGHT(b,80) as b, count(1) as x from test.input group by 1, 2;
        create table test.test as select a, RIGHT(b,81) as b, count(1) as x from test.input group by 1, 2;
      

      Replacing the length with @len causes the failure to occur with values of @len that are 1 lower than the lengths above, i.e. @len=61, 62, 79 and 80.
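
      A minimal sketch of one way the length can be parameterised; the exact statement is not shown above, so substituting the user variable directly into the expression is an assumption:

        set @len := 61;
        create table test.test as
          select a, RIGHT(b, @len) as b, count(1) as x
          from test.input
          group by 1, 2;  -- fails with ERROR 1114 for @len = 61, 62, 79 or 80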

      While the query is running, a pair of files (.MAI, .MAD) is created in /tmp; the query fails at the point where the .MAI file is about to exceed 512Mb. For other lengths the .MAI file exceeds 512Mb without failing. /tmp has over 404Gb of free space available.

      None of the conditions documented at http://dev.mysql.com/doc/refman/5.5/en/table-size-limit.html apply in this case - plenty of disk space, query runs OK with longer values being grouped, not using InnoDB, file size is suitable for pointer size, etc.

      The solution listed elsewhere, specifying MAX_ROWS, applies only when explicitly creating an Aria table; here MariaDB is creating the table implicitly, so MAX_ROWS cannot be specified. In any case, 10m rows is not excessive.
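
      For reference, MAX_ROWS can only be supplied on an explicit CREATE TABLE; a sketch of what that looks like for an Aria table (the table name and value here are illustrative only):

        create table test.explicit_aria (
          a integer not null,
          b varchar(128) not null,
          x bigint not null
        ) engine=Aria max_rows=100000000;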

      Table input:

                 Name: input
               Engine: MyISAM
              Version: 10
           Row_format: Dynamic
                 Rows: 10000000
       Avg_row_length: 140
          Data_length: 1400000000
      Max_data_length: 281474976710655
         Index_length: 102784000
            Data_free: 0
       Auto_increment: 10000001
          Create_time: 2017-01-18 11:53:34
          Update_time: 2017-01-18 11:54:14
           Check_time: NULL
            Collation: latin1_swedish_ci
             Checksum: NULL
       Create_options:
              Comment:
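
      (The vertical listing above looks like the output of something along these lines; the exact command used is an assumption:)

        show table status from test like 'input'\G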
      

      Table input created and populated as:

      drop table if exists test.input;
      create table test.input (
        a integer not null auto_increment primary key,
        b varchar(128) not null);
       
      insert into test.input (b) values(REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16));
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input; -- 2^23 = 8388608 rows so far
       
      insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input where a<=1611392; -- 10m records
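
      A quick sanity check (sketch only) that the population matches the figures quoted above:

        select count(*), min(a), max(a) from test.input;
        -- expected: 10000000, 1, 10000000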
      

      Selected settings:

        aria_block_size 8192
        aria_log_file_size 1073741824
        aria_max_sort_file_size 9223372036853727232
        aria_pagecache_buffer_size 134217728
        aria_sort_buffer_size 268434432
        aria_used_for_temp_tables ON
        character_set_client utf8
        character_set_connection utf8
        character_set_database latin1
        character_set_results utf8
        collation_connection latin1_swedish_ci
        collation_server latin1_swedish_ci
        default_storage_engine MyISAM
        default_tmp_storage_engine (blank)
        innodb_data_file_path ibdata1:12M:autoextend (NB: Not using Innodb)
        key_buffer_size 4294967296
        max_heap_table_size 16777216
        myisam_block_size 1024
        myisam_sort_buffer_size 134217728
        sort_buffer_size 2097152
        sql_big_selects ON
        version 10.0.27-MariaDB (server A), 10.1.17-MariaDB (server B)
        version_compile_machine x86_64
        version_compile_os Linux
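
      (For reference, values like the above can be read back with SHOW VARIABLES, e.g.:)

        show global variables like 'aria%';
        show global variables like 'myisam%';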
      

      Attachments

        Activity

          Saysell Philip created issue -
          Saysell Philip made changes -
          Description
          Saysell Philip made changes -
          Description
          elenst Elena Stepanova made changes -
          Description
          elenst Elena Stepanova made changes -
          Status Open [ 1 ] Confirmed [ 10101 ]

          elenst Elena Stepanova added a comment -

          Thanks for the report and test case.

          CREATE TABLE .. AS is not necessary, a subquery does the same.
          MTR-like test case, pretty much the same as in the description:

          drop table if exists test.input;
          create table test.input (
            a integer not null auto_increment primary key,
            b varchar(128) not null) engine=MyISAM;
           
          insert into test.input (b) values(REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16));
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input;
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input; # 2^23 = 8388608 rows so far
           
          insert into test.input (b) select REPEAT(LPAD(CAST(RAND()*100000000 AS INTEGER), 8, '0'),16) from test.input where a<=1611392;
           
          select count(*) from ( select a, RIGHT(b,64) as b, count(1) as x from test.input group by 1, 2 ) ss;
          select count(*) from ( select a, RIGHT(b,62) as b, count(1) as x from test.input group by 1, 2 ) ss;
           
          drop table test.input;
          

          elenst Elena Stepanova made changes -
          Fix Version/s 10.1 [ 16100 ]
          Fix Version/s 10.2 [ 14601 ]
          Affects Version/s 5.5 [ 15800 ]
          Affects Version/s 10.0 [ 16000 ]
          Affects Version/s 10.1 [ 16100 ]
          Affects Version/s 10.2 [ 14601 ]
          psergei Sergei Petrunia made changes -
          Assignee Sergei Petrunia [ psergey ]
          psergei Sergei Petrunia made changes -
          Assignee Sergei Petrunia [ psergey ] Varun Gupta [ varun ]
          ratzpo Rasmus Johansson (Inactive) made changes -
          Sprint 10.2.6-3 [ 150 ]
          varun Varun Gupta (Inactive) made changes -
          Status Confirmed [ 10101 ] In Progress [ 3 ]

          varun Varun Gupta (Inactive) added a comment -

          explain
          select count(*) from ( select a, RIGHT(b,64) as b, count(1) as x from test.input group by 1, 2 ) ss;
          id	select_type	table	type	possible_keys	key	key_len	ref	rows	Extra
          1	PRIMARY	<derived2>	ALL	NULL	NULL	NULL	NULL	10000000	
          2	DERIVED	input	ALL	NULL	NULL	NULL	NULL	10000000	Using temporary; Using filesort
          
          


          psergei Sergei Petrunia added a comment -

          When it gets the error, which table is it writing to? Is it

          • the temporary table for doing GROUP BY , or
          • the temporary table for doing derived table
            ?
          psergei Sergei Petrunia added a comment

          For the table that the error occurs on,
          does create_tmp_table() compute a correct value of share->max_rows?

          psergei Sergei Petrunia added a comment

          When we get the error, we are actually writing to the temporary table used for the GROUP BY:

          (gdb) p join_tab->table->alias
          $19 = {Ptr = 0x7fffd400c830 "input", str_length = 5, Alloced_length = 8, extra_alloc = 0, alloced = true, thread_specific = false, str_charset = 0x555556ecdba0 <my_charset_bin>}
          

          varun Varun Gupta (Inactive) added a comment

          For the derived table ss:

          In the function create_tmp_table:
          p share->max_rows
          $17 = 13797

          For the work table, i.e. the one used for the GROUP BY:

          In the function create_tmp_table:
          p share->max_rows
          $24 = 13617

          varun Varun Gupta (Inactive) added a comment
          varun Varun Gupta (Inactive) made changes -
          Sprint 10.2.6-3 [ 150 ] 10.2.6-3, 10.3.1-1 [ 150, 164 ]
          varun Varun Gupta (Inactive) added a comment - - edited

          (lldb) p share->base.margin_key_file_length
          (my_off_t) $436 = 536731648

          (lldb) p share->state.state.key_file_length
          (my_off_t) $438 = 536731648

           
            if (share->state.state.key_file_length >= share->base.margin_key_file_length)
            {
              my_errno=HA_ERR_INDEX_FILE_FULL;
              goto err2;
            }
          

          This is the place in the function maria_write where we hit the error condition that the index file is full.

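          A minimal standalone sketch (my own, not MariaDB source) that plugs the two values printed above into the same condition; the MiB conversion is added only to show that the limit sits just under 512 MiB:

            #include <stdio.h>

            int main(void)
            {
              /* Both values are the ones printed by the debugger above */
              unsigned long long key_file_length        = 536731648ULL;
              unsigned long long margin_key_file_length = 536731648ULL;

              /* Same condition as the maria_write() fragment quoted above: once the
                 index file reaches the margin, the row write is rejected with
                 HA_ERR_INDEX_FILE_FULL. */
              if (key_file_length >= margin_key_file_length)
                printf("index file considered full at %.3f MiB\n",
                       (double) key_file_length / (1024 * 1024));   /* 511.875 MiB */
              return 0;
            }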
          varun Varun Gupta (Inactive) added a comment - - edited

          After further investigation I see there is an overflow

            tmp= (tot_length + maria_block_size * keys *
          	MARIA_INDEX_BLOCK_MARGIN) / maria_block_size;
          

          (lldb) p tot_length
          (ulonglong) $31 = 18446744073709518848
           
          (lldb) p tot_length + maria_block_size * keys * MARIA_INDEX_BLOCK_MARGIN
          (unsigned long long) $30 = 98304
           
          (lldb) p tot_length /maria_block_size
          (unsigned long long) $28 = 2251799813685244
          

          varun Varun Gupta (Inactive) made changes -
          Assignee Varun Gupta [ varun ] Sergei Petrunia [ psergey ]
          Status In Progress [ 3 ] In Review [ 10002 ]

          So I am trying to reproduce the overrun.

          I can easily reproduce this part:

          @@ -746,8 +763,7 @@ int maria_create(const char *name, enum data_file_type datafile_type,
               Get estimate for index file length (this may be wrong for FT keys)
               This is used for pointers to other key pages.
             */
          -  tmp= (tot_length + maria_block_size * keys *
          -	MARIA_INDEX_BLOCK_MARGIN) / maria_block_size;
          +  tmp= (tot_length / maria_block_size + keys * MARIA_INDEX_BLOCK_MARGIN);
          

          I get there and I see:

          (gdb) p/x ((ulonglong)-1)
            $92 = 0xffffffffffffffff
          (gdb) p/x tot_length
            $93 = 0xffffffffffff6000
          (gdb) p/x maria_block_size *keys * MARIA_INDEX_BLOCK_MARGIN
            $94 =            0x20000
          

          Oh, about to overrun...

          (gdb) p/x tot_length + maria_block_size *keys * MARIA_INDEX_BLOCK_MARGIN
            $95 = 0x16000
          

          Yes, here it is.

          psergei Sergei Petrunia added a comment
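          To make the effect of the reordering concrete, here is a small standalone sketch (my own) using the values from the gdb session above: tot_length = 0xffffffffffff6000 and 0x20000 for maria_block_size * keys * MARIA_INDEX_BLOCK_MARGIN. Note that tot_length is itself already a wrapped value at this point; the sketch only shows that the patched form does not wrap a second time:

            #include <stdio.h>

            int main(void)
            {
              unsigned long long tot_length       = 0xffffffffffff6000ULL; /* from gdb above */
              unsigned long long maria_block_size = 8192;
              unsigned long long margin_term      = 0x20000; /* maria_block_size * keys * MARIA_INDEX_BLOCK_MARGIN */

              /* Before the patch: the addition wraps around 2^64 (the 0x16000 above),
                 so the index-length estimate collapses to a handful of blocks. */
              unsigned long long before = (tot_length + margin_term) / maria_block_size;

              /* After the patch: divide first, then add keys * MARIA_INDEX_BLOCK_MARGIN
                 (0x20000 / 8192 == 16 in this run); no second wraparound here. */
              unsigned long long after = tot_length / maria_block_size + margin_term / maria_block_size;

              printf("before=%llu after=%llu\n", before, after); /* before=11, after=2251799813685259 */
              return 0;
            }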

          But I'm not able to reproduce the overrun that the first portion of the patch is supposed to fix.

          Code before the patch:

              if (length > max_key_length)
                max_key_length= length;
              tot_length+= ((max_rows/(ulong) (((uint) maria_block_size -
                                                MAX_KEYPAGE_HEADER_SIZE -
                                                KEYPAGE_CHECKSUM_SIZE)/
                                               (length*2))) *
                            maria_block_size);
          

          Debugging:

          (gdb) p/x ((ulonglong)-1)
            $84 = 0xffffffffffffffff
          (gdb) p/x max_rows
            $85 = 0x6dfffffffffff24
           
          (gdb) p ((uint) maria_block_size - MAX_KEYPAGE_HEADER_SIZE - KEYPAGE_CHECKSUM_SIZE)/ (length*2)
            $86 = 44
          (gdb) p maria_block_size
            $80 = 8192
          

          So, we first divide max_rows by 44 and then multiply by 8192.

          This is approximately the same as multiplying it by:

          (gdb) p maria_block_size / (((uint) maria_block_size - MAX_KEYPAGE_HEADER_SIZE - KEYPAGE_CHECKSUM_SIZE)/ (length*2))
            $87 = 186
          

          Trying to do this multiplication manually:

          (gdb) p/x max_rows * 186
            $88 = 0xfebfffffffff6028
          

          No overrun.
          Step over the computation, and I see:

          (gdb) p/x tot_length
            $58 = 0xffffffffffff8000
          

          or, depending on which of the queries I try*

          (gdb) p/x tot_length
            $90 = 0xffffffffffff6000
          

          * I tried all queries from the MDEV summary text, with RIGHT(b,62), RIGHT(b,63), RIGHT(b,80), RIGHT(b,81).

          Can't observe the overrun for any of them. varun, please advise how to reproduce the overrun that the first portion of the patch is fixing.

          psergei Sergei Petrunia added a comment
          psergei Sergei Petrunia made changes -
          Assignee Sergei Petrunia [ psergey ] Varun Gupta [ varun ]

          With RIGHT(b,82) you would be seeing the overflow for the other part of the patch.

          varun Varun Gupta (Inactive) added a comment

          varun, I am debugging the query with RIGHT(b,82) and I still don't see any overruns:

          MariaDB [test]> create table test.test as select a, RIGHT(b,82) as b, count(1) as x from test.input group by 1, 2;
          

          (gdb) p/x ~0ull
            $119 = 0xffffffffffffffff
          (gdb) p/x max_rows
            $120 = 0x6dfffffffffff24
          (gdb) print maria_block_size
            $121 = 8192
          (gdb) p ((uint) maria_block_size - MAX_KEYPAGE_HEADER_SIZE - KEYPAGE_CHECKSUM_SIZE)/ (length*2)
            $122 = 43
          (gdb) p maria_block_size
            $123 = 8192
          

          So it's like multiplying max_rows by:

          (gdb) p maria_block_size / (((uint) maria_block_size - MAX_KEYPAGE_HEADER_SIZE - KEYPAGE_CHECKSUM_SIZE)/ (length*2))
            $124 = 190
          (gdb) p/x max_rows * 190
            $125 = 0x1a3fffffffff5cb8
          

          Stepping through, I get:

          (gdb) p/x tot_length
            $126 = 0x1dc47711dc46c000
          (gdb)  p/x ((ulonglong)-1)
            $127 = 0xffffffffffffffff
          

          Please point out where exactly you see the overrun.

          psergei Sergei Petrunia added a comment
          varun Varun Gupta (Inactive) added a comment - - edited

          Sergei, the overrun happens with RIGHT(b,82)
           
          (lldb) p max_rows
          (ulonglong) $3 = 495395959010754340
           
          (lldb) p maria_block_size
          (ulong) $4 = 8192
           
          (lldb) p length
          (uint) $5 = 93
           
          These are constants:
          MAX_KEYPAGE_HEADER_SIZE= 21
          KEYPAGE_CHECKSUM_SIZE= 4
           
          (((uint) maria_block_size -  MAX_KEYPAGE_HEADER_SIZE - KEYPAGE_CHECKSUM_SIZE)/ (length*2))
           
          Using Python to show the overflow:
          >>> k= (maria_block_size - MAX_KEYPAGE_HEADER_SIZE - KEYPAGE_CHECKSUM_SIZE)/(length * 2)
          >>> k
          43
          >>> (max_rows/k)*(maria_block_size)
          94378690609676730368L
          which is greater than ULONG_MAX
          

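          The same arithmetic as the Python snippet above, sketched in C with the observed values; the unsigned __int128 temporary (a GCC/Clang extension) is used only to show that the true product does not fit in 64 bits and therefore wraps in the ulonglong computation:

            #include <stdio.h>

            int main(void)
            {
              unsigned long long max_rows         = 495395959010754340ULL; /* from lldb above */
              unsigned long long maria_block_size = 8192;
              unsigned long long k                = 43; /* usable bytes per key page / (length*2), as above */

              unsigned long long wrapped = (max_rows / k) * maria_block_size; /* wraps modulo 2^64 */
              unsigned __int128  exact   = (unsigned __int128) (max_rows / k) * maria_block_size;

              printf("wrapped contribution to tot_length: %llu\n", wrapped);
              printf("true product fits in 64 bits: %s\n",
                     exact > (unsigned __int128) ~0ULL ? "no" : "yes");  /* no: about 9.4e19 */
              return 0;
            }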

          Ok, thanks! Actually, what I've posted above also shows the overflow, but I've missed it.

          I agree with what the patch does. There is one detail missing:

          +    if (tot_length == ULLONG_MAX)
          +      continue;
          ....
          +      tot_length+= tot_length_part * maria_block_size;
          

          What if tot_length < ULLONG_MAX and tot_length_part * maria_block_size < ULLONG_MAX, but adding them overflows? Please add handling for this case too.

          Hopefully this was the last comment on the patch.

          psergei Sergei Petrunia added a comment
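          One possible way to handle the case raised above (a sketch of the review suggestion, not necessarily what was committed; the names follow the patch fragment quoted in the comment): saturate tot_length at ULLONG_MAX whenever either the multiplication or the addition would overflow.

            #include <limits.h>

            /* Hypothetical helper illustrating the suggested overflow handling:
               clamp tot_length to ULLONG_MAX instead of letting it wrap. */
            static void add_tot_length_part(unsigned long long *tot_length,
                                            unsigned long long tot_length_part,
                                            unsigned long long maria_block_size)
            {
              if (*tot_length == ULLONG_MAX)
                return;                                        /* already saturated */
              if (tot_length_part > ULLONG_MAX / maria_block_size ||
                  tot_length_part * maria_block_size > ULLONG_MAX - *tot_length)
                *tot_length = ULLONG_MAX;                      /* product or sum would overflow */
              else
                *tot_length += tot_length_part * maria_block_size;
            }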
          varun Varun Gupta (Inactive) made changes -
          Status In Review [ 10002 ] Stalled [ 10000 ]
          varun Varun Gupta (Inactive) made changes -
          Status Stalled [ 10000 ] In Progress [ 3 ]
          varun Varun Gupta (Inactive) made changes -
          Assignee Varun Gupta [ varun ] Sergei Petrunia [ psergey ]
          Status In Progress [ 3 ] In Review [ 10002 ]

          Ok to push

          psergei Sergei Petrunia added a comment
          psergei Sergei Petrunia made changes -
          Assignee Sergei Petrunia [ psergey ] Varun Gupta [ varun ]
          Status In Review [ 10002 ] Stalled [ 10000 ]
          varun Varun Gupta (Inactive) made changes -
          Status Stalled [ 10000 ] In Progress [ 3 ]
          varun Varun Gupta (Inactive) made changes -
          Fix Version/s 10.1.27 [ 22609 ]
          Fix Version/s 10.2 [ 14601 ]
          Fix Version/s 10.1 [ 16100 ]
          Resolution Fixed [ 1 ]
          Status In Progress [ 3 ] Closed [ 6 ]
          serg Sergei Golubchik made changes -
          Workflow MariaDB v3 [ 79285 ] MariaDB v4 [ 151563 ]

          People

            Assignee: varun Varun Gupta (Inactive)
            Reporter: Saysell Philip
            Votes: 0
            Watchers: 5

