  MariaDB Server / MDEV-19705

Assertion `tmp >= 0' failed in best_access_path

Details

    Description

      Reproducible on 10.4 with MyISAM, not with InnoDB.

      CREATE TABLE t1 (d varchar(10)) ENGINE=MyISAM;
      INSERT INTO t1 VALUES ('a'),('q');
       
      CREATE TABLE t2 (f varchar(10), a2 datetime, b int, a1 varchar(1024), pk int NOT NULL, PRIMARY KEY (pk), UNIQUE KEY (f,a1,a2), KEY f2 (f(4),a2)) ENGINE=MyISAM WITH SYSTEM VERSIONING;
      INSERT INTO t2 VALUES ('aaa','1985-09-06',-163,'s',1),('bbb','1995-01-05',3,'pucaz',2),('ccc','0000-00-00',NULL,'help',3),('ddd',NULL,618,'v',4),('eee','1995-12-20',410,'m',5),('ffq','1976-06-12 20:02:56',NULL,'POKNC',6),('dddd','0000-00-00',-328,'hgsu',7);
       
      SELECT t2.b FROM t1 JOIN t2 ON t1.d = t2.f WHERE t2.pk >= 20;
      

      10.4 6999da9b19c3c325

       
      mysqld: /10.4/sql/sql_select.cc:7660: void best_access_path(JOIN*, JOIN_TAB*, table_map, uint, bool, double, POSITION*, POSITION*): Assertion `tmp >= 0' failed.
      190606 17:54:02 [ERROR] mysqld got signal 6 ;
       
      sql/sql_select.cc:7663(best_access_path(JOIN*, st_join_table*, unsigned long long, unsigned int, bool, double, st_position*, st_position*))[0x558f269e9794]
      sql/sql_select.cc:9356(best_extension_by_limited_search(JOIN*, unsigned long long, unsigned int, double, double, unsigned int, unsigned int, unsigned int))[0x558f269f2938]
      sql/sql_select.cc:9433(best_extension_by_limited_search(JOIN*, unsigned long long, unsigned int, double, double, unsigned int, unsigned int, unsigned int))[0x558f269f3256]
      sql/sql_select.cc:8568(greedy_search(JOIN*, unsigned long long, unsigned int, unsigned int, unsigned int))[0x558f269ee9b5]
      sql/sql_select.cc:8134(choose_plan(JOIN*, unsigned long long))[0x558f269eca8c]
      sql/sql_select.cc:5480(make_join_statistics(JOIN*, List<TABLE_LIST>&, st_dynamic_array*))[0x558f269d9ff6]
      sql/sql_select.cc:2192(JOIN::optimize_inner())[0x558f269bae78]
      sql/sql_select.cc:1562(JOIN::optimize())[0x558f269b5041]
      sql/sql_select.cc:4590(mysql_select(THD*, TABLE_LIST*, unsigned int, List<Item>&, Item*, unsigned int, st_order*, st_order*, Item*, st_order*, unsigned long long, select_result*, st_select_lex_unit*, st_select_lex*))[0x558f269d20a9]
      sql/sql_select.cc:424(handle_select(THD*, LEX*, select_result*, unsigned long))[0x558f269a9091]
      sql/sql_parse.cc:6612(execute_sqlcom_select(THD*, TABLE_LIST*))[0x558f2692e7b5]
      sql/sql_parse.cc:3884(mysql_execute_command(THD*))[0x558f2691b914]
      sql/sql_parse.cc:8164(mysql_parse(THD*, char*, unsigned int, Parser_state*, bool, bool))[0x558f26936a76]
      sql/sql_parse.cc:1831(dispatch_command(enum_server_command, THD*, char*, unsigned int, bool, bool))[0x558f2690f70e]
      sql/sql_parse.cc:1362(do_command(THD*))[0x558f2690c64b]
      sql/sql_connect.cc:1403(do_handle_one_connection(CONNECT*))[0x558f26c81cc9]
      sql/sql_connect.cc:1307(handle_one_connection)[0x558f26c81693]
      perfschema/pfs.cc:1864(pfs_spawn_thread)[0x558f280155af]
      /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba)[0x7ff96566d6ba]
      x86_64/clone.S:111(clone)[0x7ff9648fe41d]
       
      Query (0x62b00007e290): SELECT t2.b  FROM t1 JOIN t2 ON t1.d = t2.f WHERE t2.pk >= 20
      
      

      On 10.3:
      mysqltest: At line 19: query 'CREATE TABLE t2 ( f varchar(10) , a2 datetime, b int , a1 varchar(1024) , pk int NOT NULL , PRIMARY KEY (pk), UNIQUE KEY (f,a1,a2), KEY f2 (f(4),a2)) ENGINE=MyISAM WITH SYSTEM VERSIONING' failed: 1071: Specified key was too long; max key length is 1000 bytes
      If I set a shorter key, the test passes.

      No visible problems on a non-debug build.

      Attachments

        Issue Links

          Activity

            First, let's just consider the case without SYSTEM VERSIONING. Running the SELECT and looking at the optimizer_trace:

             
                                "plan_prefix": ["t1"],
                                "table": "t2",
                                "best_access_path": {
                                  "considered_access_paths": [
                                    {
                                      "access_type": "ref",
                                      "index": "f2",
                                      "rows": 1.1e14,
                                      "cost": 2.1562,
                                      "chosen": true
                                    },
            

            So rows shows 1.1e14. The number of records in table t2 is actually 7, so this is some garbage data.
            We get the estimate from

            if (!(records= keyinfo->actual_rec_per_key(key_parts-1)))
            

            which returns the estimate stored in rec_per_key, so the rec_per_key estimates hold some garbage value for the key f2:

            (gdb) p keyinfo->rec_per_key[0]
              $18 = 288411799865327618
            (gdb) p keyinfo->rec_per_key[1]
              $19 = 4785147619639637 
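            To make the effect of those unfilled slots concrete, here is a minimal standalone C++ sketch (editor's illustration, not MariaDB code; the struct, its members, and the helper only model the keyinfo fields cited above, and the garbage values are the ones from the gdb output). Reading the last keypart's slot when the engine never wrote it yields an estimate that dwarfs the 7 rows actually in t2; the exact 1.1e14 shown in the trace may additionally involve scaling inside the server that is not modelled here.

            #include <cstdio>
            #include <vector>

            // Standalone model of the lookup above; not MariaDB code.
            struct KeyStatsModel {
              unsigned key_parts;
              std::vector<unsigned long> rec_per_key;   // one slot per SQL-layer keypart

              // Same shape as the keyinfo->actual_rec_per_key(key_parts-1) call above.
              unsigned long actual_rec_per_key(unsigned part) const {
                return rec_per_key[part];
              }
            };

            int main() {
              // Slots the engine filled for an ordinary 2-part key: sane estimates.
              KeyStatsModel filled{2, {4, 2}};
              // Slots that were never written; values taken from the gdb output above.
              KeyStatsModel unfilled{2, {288411799865327618UL, 4785147619639637UL}};

              double ok  = (double) filled.actual_rec_per_key(filled.key_parts - 1);
              double bad = (double) unfilled.actual_rec_per_key(unfilled.key_parts - 1);
              std::printf("sane estimate:    %.4g rows\n", ok);    // 2 rows
              std::printf("garbage estimate: %.4g rows\n", bad);   // ~4.8e15 rows
              return 0;
            }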
            

            The issue is that for long UNIQUE keys, for which we create a hash, rec_per_key is allocated for all the keyparts of the key, even though at the storage engine level there is only 1 keypart. At the SQL layer we have all the keyparts, but we don't make use of rec_per_key for such keys.
            So the fix would be to treat rec_per_key for long unique keys as having only 1 keypart, which means rec_per_key[0] would hold the estimate.

            varun Varun Gupta (Inactive) added a comment
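            A minimal standalone C++ sketch of the direction proposed in the comment above (editor's illustration, not the patch linked below; is_long_unique_hash is a hypothetical marker standing in for however the server identifies hash-backed long unique keys): for such keys only rec_per_key[0] is meaningful, so the estimate lookup should never index by the SQL-layer keypart count.

            #include <cstdio>
            #include <vector>

            // Standalone sketch of the proposed direction; not the actual patch.
            // `is_long_unique_hash` is a hypothetical marker for keys that are hash-backed
            // at the engine level while exposing several keyparts at the SQL layer.
            struct KeyModel {
              unsigned user_defined_key_parts;      // keyparts visible at the SQL layer
              bool is_long_unique_hash;
              std::vector<unsigned long> rec_per_key;
            };

            // Hypothetical estimate lookup: for long unique hash keys only slot 0 is
            // meaningful, so never index by the SQL-layer keypart count.
            static unsigned long ref_estimate(const KeyModel &k) {
              unsigned part = k.is_long_unique_hash ? 0 : k.user_defined_key_parts - 1;
              return k.rec_per_key[part];
            }

            int main() {
              // UNIQUE(f,a1,a2) as a long hash key: 3 SQL-layer keyparts, 1 engine keypart.
              // Only slot 0 ever gets a real statistic; the rest hold whatever was in memory.
              KeyModel long_unique{3, true, {1, 288411799865327618UL, 4785147619639637UL}};
              KeyModel ordinary{2, false, {4, 2}};

              std::printf("long unique hash key: %lu row(s) per lookup\n", ref_estimate(long_unique));
              std::printf("ordinary key:         %lu row(s) per lookup\n", ref_estimate(ordinary));
              return 0;
            }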
            varun Varun Gupta (Inactive) added a comment - Patch http://lists.askmonty.org/pipermail/commits/2019-June/013850.html

            How would this work in the optimizer, when there are three key parts but only one rec_per_key[] element?

            serg Sergei Golubchik added a comment

            serg, for unique keys with a hash the optimizer would only know that there is one keypart and would only try to use the first one. But the storage engine should set rec_per_key correctly, otherwise we would need to set it explicitly for these keys somehow.
            For the other keys everything remains the same.

            varun Varun Gupta (Inactive) added a comment
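            For the engine-side part mentioned above (setting rec_per_key correctly for the single hash keypart), here is a rough standalone sketch of the arithmetic that could fill that one slot; the function name, the rounding, and the fallback are editor's assumptions, not server code.

            #include <cstdio>

            // Rough sketch (assumption, not server code): rec_per_key[0] for a hash-backed
            // unique key derived as the average number of rows per distinct hash value,
            // rounded up, with a fallback of 1 for an empty table.
            static unsigned long rec_per_key_for_hash(unsigned long table_rows,
                                                      unsigned long distinct_hashes) {
              if (distinct_hashes == 0)
                return 1;                                     // empty table: fall back to 1
              return (table_rows + distinct_hashes - 1) / distinct_hashes;  // round up
            }

            int main() {
              // t2 from the test case: 7 rows and 7 distinct (f,a1,a2) combinations, so
              // ideally 7 distinct hash values and ~1 row per hash lookup.
              std::printf("rec_per_key[0] = %lu\n", rec_per_key_for_hash(7, 7));
              // With duplicates/collisions the estimate grows, e.g. 100 rows, 25 hashes:
              std::printf("rec_per_key[0] = %lu\n", rec_per_key_for_hash(100, 25));
              return 0;
            }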

            People

              varun Varun Gupta (Inactive)
              alice Alice Sherepa
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved:
