[MDEV-18073] get_range_limit_read_cost() doesn't adjust LIMIT for the range access Created: 2018-12-24  Updated: 2019-01-23

Status: Open
Project: MariaDB Server
Component/s: Optimizer
Affects Version/s: 10.1, 10.2, 10.3, 10.4
Fix Version/s: 10.4

Type: Bug Priority: Major
Reporter: Sergei Petrunia Assignee: Sergei Petrunia
Resolution: Unresolved Votes: 1
Labels: order-by-optimization

Issue Links:
Relates
relates to MDEV-17761 Odd optimizer choice with ORDER BY LI... Stalled
relates to MDEV-18079 Opportunistic optimization for ORDER ... Open

 Description   

Basically, get_range_limit_read_cost() ignores the selectivity of the range access being considered.

Consider an example:

create table t1  (
  pk int not null,
  a int,
  b int,
  c int,
  filler char(100),
  KEY a_c(a,c),
  KEY a_b(a,b)
);
 
insert into t1 
select a, a,a,a, 'filler-dataaa' from test.one_k;
 
update t1 set a=1 where pk between 0 and 150;
update t1 set b=2 where pk between 0 and 20;

Let's debug this query:

explain 
select * from t1
where a=1 and b=2
order by c limit 10;

Put a breakpoint in test_if_cheaper_ordering.
We will have:

  • select_limit=10
  • refkey_rows_estimate=21 (this is from range on index a_b)
  • table_records=1000

The selectivity of the WHERE condition is about 2%:

refkey_rows_estimate / table_records= 0.021  

This code is executed:

          select_limit= (ha_rows) (select_limit *
                                   (double) table_records /
                                    refkey_rows_estimate);

and adjusts select_limit to be 10 / 0.021 ≈ 476.
The meaning of this number is:

Assuming we do a full index scan on an index compatible with the ORDER BY clause,
how many records would we need to scan before we get the LIMIT (10) matches we want?

Good so far.
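With the numbers above, the adjustment is plain arithmetic. A quick sketch in Python (variable names mirror the server code; the int() cast stands in for the ha_rows cast):

```python
# Sketch of the select_limit adjustment in test_if_cheaper_ordering,
# using the numbers from this example.
select_limit = 10          # LIMIT clause
refkey_rows_estimate = 21  # rows matching a=1 AND b=2 (range on a_b)
table_records = 1000

# Selectivity of the WHERE condition: ~2%
selectivity = refkey_rows_estimate / table_records  # 0.021

# Scale LIMIT up: how many index entries a full ordered index scan
# must visit before it has produced LIMIT matching rows.
# The ha_rows cast truncates, hence int().
select_limit = int(select_limit * table_records / refkey_rows_estimate)
print(select_limit)  # 476
```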

Then, we enter get_range_limit_read_cost(..., rows_limit=476, ...) and arrive here:

    if (best_rows > rows_limit)
    {
      /*
        LIMIT clause specifies that we will need to read fewer records than
        quick select will return. Assume that quick select's cost is
        proportional to the number of records we need to return (e.g. if we 
        only need 1/3rd of records, it will cost us 1/3rd of quick select's
        read time)
      */
      best_cost *= rows_limit / best_rows;
    }
    *read_time= best_cost;

Look at the if-condition.
It compares best_rows, the number of rows the quick select will return (151 rows, all satisfying a=1), with rows_limit, which is the number of rows to read from a full index scan before the condition a=1 is even applied. The two numbers are counted over different row streams.

This seems wrong.

Instead of rows_limit, we should use the number of rows to read from the quick select before we find #LIMIT matches.
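To make the mismatch concrete, here is a rough illustration with this bug's numbers. This is only a sketch of the reasoning, not the actual patch (the real formula is in the linked commit):

```python
# Illustration (not the actual patch): why comparing rows_limit
# against best_rows mixes two different row counts.
limit = 10
best_rows = 151            # rows the quick select on a_c returns (a=1)
rows_limit = 476           # rows a FULL index scan reads to find 10 matches
refkey_rows_estimate = 21  # rows matching the whole WHERE (a=1 AND b=2)

# Current (buggy) check: 151 > 476 is false, so the quick select's
# cost is never scaled down, even though it can stop early.
scale_applied = best_rows > rows_limit  # False

# What we actually want: rows read FROM THE QUICK SELECT before
# LIMIT matches are found.  Among the 151 a=1 rows, 21 also have
# b=2, so we expect a match roughly every 151/21 rows:
rows_from_quick = limit * best_rows / refkey_rows_estimate  # ~71.9
cost_fraction = min(rows_from_quick / best_rows, 1.0)       # ~0.476
```

So the quick select should be credited with reading only ~72 of its 151 rows, i.e. roughly half its full cost, whereas the current code applies no discount at all.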



 Comments   
Comment by Sergei Petrunia [ 2018-12-24 ]

A patch: http://lists.askmonty.org/pipermail/commits/2018-December/013230.html

Comment by Sergei Petrunia [ 2018-12-24 ]

Playing with the patch and the provided example dataset.
Before the patch:

mysql> explain  select * from t1 where a=1 and b=2 order by c limit 10;
+------+-------------+-------+------+---------------+------+---------+-------------+------+-----------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref         | rows | Extra                       |
+------+-------------+-------+------+---------------+------+---------+-------------+------+-----------------------------+
|    1 | SIMPLE      | t1    | ref  | a_c,a_b       | a_b  | 10      | const,const |   21 | Using where; Using filesort |
+------+-------------+-------+------+---------------+------+---------+-------------+------+-----------------------------+

After the patch:

mysql> explain  select * from t1 where a=1 and b=2 order by c limit 10;
+------+-------------+-------+-------+---------------+------+---------+------+------+-------------+
| id   | select_type | table | type  | possible_keys | key  | key_len | ref  | rows | Extra       |
+------+-------------+-------+-------+---------------+------+---------+------+------+-------------+
|    1 | SIMPLE      | t1    | range | a_c,a_b       | a_c  | 5       | NULL |  151 | Using where |
+------+-------------+-------+-------+---------------+------+---------+------+------+-------------+

Comment by Varun Gupta (Inactive) [ 2018-12-24 ]

The dataset I created:
create table t0 (a int);
INSERT INTO t0 VALUES (0),(0),(0),(0),(2),(0),(0),(1),(1),(0);

CREATE TABLE t1 (
a int(11) DEFAULT NULL,
b int(11) DEFAULT NULL,
d int(11) DEFAULT NULL,
KEY b_2 (b),
KEY b_3 (a,b),
KEY b_4 (a,d)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

insert into t1 select A.a , B.a, C.a from t0 A, t0 B, t0 C, t0 D;

MariaDB [test]> analyze select a,b,d from t1 where a=1 and d=2 order by b;
+------+-------------+-------+------+---------------+------+---------+-------------+------+--------+----------+------------+-----------------------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref         | rows | r_rows | filtered | r_filtered | Extra                       |
+------+-------------+-------+------+---------------+------+---------+-------------+------+--------+----------+------------+-----------------------------+
|    1 | SIMPLE      | t1    | ref  | b_3,b_4       | b_4  | 10      | const,const |  200 | 200.00 |   100.00 |     100.00 | Using where; Using filesort |
+------+-------------+-------+------+---------------+------+---------+-------------+------+--------+----------+------------+-----------------------------+
1 row in set (0.022 sec)

MariaDB [test]> analyze select a,b,d from t1 where a=1 and d=2 order by b limit 1;
+------+-------------+-------+-------+---------------+------+---------+-------------+-------+--------+----------+------------+-------------+
| id   | select_type | table | type  | possible_keys | key  | key_len | ref         | rows  | r_rows | filtered | r_filtered | Extra       |
+------+-------------+-------+-------+---------------+------+---------+-------------+-------+--------+----------+------------+-------------+
|    1 | SIMPLE      | t1    | index | b_3,b_4       | b_2  | 5       | const,const | 10266 | 288.00 |     1.95 |       0.35 | Using where |
+------+-------------+-------+-------+---------------+------+---------+-------------+-------+--------+----------+------------+-------------+
1 row in set (0.026 sec)

So with limit=1 we change from ref to index access. But it is strange that we use b_2 instead of b_3: the trace rejects b_3 because it uses more keyparts than the current best (b_2).

Comment by Varun Gupta (Inactive) [ 2018-12-24 ]

Let's use the optimizer trace to trace test_if_cheaper_ordering:

          {
            "reconsidering_access_paths_for_index_ordering": {
              "clause": "ORDER BY",
              "fanout": 1,
              "read_time": 82.001,
              "table_name": "t1",
              "rows_estimation": 200,
              "possible_keys": [
                {
                  "table_name": "t1",
                  "index": "b_2",
                  "can_resolve_order": true,
                  "updated_limit": 51,
                  "index_scan_time": 51,
                  "records": 10266,
                  "chosen": true
                },
                {
                  "table_name": "t1",
                  "index": "b_3",
                  "can_resolve_order": true,
                  "updated_limit": 51,
                  "range_scan_time": 0.4074,
                  "index_scan_time": 0.4074,
                  "records": 2000,
                  "chosen": false,
                  "cause": "keyparts_greater_than_the_current_best_keyparts"
                },
                {
                  "table_name": "t1",
                  "index": "b_4",
                  "can_resolve_order": false,
                  "cause": "not_usable_index_for_the query"
                }
              ]
            }
          }
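The updated_limit=51 in the trace is the same LIMIT adjustment described in the issue. A quick check with the numbers from the trace above:

```python
# updated_limit in the optimizer trace: LIMIT scaled by the inverse
# selectivity of the WHERE condition (numbers from the trace above).
select_limit = 1        # ... order by b limit 1
rows_estimation = 200   # ref estimate for a=1 AND d=2 on b_4
table_records = 10266   # InnoDB's estimate of rows in t1

updated_limit = int(select_limit * table_records / rows_estimation)
print(updated_limit)  # 51
```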

Comment by Sergei Petrunia [ 2019-01-23 ]

Pushed the patch that fixes the computation into 10.4
(TODO: look again at the above two comments and decide if/what should be done for them [and in the scope of this MDEV or not])
