[MDEV-31067] selectivity_from_histogram >1.0 for col=const on DOUBLE_PREC_HB histogram Created: 2023-04-17  Updated: 2023-05-22  Resolved: 2023-05-22

Status: Closed
Project: MariaDB Server
Component/s: Optimizer
Affects Version/s: 10.4, 10.5, 10.6, 10.7, 10.8
Fix Version/s: 10.11.3, 10.4.29, 10.5.20, 10.6.13, 10.8.8, 10.9.6, 10.10.4

Type: Bug Priority: Blocker
Reporter: Sergei Petrunia Assignee: Sergei Petrunia
Resolution: Fixed Votes: 1
Labels: None

Issue Links:
relates to MDEV-31327 Histogram range selectivity estimates... Stalled

 Description   

This is a public part of TODO-3823.

For certain data distributions, the histogram code can produce a selectivity greater than 1.0 for a column=const equality on a DOUBLE_PREC_HB histogram (I think the same can be achieved with SINGLE_PREC_HB; the point is that it is not a JSON_HB histogram).

create table t10 (a int);
-- Fill the table; the INSERTs are provided below
analyze table t10 persistent for all;

set optimizer_trace=1;
explain select * from t10  where a in (91303);
+------+-------------+-------+------+---------------+------+---------+------+-------+----------+-------------+
| id   | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | filtered | Extra       |
+------+-------------+-------+------+---------------+------+---------+------+-------+----------+-------------+
|    1 | SIMPLE      | t10   | ALL  | NULL          | NULL | NULL    | NULL | 99840 |   100.00 | Using where |
+------+-------------+-------+------+---------------+------+---------+------+-------+----------+-------------+

select json_detailed(json_extract(trace, '$**.selectivity_for_columns')) from information_schema.optimizer_trace\G
*************************** 1. row ***************************
json_detailed(json_extract(trace, '$**.selectivity_for_columns')): [
    [
        {
            "column_name": "a",
            "ranges": 
            ["91303 <= a <= 91303"],
            "selectivity_from_histogram": 18.28543534
        }
    ]
]

INSERTs (synthetic data). The table ends up with 99840 rows, grouped so that each histogram bucket holds exactly 780 of them; the probed value 91303 lands in a bucket that covers only the values 91303..91305, i.e. a bucket with a very narrow value range:

create table bucket(a int);  # Number of rows in this table = number of rows in one histogram bucket.
insert into bucket select 1 from seq_1_to_780;
 
create table one_third_of_bucket(a int);  # one-third of a bucket
insert into one_third_of_bucket select 1 from seq_1_to_260;
 
create table t10 (a int);
insert into t10 select 0 from bucket, seq_1_to_4;
 
insert into t10 select 8693 from one_third_of_bucket;
insert into t10 select 8694 from one_third_of_bucket;
insert into t10 select 8695 from one_third_of_bucket;
 
 
insert into t10 select 34783 from one_third_of_bucket;
insert into t10 select 34784 from one_third_of_bucket;
insert into t10 select 34785 from one_third_of_bucket;
 
 
insert into t10 select 34785 from bucket, seq_1_to_8;
 
insert into t10 select 65214 from one_third_of_bucket;
insert into t10 select 65215 from one_third_of_bucket;
insert into t10 select 65216 from one_third_of_bucket;
 
insert into t10 select 65216 from bucket, seq_1_to_52;
 
insert into t10 select 65217 from one_third_of_bucket;
insert into t10 select 65218 from one_third_of_bucket;
insert into t10 select 65219 from one_third_of_bucket;
 
insert into t10 select 65219 from bucket;
 
 
insert into t10 select 73913 from one_third_of_bucket;
insert into t10 select 73914 from one_third_of_bucket;
insert into t10 select 73915 from one_third_of_bucket;
 
insert into t10 select 73915 from bucket, seq_1_to_40;
 
 
insert into t10 select 78257 from one_third_of_bucket;
insert into t10 select 78258 from one_third_of_bucket;
insert into t10 select 78259 from one_third_of_bucket;
 
insert into t10 select 91300 from one_third_of_bucket;
insert into t10 select 91301 from one_third_of_bucket;
insert into t10 select 91302 from one_third_of_bucket;
 
insert into t10 select 91302 from bucket, seq_1_to_6;
 
insert into t10 select 91303 from one_third_of_bucket;
insert into t10 select 91304 from one_third_of_bucket;
insert into t10 select 91305 from one_third_of_bucket;
 
insert into t10 select 91305 from bucket, seq_1_to_8;
 
insert into t10 select  99998 from one_third_of_bucket;
insert into t10 select  99999 from one_third_of_bucket;
insert into t10 select 100000 from one_third_of_bucket;



 Comments   
Comment by Sergei Petrunia [ 2023-04-17 ]

The problem is in Histogram::point_selectivity(), this part of code:

    else
    {
      /* 
        The value 'pos' fits within one single histogram bucket.
        ...
      */
      HERE! 

In particular, we arrive at the value that's greater than 1.0 here:

      /*
        So:
        - each bucket has the same #rows
        - values are uniformly distributed across the [min_value,max_value] domain.
  
        If a bucket has a value range that's N times bigger than average, then
        each value will have N times fewer rows than average.
      */
      sel= avg_sel * avg_bucket_width / current_bucket_width;
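
To see the failure concretely, here is a minimal standalone sketch of that arithmetic (not the server code; avg_sel and the two widths are made-up numbers, chosen so the result matches the 18.285 estimate in the trace above):

#include <cstdio>

int main()
{
  // Illustrative inputs, not values taken from the actual server run:
  double avg_sel= 0.036571;             // average fraction of rows per value
  double avg_bucket_width= 0.005;       // an average bucket spans 0.5% of the domain
  double current_bucket_width= 0.00001; // this bucket is 500x narrower

  // The heuristic from Histogram::point_selectivity():
  double sel= avg_sel * avg_bucket_width / current_bucket_width;

  printf("sel= %g\n", sel);  // prints "sel= 18.2855": a "selectivity" above 1.0
  return 0;
}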

Comment by Sergei Petrunia [ 2023-04-17 ]

Introduced in:

commit ad842b5f058d5342c22cdc86542baa2ae9db5e70
Author:	Sergey Petrunya <psergey@askmonty.org>  Wed Mar 26 17:55:00 2014
Committer:	Sergey Petrunya <psergey@askmonty.org>  Wed Mar 26 17:55:00 2014
Original File:	sql/sql_statistics.h
 
MDEV-5926: EITS: Histogram estimates for column=least_possible_value are wrong
[Attempt #2]

Comment by Sergei Petrunia [ 2023-04-17 ]

Trying to understand the logic behind the

 sel= avg_sel * avg_bucket_width / current_bucket_width;

So, we are looking to get a #rows estimate for a value X that falls within a single bucket. That is,

  bucket_Y.left_endpoint < position(X) < bucket_Y.right_endpoint

How many rows are in the bucket that have value X?

point_selectivity() has an avg_sel parameter, but that's a global per-table average.

Suppose the histogram for this particular table has two buckets:

bucketY1 = {left_endp1,  left_endp1 + 0.5} 
bucketY2 = {left_endp2,  left_endp2 + 0.0001}

both buckets hold the same number of rows.
But do they hold the same number of different values?

If we assume that the values are distributed uniformly across the 0.0 (min value) to 1.0 (max value) range, we might guess that bucketY1 has more different values in it than bucketY2.

This is what this code is trying to do. It assumes that buckets spanning a bigger value range have a lower avg_frequency (rows have different values; there are plenty of values to choose from). Conversely, a bucket that spans a small value range has fewer distinct values, so rows in that bucket will tend to share the same value.
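
A toy illustration of that assumption (a standalone sketch; the distinct-value count and the bucket shares are made up):

#include <cstdio>

int main()
{
  // Say the table has 1000 distinct values spread uniformly over the
  // [min_value, max_value] domain, and each bucket holds 10% of the rows.
  double n_distinct= 1000;
  double rows_per_bucket= 0.10;

  double width_Y1= 0.5;     // bucketY1 = {left_endp1, left_endp1 + 0.5}
  double width_Y2= 0.0001;  // bucketY2 = {left_endp2, left_endp2 + 0.0001}

  // Under the uniformity assumption, the number of distinct values in a
  // bucket is proportional to the bucket's width:
  printf("values in bucketY1: %g\n", n_distinct * width_Y1);  // 500
  printf("values in bucketY2: %g\n", n_distinct * width_Y2);  // 0.1 (!)

  // Estimated per-value selectivity = bucket's share of rows / values in it:
  printf("sel in bucketY1: %g\n", rows_per_bucket / (n_distinct * width_Y1));
  // -> 0.0002
  printf("sel in bucketY2: %g\n", rows_per_bucket / (n_distinct * width_Y2));
  // -> 1, i.e. the whole table: more rows than the bucket itself holds
  return 0;
}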

Comment by Sergei Petrunia [ 2023-04-17 ]

In the testcase for this MDEV, the bucket has a very narrow value range: its "right_endpoint - left_endpoint" is 500x smaller than that of the average bucket.

The code multiplies the estimate by this factor and ends up with a nonsensical estimate.

Comment by Sergei Petrunia [ 2023-04-18 ]

bb-10.6-mdev31067

Comment by Sergei Petrunia [ 2023-04-18 ]

Takeaways from the optimizer call:

Igor would prefer to see the "brave heuristic" removed completely. I'm hesitant, as that will cause more changes in the estimates (and hence changes in query plans).

But let me implement that in a patch and see whether that one gets agreement.

Comment by Sergei Petrunia [ 2023-04-19 ]

The alternative patch implementing suggestions from the above call: https://github.com/MariaDB/server/commits/bb-10.6-mdev31067-variant2. igor please review.

Comment by Sergei Petrunia [ 2023-04-19 ]

igor please review.

Comment by Igor Babaev [ 2023-04-24 ]

psergei Sergei,
1. This is a bug in the 10.0 code. Thus the fix should be applied starting from 10.4.
2. Let's call a value frequently used if its position in the histogram coincides with the beginning of two adjacent buckets. Whether the probed value is frequently used can easily be checked. The number of buckets whose starting point coincides with the position of the value gives us a good lower bound for the selectivity of such a value. For values that are not frequently used, we should calculate the average frequency over those values only; it will be less than the average frequency over all values.
Here's an example:
100 rows with 7 values and 5 buckets of 20 rows each; one value occurs in 40 rows and is the only frequently used value, while each remaining value occurs in 10 rows. The average frequency is 100/7 ≈ 14.3 rows, so the average selectivity is 14.3%, which is greater than the 10% expected for the non-frequently-used values.
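
The arithmetic of this example, spelled out as a standalone sketch (not proposed patch code):

#include <cstdio>

int main()
{
  // Igor's example: 100 rows, 7 distinct values, 5 buckets of 20 rows each;
  // one value occurs in 40 rows (it spans two whole buckets), the other six
  // values occur in 10 rows each.
  double total_rows= 100;
  double n_values= 7;
  double frequent_rows= 40;
  double n_frequent= 1;

  // Naive average frequency over ALL values: 14.3 rows, i.e. 14.3%:
  printf("avg over all values: %.1f rows = %.1f%%\n",
         total_rows / n_values,
         100 * (total_rows / n_values) / total_rows);

  // Average over the non-frequent values only: 10 rows, i.e. 10%:
  printf("avg over non-frequent values: %.1f rows = %.1f%%\n",
         (total_rows - frequent_rows) / (n_values - n_frequent),
         100 * ((total_rows - frequent_rows) / (n_values - n_frequent)) / total_rows);
  return 0;
}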

Comment by Sergei Petrunia [ 2023-04-27 ]

https://github.com/MariaDB/server/commits/bb-10.6-mdev31067-variant3 . igor please review.

Comment by Igor Babaev [ 2023-04-28 ]

In my opinion, the function Histogram::point_selectivity() that the patch has touched still requires some improvements.

Comment by Sergei Petrunia [ 2023-04-28 ]

igor,

You wrote: "Yet I really mind the code that calculates selectivity for popular values: it looks too complicated and your comment does not help to understand it."

How is this part of this MDEV? Neither the testcase for this MDEV nor any of the fixes touch that part of the code.

Comment by Sergei Golubchik [ 2023-04-28 ]

OK to push variant 2: commit https://github.com/MariaDB/server/commit/a87c17ab566def817da2f79fb7badb7318a1e311

It is a conservative upper-bound estimate, and it delivers lower estimates than the complex popular-values magic of variant 3. Thus I see no reason to use the complex approach when the simple and safe approach produces better estimates.
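
For illustration, a minimal sketch of the "conservative upper bound" idea as I read this comment -- this is not the code of the actual commit: a value that fits inside a single bucket cannot match more rows than the bucket holds, so the point estimate is capped by the bucket's share of the table.

#include <algorithm>
#include <cstdio>

// Hypothetical helper, not the server's function: cap the per-value
// estimate by the fraction of rows that one bucket holds.
double point_selectivity_sketch(double avg_sel, double bucket_share)
{
  return std::min(avg_sel, bucket_share);
}

int main()
{
  double bucket_share= 1.0 / 128;  // assume 128 equal-height buckets
  // The result can never exceed 1/128, let alone 1.0:
  printf("%g\n", point_selectivity_sketch(0.036571, bucket_share)); // 0.0078125
  return 0;
}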
