[MDEV-26519] JSON Histograms: improve histogram collection Created: 2021-09-01  Updated: 2023-03-21  Resolved: 2022-01-21

Status: Closed
Project: MariaDB Server
Component/s: Optimizer
Fix Version/s: 10.8.1

Type: Task Priority: Blocker
Reporter: Sergei Petrunia Assignee: Sergei Petrunia
Resolution: Fixed Votes: 0
Labels: Preview_10.7, Preview_10.8, eits

Issue Links:
PartOf
is part of MDEV-21130 Histograms: use JSON as on-disk format Closed
is part of MDEV-27373 Q1 2022 release merge Closed
Problem/Incident
causes MDEV-26709 JSON histogram may contain bucketS th... Closed
causes MDEV-26710 Histogram field in mysql.column_stats... Closed
causes MDEV-26711 JSON histograms become invalid due to... Closed
causes MDEV-26718 Query with: Cross join, Non-indexed c... Closed
causes MDEV-26724 Endless loop in json_escape_to_string... Closed
causes MDEV-26751 Deficiencies in MTR coverage for JSON... Closed
causes MDEV-26801 Valgrind/MSAN errors in Column_statis... Closed
causes MDEV-26885 Estimation for filtered rows less pre... Closed
causes MDEV-26886 Estimation for filtered rows less pre... Closed
causes MDEV-26892 JSON histograms become invalid with a... Closed
causes MDEV-26901 Estimation for filtered rows less pre... Closed
causes MDEV-26911 Unexpected ER_DUP_KEY, ASAN errors, d... Closed
causes MDEV-27203 Valgrind / MSAN errors in Histogram_j... Closed
causes MDEV-27229 Estimation for filtered rows less pre... Closed
causes MDEV-27230 Estimation for filtered rows less pre... Closed
causes MDEV-27243 Estimation for filtered rows less pre... Closed
causes MDEV-27492 DOUBLE_PREC_HB histogram has poor est... Stalled
causes MDEV-27497 DOUBLE_PREC_HB histogram produces wro... Stalled
causes MDEV-30097 Assertion `low == (int)buckets.size()... Open
Relates
relates to MDEV-27472 ANALYZE: r_filtered=100 may look conf... Stalled
relates to MDEV-27495 Less precise filtered estimation with... Closed
relates to MDEV-26764 JSON_HB Histograms: handle BINARY and... Closed
relates to MDEV-26849 JSON Histograms: point selectivity es... Closed
relates to MDEV-27062 Make histogram_type=JSON_HB the new d... Closed

 Description   

The histograms as defined in MDEV-21130 follow the approach used by SINGLE_PREC_HB/DOUBLE_PREC_HB histograms and put each bucket bound exactly after
a certain fraction of rows.

1. Popular values should have their own buckets

For example, consider a table with these values (each character is one row):

  aaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbccdefghijkilmnopq
  |     |     |     |     |     |     |     |     |
  0     1     2     3     4     5     6     7     8

A histogram with 8 buckets will have these buckets:

1. a-a
2. a-a
3. a-a
4. a-a
5. a-b
6. b-f
7. f-i
8. i-q

The estimation function will be able to see that 'a' is a popular value, but it can only determine its frequency with bucket_size precision.

However, a better approach would be to put all 'a' values in one bucket:

  aaaaaaaaaaaaaaaaaaaaaaaaaaaabbbbccdefghijkilmnopq
  |                           |  |  |  |  |  |  | |
  0                           1  2  3  4  5  6  7 8

This gives a more precise frequency for the value 'a'.
It also frees up buckets for the other values.
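The loss of precision with plain equal-height buckets can be sketched in a few lines of Python (illustrative only, not server code), using the example column above:

```python
from collections import Counter

# The example column above: 28 'a' rows, then a tail of rarer values.
data = sorted("a" * 28 + "b" * 4 + "c" * 2 + "defghijkilmnopq")
n_buckets = 8
bucket_size = len(data) / n_buckets           # 49 / 8 = 6.125 rows per bucket

# Equal-height histogram: the bound of bucket i is the value found after
# (i + 1) * bucket_size rows.
bounds = [data[int((i + 1) * bucket_size) - 1] for i in range(n_buckets)]
print(bounds)                                  # the first four bounds are all 'a'

# From the histogram alone, count('a') is only known to lie between
# 4 and 5 bucket_sizes (24.5 .. 30.625 rows); the true count is 28.
low = bounds.count("a") * bucket_size
print(low, Counter(data)["a"])                 # 24.5 28
```

With a dedicated bucket for 'a', the histogram would record its exact row fraction instead of a bucket_size-wide range.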

Other databases

  • MySQL does something like this.
  • PostgreSQL collects MCVs (Most Common Values) and removes them from consideration before building the histogram.

2. Store the number of distinct values in each bucket

Again, following MySQL here: store the number of distinct values in each bucket.

3. For strings, store only a prefix

Store only a 40-character prefix.
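A hypothetical bucket object shows why storing ndv helps (the field names below are invented for illustration; the actual JSON_HB on-disk format is defined in MDEV-21130):

```python
# Field names here are illustrative, not the exact JSON_HB format.
bucket = {
    "start": "b",     # lowest value covered by the bucket
    "end": "f",       # bucket's upper bound (for strings: a 40-char prefix)
    "size": 0.125,    # fraction of the table's rows in this bucket
    "ndv": 5,         # number of distinct values in the bucket
}

# For an equality predicate col = 'c' that falls inside this bucket,
# assume the bucket's rows are spread evenly over its distinct values:
selectivity = bucket["size"] / bucket["ndv"]
print(selectivity)    # 0.025
```

Without ndv, the estimator has to guess some fixed fraction of the bucket's size for an equality predicate.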



 Comments   
Comment by Sergei Petrunia [ 2021-09-07 ]

Note for those who were asking "why is it not done like in MySQL 8": MySQL 8 has some properties that I doubt we will want to have: https://bugs.mysql.com/bug.php?id=104789

Comment by Sergei Petrunia [ 2021-09-10 ]

Suggestion #1:

"Let each value that's larger than one bucket be in its own bucket"

However, this may cause us to generate close to 2*N buckets (where N is the
intended number of buckets). The idea is that popular values are separated
by a small number of unpopular values.

0123456789 0123456789 0123456789 0123456789  --  numbers, just for reference
 AAAAAAAAb bCCCCCCCCd dEEEEEEEEf fGGGGGGG..  --  data (one character is one row)

  • suppose bucket_size=5
  • capital letters are popular values (each takes >5 rows)
  • small letters are non-popular values

Make each "popular" value take a bit more than bucket_size rows. The table can
have slightly fewer than N such "popular" values (call this number N1).

Between every two consecutive popular values, put a few rows with unpopular
values. Then each popular value gets a bucket and each gap gets a bucket,
which gives 2*N1 buckets.
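The worst case can be sketched as follows (an illustrative sketch, not server code; the data layout mirrors the AAAAAAAAb bCCCCCCCCd d... example above):

```python
from itertools import groupby

bucket_size = 5
n1 = 4                                    # number of popular values (N1)

# Each popular value takes a bit more than bucket_size rows, with a
# 2-row gap of unpopular values between consecutive popular values.
data = []
for i in range(n1):
    data += [chr(ord("A") + i)] * (bucket_size + 1)   # popular value
    data += [chr(ord("a") + i)] * 2                   # thin gap

# Apply Suggestion #1 literally: any value with more than bucket_size
# rows gets its own bucket; rows between popular values accumulate into
# buckets of their own (each gap here is < bucket_size, so one bucket
# per gap).
buckets = 0
pending_gap = 0
for value, run in groupby(data):
    n = sum(1 for _ in run)
    if n > bucket_size:
        if pending_gap:
            buckets += 1          # flush the gap before the popular value
            pending_gap = 0
        buckets += 1              # the popular value itself
    else:
        pending_gap += n
if pending_gap:
    buckets += 1

print(buckets, "buckets for intended", n1)   # 2 * n1 buckets
```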

Comment by Sergei Petrunia [ 2021-09-10 ]

Suggestion #2:
Walk through the values and only put the value into a separate bucket if it covers
the current position + the whole next bucket:

Here, D is not put into a separate bucket:

 |===============|===============|
  aaabbccDDDDDDDDDDDDDDDDDDDDDDDee 

Here F is put into a separate bucket:

 |===============|===============|
  aaabbccFFFFFFFFFFFFFFFFFFFFFFFFFFF

For values that take between 1 and 2 buckets, getting their own bucket is
a matter of luck.

A popular value that is put into one larger bucket of its own "saves" buckets, since its rows would otherwise span several of them.
When this happens, the generated histogram will have fewer buckets than requested.

(The alternative is to make the subsequent buckets have fewer rows, but we are
not doing that in this MDEV.)
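The rule of Suggestion #2 can be sketched like this (a minimal illustrative sketch, not the server's implementation; rows must already be in sorted order, and the two calls reproduce the D/F examples above with bucket_size=16):

```python
from itertools import groupby

def build_buckets(rows, bucket_size):
    """Returns a list of (end_value, row_count, own_bucket) tuples."""
    buckets, current, last = [], 0, None
    for value, run in groupby(rows):
        n = sum(1 for _ in run)
        # A value gets its own bucket only if it covers the rest of the
        # current bucket plus the entire next bucket.
        if n >= (bucket_size - current) + bucket_size:
            if current:
                buckets.append((last, current, False))  # close partial bucket
            buckets.append((value, n, True))            # dedicated bucket
            current = 0
        else:
            current += n
            if current >= bucket_size:
                buckets.append((value, current, False))
                current = 0
        last = value
    if current:
        buckets.append((last, current, False))
    return buckets

# 23 'D' rows don't cover position 7 plus a whole next bucket (25 rows),
# so 'D' does not get its own bucket:
print(build_buckets("aaabbcc" + "D" * 23 + "ee", 16))
# 27 'F' rows do, so 'F' gets a bucket of its own:
print(build_buckets("aaabbcc" + "F" * 27, 16))
```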

Comment by Sergei Petrunia [ 2021-09-10 ]

https://github.com/MariaDB/server/tree/bb-10.7-mdev26519

Comment by Sergei Petrunia [ 2021-09-10 ]

The patch implements Suggestion #2 described above.

Comment by Ian Gilfillan [ 2021-09-22 ]

For anyone wanting to preview the feature, see https://mariadb.org/10-7-preview-feature-json-histograms/

Comment by Elena Stepanova [ 2021-12-08 ]

The 10.8 branch to test is preview-10.8-MDEV-26519-json-histograms

Comment by Elena Stepanova [ 2021-12-14 ]

Here are the acceptance criteria I am planning to apply to this preview (minor modifications are possible):

1) At least 100x5min test runs for estimation precision checks should pass
Note: This criterion is made up to replace the absent requirements. The constants below were chosen arbitrarily and can be adjusted if there is demand and interested parties agree.
Note: "Special case" or "corner case" exceptions will be accepted, but only if they are officially documented in a form understandable to end users.
The test is performed on a non-debug build as follows:

  • two servers with mildly randomized identical configurations are run using the same binaries from the preview branch, the test server running with JSON_HB and the baseline with DOUBLE_PREC_HB, both with the default histogram_size;
  • the servers execute randomized single-table ANALYZE FORMAT=JSON SELECT statements on randomized table structures and data, with table size mostly ranging from 20 to 5000 rows;
  • only queries with rows value >= 20 are looked at;
  • only queries for which on the baseline server the precision of filtered estimation is within 5% are looked at;
  • an error is returned if the precision of filtered estimation on the test server is more than 20% off.

2) At least 100x5min test runs for histogram health checks and basic server stability should show no feature-specific problems or regressions compared to vanilla 10.8.
The stress test is performed on a debug ASAN build with heavily randomized configurations as follows:

  • the standard generic regression test combinations (random DML+DDL) are used, with the addition of histogram_type=JSON_HB and frequent ANALYZE TABLE .. PERSISTENT FOR ALL;
  • periodically, mysql.column_stats.histogram for existing JSON histograms is checked to verify that
    • JSON_VALID is true,
    • the actual histogram size is less than or equal to the histogram_size value,
    • the sizes of all buckets sum to 1.
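The periodic histogram checks above could be sketched as follows (a sketch only; the "buckets"/"size" field names are assumed for illustration, not the actual stored format or verification scripts):

```python
import json

def check_histogram(raw, histogram_size):
    """Sketch of the three health checks; field names are assumed."""
    if len(raw) > histogram_size:              # size must not exceed histogram_size
        return False
    try:
        hist = json.loads(raw)                 # JSON_VALID must hold
    except ValueError:
        return False
    total = sum(b["size"] for b in hist["buckets"])
    return abs(total - 1.0) < 1e-6             # bucket sizes must sum to 1

print(check_histogram('{"buckets": [{"size": 0.5}, {"size": 0.5}]}', 100))  # True
```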

3) MTR test runs (subject to general skip lists) on MSAN/ASAN/UBSAN debug builds, big+nm and big+ps, with --mysqld=--histogram-type=JSON_HB, may only return legacy failures and understandable mismatch/test failures caused by the non-default option.

4) MTR test runs (subject to general skip lists) on MSAN/ASAN/UBSAN debug builds, big+nm and big+ps, may only return legacy failures.


Test                                         Updated     Last tested commit  Notes
Estimation precision                         2022-01-16  348d3788            A few failures similar to those already ruled out as unrelated issues
Health/stability                             2022-01-16  348d3788            18% of tests failed for unrelated reasons
ASAN/MSAN/UBSAN big/nm+ps MTR                2022-01-16  746fc9e5            A few unrelated sporadic failures
ASAN/MSAN/UBSAN big/nm+ps MTR with JSON_HB   2022-01-16  746fc9e5            A few unrelated sporadic failures
Comment by Elena Stepanova [ 2022-01-17 ]

I think preview-10.8-MDEV-26519-json-histograms as of commit 746fc9e5 can be pushed into 10.8 main and released with 10.8.1.

The difference between 746fc9e5 and 348d3788 is in test files only, so tests performed on 348d3788 do not need to be re-run.

Please note that testing of the feature was limited to its technical functionality (histogram precision) and the absence of functional/stability regressions. Due to the specifics of the feature, this does not reflect the end-user experience: users don't use histograms directly, so their precision as such doesn't concern them; what matters is the final performance of query execution, and that wasn't part of the feature testing, as it was ruled out at early stages as irrelevant to the task scope. This is reasonable for the feature itself (MDEV-21130/MDEV-26519), but for enabling it by default (MDEV-27062) I would recommend some comparative performance testing, at least with widely recognized workloads, possibly with table structures tuned to extend histogram usage.

Comment by Sergei Petrunia [ 2022-01-21 ]

Pushed into 10.8. JSON_HB histograms are NOT enabled by default.

Generated at Thu Feb 08 09:45:55 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.