This prevents us from supporting other kinds of histograms.
The first low-hanging fruit would be to store the histogram bucket bounds precisely (like MySQL and PostgreSQL do, for example).
The idea of this MDEV is to switch to JSON as storage format for histograms.
If we do that, it will:
- improve the histogram precision
- allow DBAs to examine the histograms
- enable other histogram types to be collected/used.
Milestone-1:
Let histogram_type have another possible value, tentative name "JSON".
When that is set, let ANALYZE TABLE collect a stub JSON "histogram":
{ "hello":"world" }
That is, the following should work:
set histogram_type='json';
analyze table t1 persistent for all;
select histogram from mysql.column_stats where table_name='t1';
This should produce {"hello":"world"}.
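A minimal sketch of the code-level change, assuming the histogram type enum in sql/sql_statistics.h (the value name JSON_HB is taken from later comments; the stub constant below is hypothetical):

enum Histogram_type { SINGLE_PREC_HB, DOUBLE_PREC_HB, JSON_HB };

/* Hypothetical stub: when histogram_type is set to the new value,
   ANALYZE stores this fixed document instead of a binary histogram. */
static const char *json_histogram_stub= "{ \"hello\":\"world\" }";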
Milestone-2: produce JSON with histogram.
- the exact format is not specified yet; for now, print the bucket endpoints and produce output like this:
[
"value1",
"value2",
...
]
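A hypothetical sketch of a serializer for this format (std::string stands in for the server's String class; note that real code must JSON-escape the values, cf. json_escape_to_string in the linked MDEV-26724):

#include <string>
#include <vector>

/* Emit a JSON array of bucket endpoint values, one per line, matching
   the format above. Escaping is omitted in this sketch. */
std::string histogram_to_json(const std::vector<std::string> &endpoints)
{
  std::string out= "[\n";
  for (size_t i= 0; i < endpoints.size(); i++)
  {
    out += "  \"" + endpoints[i] + "\"";
    if (i + 1 < endpoints.size())
      out += ",";
    out += "\n";
  }
  return out + "]";
}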
Milestone-2, part#2: make mysql.column_stats.histogram a blob.
Milestone-3: Parse the JSON back into an array
Figure out how to use the JSON parser.
Parse back the JSON data produced in Milestone-2. For now, just print the parsed values to stderr.
(Additional input provided on Zulip re parsing valid/invalid JSON histograms)
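A rough sketch of such a parse function using the server's json_lib (the function name here is hypothetical; consult json_lib.h for the exact interface and treat the calls below as approximate):

#include <stdio.h>
#include "json_lib.h"

/* Walk the Milestone-2 array of bucket endpoints and print each scalar
   value to stderr. Returns true on parse error. */
static bool print_histogram_values(CHARSET_INFO *cs,
                                   const uchar *json, size_t len)
{
  json_engine_t je;
  if (json_scan_start(&je, cs, json, json + len))
    return true;                          /* not valid JSON at all */

  while (json_scan_next(&je) == 0)
  {
    if (je.state == JST_VALUE)
    {
      if (json_read_value(&je))
        return true;                      /* malformed value */
      if (json_value_scalar(&je))
        fprintf(stderr, "histogram value: %.*s\n",
                (int) je.value_len, (const char *) je.value);
    }
  }
  return je.s.error != 0;                 /* report trailing parse errors */
}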
Milestone-4: Make the code support different kinds of Histograms
Currently, there's only one type of histogram.
A smaller issue: histogram lookup functions assume the histogram stores fractions, not values.
A bigger issue: memory allocation for histograms is decoupled from reading the histograms. See alloc_statistics_for_table, read_histograms_for_table.
The histogram object lives in a data structure that is bzero'ed first and filled later (IIRC there was a bug, since fixed, where the optimizer attempted to use a bzero'ed histogram).
Can histograms be collected or loaded in parallel by several threads? This was an (unintentional?) possibility, but it was later disabled (see the TABLE_STATISTICS_CB object and its use).
Step 1: Separate classes for binary and JSON histograms
Need to introduce
class Histogram -- interface, no data members.
class Histogram_binary : public Histogram
class Histogram_json : public Histogram
and a factory function
Histogram *create_histogram(Histogram_type)
For now, let Histogram_json::point_selectivity() and Histogram_json::range_selectivity() return fixed values of 0.1 and 0.5, respectively.
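A sketch of this split (parameter lists mirror the existing fraction-based methods and are placeholders until Milestone 5.1; Histogram_binary's bodies are defined elsewhere):

enum Histogram_type { SINGLE_PREC_HB, DOUBLE_PREC_HB, JSON_HB };

class Histogram                           /* interface, no data members */
{
public:
  virtual double point_selectivity(double pos, double avg_sel)= 0;
  virtual double range_selectivity(double min_pos, double max_pos)= 0;
  virtual ~Histogram() {}
};

class Histogram_binary : public Histogram
{
public:
  /* existing fraction-based logic, elided in this sketch */
  double point_selectivity(double pos, double avg_sel) override;
  double range_selectivity(double min_pos, double max_pos) override;
};

class Histogram_json : public Histogram
{
public:
  double point_selectivity(double pos, double avg_sel) override
  { return 0.1; }                         /* fixed stub, per Step 1 */
  double range_selectivity(double min_pos, double max_pos) override
  { return 0.5; }                         /* fixed stub, per Step 1 */
};

Histogram *create_histogram(Histogram_type type)
{
  if (type == JSON_HB)
    return new Histogram_json;
  return new Histogram_binary;
}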
Step 2: Demonstrate saving/loading of histograms
At this point, the code can already:
- collect a JSON histogram and save it;
- when loading a histogram, tell from the histogram_type column that a JSON histogram is being loaded, create a Histogram_json object, and invoke the parse function.
The parse function at the moment only prints to stderr.
However, we should catch parse errors and make sure they are reported to the client.
The test may look like this:
INSERT INTO mysql.column_stats VALUES('test','t1','column1', .... '[invalid, json, data]');
FLUSH TABLES;
# this should print some descriptive error text
--error NNNN
select * from test.t1;
Milestone-5: Parse the JSON data into a structure that allows lookups.
The structure is
std::vector<std::string>
and it holds the data in KeyTupleFormat. (See the comments below for the reasoning; there was a suggestion to use in_vector, which is what IN subqueries use, but it didn't work out.)
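A sketch of a lookup over that structure (assuming the stored key images are byte-wise comparable, which KeyTupleFormat is designed to be; the function name is hypothetical):

#include <algorithm>
#include <string>
#include <vector>

/* Bucket endpoints in KeyTupleFormat, in ascending order after parsing
   the JSON histogram. */
std::vector<std::string> histogram_bounds;

/* Index of the first bucket whose endpoint is >= key. std::string's
   byte-wise ordering stands in for KeyTupleFormat's memcmp ordering. */
size_t find_bucket(const std::string &key)
{
  return std::lower_bound(histogram_bounds.begin(),
                          histogram_bounds.end(),
                          key) - histogram_bounds.begin();
}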
Milestone 5.1 (aka Milestone 44)
Make a function to estimate selectivity using the data structure specified in the previous milestone.
Make range_selectivity() accept key_range parameters.
(Currently, it accepts fractions, which are only suitable for binary histograms.)
This means Histogram_binary will need access to min_value and max_value to compute the fractions.
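A hypothetical sketch of the resulting interface (the server's actual parameter list may differ):

class Field;                              /* server class, defined elsewhere */
typedef struct st_key_range key_range;    /* from my_base.h */

class Histogram
{
public:
  /* Endpoints arrive as key images; a NULL endpoint means the range is
     open on that side. */
  virtual double range_selectivity(Field *field,
                                   key_range *min_endp,
                                   key_range *max_endp)= 0;
  virtual ~Histogram() {}
};

Histogram_binary would then store each endpoint's key image into the field and call field->pos_in_interval(min_value, max_value) to recover the fractions it currently expects (see the pos_in_interval discussion in the comments below).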
Issue Links
causes:
- MDEV-26709: JSON histogram may contain more buckets than histogram_size allows (Closed)
- MDEV-26710: Histogram field in mysql.column_stats is too short, JSON histograms get corrupt (Closed)
- MDEV-26711: JSON histograms become invalid due to not properly quoted values (Closed)
- MDEV-26718: Query with cross join, non-indexed column comparison, column with histogram: speed varies 20x depending on histogram type (Closed)
- MDEV-26724: Endless loop in json_escape_to_string upon collecting JSON histograms with empty string in a column (Closed)
- MDEV-26737: Outdated VARIABLE_COMMENT for HISTOGRAM_TYPE in I_S.SYSTEM_VARIABLES and help (Closed)
- MDEV-26750: Estimation for filtered rows is far off with JSON_HB histogram (Closed)
- MDEV-26751: Deficiencies in MTR coverage for JSON histograms (Closed)
- MDEV-26801: Valgrind/MSAN errors in Column_statistics_collected::finish, main.statistics_json fails
Sergei Petrunia
added a comment - Discussion:
https://lists.launchpad.net/maria-developers/msg12743.html
https://lists.launchpad.net/maria-developers/msg12757.html
Sergei Petrunia
added a comment - Code for Milestone-2: https://github.com/MariaDB/server/pull/1871 . It has a memory leak in class Histogram but this will be fixed in the subsequent milestones.
Sergei Petrunia
added a comment -
On the question of "How does 'col IN (const1, const2, ...)' store its lookup array":
It is in item_cmpfunc.h, class in_vector. It has type-specific descendants like in_string, in_longlong,
in_timestamp. It is created through this call:
array= m_comparator.type_handler()->make_in_vector(thd, this, arg_count - 1);
The storage format is specific to each datatype and is not explicitly defined.
The interface for populating the array and making lookups uses Item objects:
virtual void set(uint pos, Item *item)= 0;
bool find(Item *item);
Sergei Petrunia
added a comment - Some notes from debugging
=== Histogram lifecycle ===
=== Loading histogram from mysql.column_stats ===
alloc_statistics_for_table_share() - allocates stats data structures but
doesn't read them.
read_statistics_for_table() - loads the statistics.
Note that this is done in two steps: first, column stats are loaded, including
histogram_type and histogram_size. Then, the histogram itself is loaded.
The second step even does a separate read from mysql.column_stats!
The loading is synchronized using TABLE_STATISTICS_CB (only one thread is doing
the loading)
read_histograms_for_table() - called after read_statistics_for_table(), reads the
histograms.
The synchronization is done through TABLE_STATISTICS_CB (only one thread is doing
the loading).
Also, TABLE_STATISTICS_CB ensures that histograms are only read once (if there
are two TABLE objects, they'll share the same field->read_stats and so
Histogram object).
=== Collecting histogram ===
alloc_statistics_for_table() is used to allocate statistical data structures.
allocation is done on TABLE's MEM_ROOT.
collect_statistics_for_table() - is used to collect the statistics.
(Note: it looks like it's possible for different threads to collect statistics
for different fields simultaneously?)
Then, Column_stat::store_stat_fields() saves the data into the table.
Finally, the histogram is read back from disk before it is used.
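Condensed into call order (function names as above; arguments elided):

/* Loading: */
alloc_statistics_for_table_share();   // allocate, don't read
read_statistics_for_table();          // column stats incl. histogram_type/size
read_histograms_for_table();          // separate read of mysql.column_stats
// both steps serialized via TABLE_STATISTICS_CB

/* Collecting: */
alloc_statistics_for_table();         // on the TABLE's MEM_ROOT
collect_statistics_for_table();
Column_stat::store_stat_fields();     // write to mysql.column_stats
// the histogram is then re-read from disk before first use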
Sergei Petrunia
added a comment (edited) - More on the in-memory data structure for lookups.
It doesn't have to have anything to do with collection-time data structure.
It is basically an ordered array (either of bucket endpoints or of some
structures that contain endpoints plus something else).
The ordered array will be used to make lookups into it, so that we can find the
buckets that overlap with the range we're computing the estimate for (denote as
$SEARCH_RANGE).
It is likely that $SEARCH_RANGE partially overlaps with a bucket, that is
$BUCKET_MIN < $SEARCH_ENDP < $BUCKET_MAX.
In this case, we'll follow existing histograms and assume that $SEARCH_RANGE
occupies a "proportional" part of the bucket.
Currently, the fraction is calculated as follows:
store_key_image_to_rec(field, (uchar *) $SEARCH_ENDP,
field->key_length());
pos = field->pos_in_interval(min_value,
max_value);
Here, the min_value and max_value have the bounds.
They are defined as
Field * Column_statistics::min_value
and pos_in_interval is defined as:
double pos_in_interval(Field *min, Field *max);
There are two implementations:
pos_in_interval_val_real(), which gets the min/max values by calling min/max->val_real() and then operating on the numbers it has got.
pos_in_interval_val_str(), which gets the min and max values by calling
strxfrm(..., target_size=8, min->ptr + data_offset, min->data_length())
and then operating on 8-byte prefixes as integers.
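For concreteness, a simplified sketch of the proportional estimate behind pos_in_interval_val_real() (the clamping and the 0.5 fallback for a zero-width interval are assumptions of this sketch, not necessarily the server's behavior):

/* For an interval [min_v, max_v] and a point x inside it, return the
   share of the interval lying below x. */
static double pos_in_interval_sketch(double min_v, double max_v, double x)
{
  if (max_v <= min_v)
    return 0.5;                 /* no usable width: assume the middle */
  double pos= (x - min_v) / (max_v - min_v);
  return pos < 0.0 ? 0.0 : (pos > 1.0 ? 1.0 : pos);
}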
Sergei Petrunia
added a comment (edited) - Arguments against the in_vector data structure:
1. Its methods to populate/make lookups use "Item *item" as parameters. This will need to change to also allow Field*.
2. The find() method currently only supports equality lookups. We need [X;Y) open/closed ranges.
3. The values are stored in some custom format. The format allows to compare values, but it is unclear how one could implement an analog of Field::pos_in_interval() for it.
4. in_vector is not flexible - there's no way to store any extra data for a bucket.
Arguments for using it: we will be able to reuse
1. Code that allocates the array
2. Encoding functions
Not much.
Sergei Petrunia
added a comment - Leaning towards using a std::vector<KeyTupleFormat> data structure.
Here is a patch https://github.com/MariaDB/server/commit/4d3c434028649a0f14471ed4a1a4c10b97b78eb8 that shows how to do all needed operations:
JSON text -> binary conversion
Comparing two values (so one can find the bucket)
Computing pos_in_interval(A,B,X) - for A <= X <= B, return a fraction specifying how close X is to the endpoints of the [A,B] interval.
Patch usage: https://gist.github.com/spetrunia/a09bd451f650737ba5ea7a6e51bc797a
Michael Okoko
added a comment - The progress so far is documented as part of the GSoC final report at https://gist.github.com/idoqo/ac520943e53f64034beaed4258b62ba5
Sergei Petrunia
added a comment (edited) - elenst, documentation for testing:
The histogram_type variable is used to specify what kind of histogram is collected.
After this patch, it can be set to a new value: JSON_HB.
Then, EITS code will create JSON histograms. Old histogram types remain supported, the optimizer will use whatever histogram is available.
The histograms are kept in mysql.column_stats, the columns of interest are:
hist_type (SINGLE_PREC_HB, DOUBLE_PREC_HB, now also JSON_HB)
histogram: for JSON_HB, this is a JSON document.
The function DECODE_HISTOGRAM also "supports" JSON_HB by returning the histogram unmodified. That is, one can do this:
select DECODE_HISTOGRAM(hist_type, histogram) from mysql.column_stats
and get a readable representation of histogram for any kind of histogram.
Things to test
Collection of histograms. ANALYZE ... PERSISTENT FOR... statements overlapping with other workload. Histogram memory management has changed significantly (This also affects old, binary histograms).
Use by the optimizer. You can look into the MTR tests and find examples like this:
analyze select * from tbl where col1= const
analyze select * from tbl where col1< const
analyze select * from tbl where col1 between const1 and const2
There is only one column, col1, which is not indexed. The interesting parts of the output are filtered (the prediction from the histogram) and r_filtered (the actual value).
There will be differences from the estimates produced with binary histograms. The estimates should generally be better; rounding-error-level regressions are acceptable, but an estimate that is significantly worse than one from the binary histogram warrants an investigation.
Ian Gilfillan
added a comment - Descriptions in the server for system variables need updating as well, for example histogram_type still states "Specifies type of the histograms created by ANALYZE. Possible values are: SINGLE_PREC_HB - single precision height-balanced, DOUBLE_PREC_HB - double precision height-balanced." even though JSON_HB is valid as well.
Hi there!
I would like to work on this issue and then submit my proposal.
How can I configure my system to start working on this?