[MDEV-7648] Extra data in ANALYZE FORMAT=JSON $stmt Created: 2015-02-28 Updated: 2018-04-10
| Status: | Open |
| Project: | MariaDB Server |
| Component/s: | Optimizer |
| Fix Version/s: | None |
| Type: | Task | Priority: | Major |
| Reporter: | Sergei Petrunia | Assignee: | Sergei Petrunia |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | analyze-stmt |
| Issue Links: |
| Description |
ANALYZE FORMAT=JSON $stmt can (and should) print more data about execution. Requested things:
| Comments |
| Comment by Sergei Petrunia [ 2015-02-28 ] |
Studying how to get execution time: when the server is started with performance_schema=ON (and no other changes from the defaults), it actually does count the time spent accessing tables. Proof: put a breakpoint in start_table_io_wait_v1 and see it execute:
Put a breakpoint in end_table_io_wait_v1 and see this:
| Comment by Sergei Petrunia [ 2015-02-28 ] |
Looking at P_S code... It has The Interface (names starting with PSI_xxx) and The Implementation (names starting with PFS_xxx). There are provisions for having two implementations, but only the _v1 implementation exists; the _v2 implementation is a dummy stub. P_S data structures:
The important part: end_table_io_wait_v1 collects its statistics in a PFS_table. A PFS_table is like a TABLE: it is not shared with other threads, and a self-join will have two PFS_table objects. That is, the timing data it collects is truly per-table.
| Comment by Sergei Petrunia [ 2015-02-28 ] |
P_S being on doesn't imply that table reads will be timed. One can manually disable timing:
or:
| Comment by Sergei Petrunia [ 2015-02-28 ] |
PFS_table has:
PFS_table_stat has:
then
We need this:
(We need it for all indexes so that things like index_merge and DS-MRR work. Hmm, index_merge actually uses different PFS_table objects.)
| Comment by Sergei Petrunia [ 2015-03-01 ] |
Looking at what units are used in PFS_single_stat...
returns data in picoseconds. The data is converted using a "normalizer":
wait_timer is a global variable (it can be set through P_S.setup_timers).
The normalizer is used here
like so:
The resulting data is in picoseconds.
| Comment by Sergei Petrunia [ 2015-03-01 ] |
Tracking table read times will require start/stop points. The choice of the starting point is obvious:
As for the ending point, there are options:
| Comment by Sergei Petrunia [ 2015-03-02 ] |
Hit an interesting property in P_S: http://bugs.mysql.com/bug.php?id=76100. It doesn't directly affect this task.
| Comment by Sergei Petrunia [ 2015-03-05 ] |
OK, so we can now collect the time spent accessing the tables.
It's tempting to re-use those to collect the time spent in SELECTs, but we can't do that, because
shows that stage tracking is not enabled by default.
| Comment by Sergei Petrunia [ 2015-03-05 ] |
Examples of output with r_time_ms, the time spent accessing the table:
ANALYZE in PostgreSQL: PostgreSQL also supports a kind of ANALYZE FORMAT=JSON (their syntax is explain (analyze, format json) select):
| Comment by Sergei Petrunia [ 2015-03-25 ] |
Switched from using P_S to using our own timing code.