Details
- Type: Bug
- Status: Closed (View Workflow)
- Priority: Major
- Resolution: Fixed
- Affects Version: 10.6
Description
Consider the attached testcase mdev34190-test.diff.
It creates a table with 160K rows, each row being about 1K in size.
This should take about 10K InnoDB disk pages (assuming a page fill ratio of 100%).
Then I restart the server and run ANALYZE on a full table scan.
I take care to prevent any other activity:
- Disable buffer pool save/load
- Disable background statistics collection
- Following Marko's advice, use innodb_fast_shutdown=0
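The precautions above roughly correspond to the following server settings (a sketch; these are the standard MariaDB/InnoDB option names, and mapping them to the steps is my assumption, not taken from the report):

```ini
# my.cnf fragment matching the precautions above (assumed mapping)
[mysqld]
innodb_buffer_pool_dump_at_shutdown = OFF   # disable buffer pool save
innodb_buffer_pool_load_at_startup  = OFF   # disable buffer pool load
innodb_stats_auto_recalc            = OFF   # disable background statistics collection
innodb_fast_shutdown                = 0     # full purge and flush on shutdown
```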
I get:
show status like 'innodb_pages_read';

Variable_name      Value
Innodb_pages_read  143

analyze format=json
select * from t1 force index (PRIMARY)
order by a asc, b asc, c asc, d asc;

ANALYZE
{
  "query_block": {
    "select_id": 1,
    "r_loops": 1,
    "r_total_time_ms": 910.051,
    "table": {
      "table_name": "t1",
      "access_type": "index",
      "key": "PRIMARY",
      "key_length": "1028",
      "used_key_parts": ["a", "b", "c", "d"],
      "r_loops": 1,
      "rows": 1,
      "r_rows": 163840,
      "r_table_time_ms": 794.313,
      "r_other_time_ms": 115.711,
      "r_engine_stats": {
        "pages_accessed": 11708,
        "pages_read_count": 271,
        "pages_read_time_ms": 13.03
      },
      "filtered": 100,
      "r_filtered": 100,
      "using_index": true
    }
  }
}
Only 271 pages were read? And reading them took just 13 ms of the 794 ms spent in the engine?
show status like 'innodb_pages_read';

Variable_name      Value
Innodb_pages_read  12112

Note that Innodb_pages_read shows a more reasonable number of 12K.
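As a back-of-the-envelope check on these numbers (a sketch using the figures reported above; the ~1 KiB row size and the default 16 KiB InnoDB page size are assumptions):

```python
# Sanity check of the page counts reported above.
# Assumptions: ~1 KiB per row, default 16 KiB InnoDB page size, 100% fill.
ROWS = 163840
ROW_BYTES = 1024
PAGE_BYTES = 16 * 1024

expected_pages = ROWS * ROW_BYTES // PAGE_BYTES  # roughly 10K leaf pages
status_delta = 12112 - 143  # Innodb_pages_read after minus before the scan
reported = 271              # r_engine_stats.pages_read_count (forward scan)

print(expected_pages)  # 10240 -- matches the ~10K estimate
print(status_delta)    # 11969 -- close to pages_accessed (11708)
print(reported)        # far below the status counter delta
```

So the global status counter agrees with the size estimate, while the per-statement counter is off by roughly a factor of 40.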
If I get the same query to do a backward scan by using ORDER BY ... DESC, I get more reasonable numbers:
analyze format=json
select * from t1 force index (PRIMARY)
order by a desc, b desc, c desc, d desc;

ANALYZE
{
  "query_block": {
    "select_id": 1,
    "r_loops": 1,
    "r_total_time_ms": 1129.6,
    "table": {
      "table_name": "t1",
      "access_type": "index",
      "key": "PRIMARY",
      "key_length": "1028",
      "used_key_parts": ["a", "b", "c", "d"],
      "r_loops": 1,
      "rows": 1,
      "r_rows": 163841,
      "r_table_time_ms": 1036.171,
      "r_other_time_ms": 93.424,
      "r_engine_stats": {
        "pages_accessed": 11708,
        "pages_read_count": 11707,
        "pages_read_time_ms": 315.503
      },
      "filtered": 100,
      "r_filtered": 100,
      "using_index": true
    }
  }
}
Attachments
- mdev34190-test.diff
- mdev34190-poc1.diff
Issue Links
- relates to
  - MDEV-31558 Add InnoDB engine information to the slow query log (Closed)
  - MDEV-31577 Make ANALYZE FORMAT=JSON print innodb stats (Closed)
  - MDEV-34125 ANALYZE FORMAT=JSON: r_engine_stats.pages_read_time_ms has wrong scale (Closed)
Activity
Field | Original Value | New Value |
---|---|---|
Attachment | mdev34190-test.diff [ 73503 ] |
Attachment | mdev34190-poc1.diff [ 73504 ] |
Link |
This issue relates to |
Link |
This issue relates to |
Link |
This issue relates to |
Affects Version/s | 10.6 [ 24028 ] |
Fix Version/s | 10.6 [ 24028 ] |
Component/s | Optimizer [ 10200 ] | |
Component/s | Storage Engine - InnoDB [ 10129 ] |
Labels | analyze-stmt |
Assignee | Sergei Petrunia [ psergey ] |
Status | Open [ 1 ] | In Progress [ 3 ] |
Assignee | Sergei Petrunia [ psergey ] | Michael Widenius [ monty ] |
Status | In Progress [ 3 ] | In Review [ 10002 ] |
Status | In Review [ 10002 ] | Stalled [ 10000 ] |
Assignee | Michael Widenius [ monty ] | Sergei Petrunia [ psergey ] |
Fix Version/s | 10.6.19 [ 29833 ] | |
Fix Version/s | 10.11.9 [ 29834 ] | |
Fix Version/s | 11.1.6 [ 29835 ] | |
Fix Version/s | 11.2.5 [ 29836 ] | |
Fix Version/s | 11.4.3 [ 29837 ] | |
Fix Version/s | 11.5.2 [ 29838 ] | |
Fix Version/s | 10.6 [ 24028 ] | |
Resolution | Fixed [ 1 ] | |
Status | Stalled [ 10000 ] | Closed [ 6 ] |
Looking for occurrences where Innodb_pages_read gets incremented but mariadb_increment_pages_read() is not called...
I find this:
(gdb) wher
#0 buf_page_init_for_read (mode=132, page_id={m_id = 21474836498}, zip_size=0, unzip=false) at /home/psergey/dev-git2/10.6-mdev34125-fix-analyze-timings/storage/innobase/buf/buf0rea.
#1 0x00005555569caec4 in buf_read_page_low (space=0x7fffcc047ab8, sync=false, mode=132, page_id={m_id = 21474836498}, zip_size=0, unzip=false) at /home/psergey/dev-git2/10.6-mdev3412
#2 0x00005555569cbf60 in buf_read_ahead_linear (page_id={m_id = 21474836495}, zip_size=0, ibuf=false) at /home/psergey/dev-git2/10.6-mdev34125-fix-analyze-timings/storage/innobase/bu
#3 0x0000555556983d7f in btr_pcur_move_to_next_page (cursor=0x7fffcc02eb30, mtr=0x7fffea013af0) at /home/psergey/dev-git2/10.6-mdev34125-fix-analyze-timings/storage/innobase/btr/btr0
#4 0x00005555568b2ae5 in row_search_mvcc (buf=0x7fffcc02c400 "\374\060\071bdcded-142a-11ef-a9a6-00d86110704409bdcded-142a-11ef-a9a6-00d86110704409bdcded-142a-11ef-a9a6-00d86110704409
#5 0x00005555566ab55b in ha_innobase::general_fetch (this=0x7fffcc02b240, buf=0x7fffcc02c400 "\374\060\071bdcded-142a-11ef-a9a6-00d86110704409bdcded-142a-11ef-a9a6-00d86110704409bdcd
#6 0x00005555566ab7e2 in ha_innobase::index_next (this=0x7fffcc02b240, buf=0x7fffcc02c400 "\374\060\071bdcded-142a-11ef-a9a6-00d86110704409bdcded-142a-11ef-a9a6-00d86110704409bdcded-
#7 0x0000555556281ba6 in handler::ha_index_next (this=0x7fffcc02b240, buf=0x7fffcc02c400 "\374\060\071bdcded-142a-11ef-a9a6-00d86110704409bdcded-142a-11ef-a9a6-00d86110704409bdcded-1
#8 0x0000555555f5d78d in join_read_next (info=0x7fffcc01b6e8) at /home/psergey/dev-git2/10.6-mdev34125-fix-analyze-timings/sql/sql_select.cc:22990
#9 0x0000555555e11fb9 in READ_RECORD::read_record (this=0x7fffcc01b6e8) at /home/psergey/dev-git2/10.6-mdev34125-fix-analyze-timings/sql/records.h:81
buf_read_page_low() calls mariadb_increment_pages_read(), but only when it is called with sync=true.
buf_read_ahead_linear() calls buf_read_page_low(... sync=false ...), so nothing gets incremented for read-ahead pages.
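The counting bug in this call path can be modeled in a few lines (a simplified sketch, not actual InnoDB code; the function names merely mirror those in the backtrace):

```python
# Simplified model of the counting bug: the per-statement counter is bumped
# only on the synchronous path, so pages brought in by linear read-ahead
# (sync=False) are never attributed to the statement's r_engine_stats.
class Counters:
    def __init__(self):
        self.innodb_pages_read = 0  # global status counter (always bumped)
        self.stmt_pages_read = 0    # per-statement counter (buggy)

def buf_read_page_low(c, sync):
    c.innodb_pages_read += 1        # global counter: incremented on every read
    if sync:
        c.stmt_pages_read += 1      # bug: only counted for synchronous reads

def buf_read_ahead_linear(c, pages):
    for _ in range(pages):
        buf_read_page_low(c, sync=False)  # read-ahead issues async reads

c = Counters()
buf_read_page_low(c, sync=True)     # one synchronous page read
buf_read_ahead_linear(c, pages=63)  # one read-ahead batch of 63 pages
print(c.innodb_pages_read, c.stmt_pages_read)  # 64 1
```

This reproduces the observed asymmetry: the forward scan, which benefits from read-ahead, reports almost no per-statement page reads, while the backward scan, which reads pages synchronously one by one, reports nearly all of them.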