MariaDB Server / MDEV-34190

ANALYZE: r_engine_stats shows unrealistically low pages_read_count

Details

    Description

Consider the attached testcase, mdev34190-test.diff.

      It creates a table with 160K rows, each row being about 1KB.
      This should take about 10K InnoDB disk pages (assuming a 100% page fill ratio and the default 16KB page size).
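The expected page count is back-of-the-envelope arithmetic (the 16KB default InnoDB page size and 100% fill ratio are assumptions, not measurements):

```python
# Rough estimate of the table's on-disk footprint (assumed sizes).
rows = 160 * 1024        # ~160K rows (matches r_rows = 163840 below)
row_size = 1024          # ~1 KB per row
page_size = 16 * 1024    # default InnoDB page size

pages = rows * row_size // page_size  # assumes 100% page fill ratio
print(pages)  # 10240, i.e. ~10K pages
```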

      Then I restart the server and run an ANALYZE of a full table scan.

      I take care to prevent any other activity:

      • Disable buffer pool save/load
      • Disable background statistics collection
      • Following Marko's advice, use innodb_fast_shutdown=0
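In my.cnf terms, these precautions could look roughly like the fragment below (the exact option set used is in the attached diff; this is only a sketch using standard InnoDB variable names):

```ini
[mysqld]
# Disable buffer pool save/load across restarts
innodb_buffer_pool_dump_at_shutdown=OFF
innodb_buffer_pool_load_at_startup=OFF
# Disable background statistics collection
innodb_stats_auto_recalc=OFF
# Per Marko's advice: full purge/flush on shutdown
innodb_fast_shutdown=0
```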

      I get:

      show status like 'innodb_pages_read';
      Variable_name   Value
      Innodb_pages_read       143
      analyze format=json 
      select * from t1 force index (PRIMARY) 
      order by a asc, b asc, c asc, d asc;
      ANALYZE
      

      {
        "query_block": {
          "select_id": 1,
          "r_loops": 1,
          "r_total_time_ms": 910.051,
          "table": {
            "table_name": "t1",
            "access_type": "index",
            "key": "PRIMARY",
            "key_length": "1028",
            "used_key_parts": ["a", "b", "c", "d"],
            "r_loops": 1,
            "rows": 1,
            "r_rows": 163840,
            "r_table_time_ms": 794.313,
            "r_other_time_ms": 115.711,
            "r_engine_stats": {
              "pages_accessed": 11708,
              "pages_read_count": 271,
              "pages_read_time_ms": 13.03
            },
            "filtered": 100,
            "r_filtered": 100,
            "using_index": true
          }
        }
      }
      

      Only 271 pages were read? And reading them took just 13 ms of the 794 ms spent in the engine?

      show status like 'innodb_pages_read';
      Variable_name   Value
      Innodb_pages_read       12112
      

      Note that Innodb_pages_read shows a more reasonable number: it grew from 143 to 12112, i.e. about 12K pages were actually read.
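Putting the two counters side by side (plain arithmetic on the numbers above) shows how far apart they are:

```python
# Counters taken verbatim from the output above.
global_before = 143       # Innodb_pages_read before the query
global_after = 12112      # Innodb_pages_read after the query
pages_accessed = 11708    # r_engine_stats.pages_accessed
pages_read_count = 271    # r_engine_stats.pages_read_count

delta = global_after - global_before
print(delta)                      # 11969: close to pages_accessed,
print(delta // pages_read_count)  # but ~44x the per-query read count
```

So the server-wide counter says nearly every accessed page was read from disk, while r_engine_stats claims only 271 reads.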

      If I make the same query do a backward scan by using ORDER BY ... DESC, I get more reasonable numbers:

      analyze format=json 
      select * from t1 force index (PRIMARY) 
      order by a desc, b desc, c desc, d desc;
      

      ANALYZE
      {
        "query_block": {
          "select_id": 1,
          "r_loops": 1,
          "r_total_time_ms": 1129.6,
          "table": {
            "table_name": "t1",
            "access_type": "index",
            "key": "PRIMARY",
            "key_length": "1028",
            "used_key_parts": ["a", "b", "c", "d"],
            "r_loops": 1,
            "rows": 1,
            "r_rows": 163841,
            "r_table_time_ms": 1036.171,
            "r_other_time_ms": 93.424,
            "r_engine_stats": {
              "pages_accessed": 11708,
              "pages_read_count": 11707,
              "pages_read_time_ms": 315.503
            },
            "filtered": 100,
            "r_filtered": 100,
            "using_index": true
          }
        }
      }
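For contrast, the forward and backward runs report wildly different read ratios over the same 11708 accessed pages:

```python
pages_accessed = 11708
forward_reads = 271      # pages_read_count, ORDER BY ... ASC scan
backward_reads = 11707   # pages_read_count, ORDER BY ... DESC scan

# Backward scan: nearly every accessed page counted as a disk read,
# as expected with a cold buffer pool. Forward scan: barely 2%.
print(backward_reads / pages_accessed)  # ~0.9999
print(forward_reads / pages_accessed)   # ~0.0231
```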
      

      Attachments

        Issue Links

          Activity

            psergei Sergei Petrunia created issue -
            psergei Sergei Petrunia made changes -
            Field Original Value New Value
            Description Consider an attached testcase.

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            Consider an attached testcase

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            psergei Sergei Petrunia made changes -
            Attachment mdev34190-test.diff [ 73503 ]
            psergei Sergei Petrunia made changes -
            Description Consider an attached testcase

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            psergei Sergei Petrunia made changes -
            Description Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read? which took just 13 ms of 794 ms spent in the engine?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            psergei Sergei Petrunia made changes -
            Description Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read? which took just 13 ms of 794 ms spent in the engine?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.
            Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read? which took just 13 ms of 794 ms spent in the engine?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.

            If I get the same query to do a backward scan by using ORDER BY DESC, I get a more reasonable numbers:
            {code:json}
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a desc, b desc, c desc, d desc;
            ANALYZE
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 1129.6,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163841,
                  "r_table_time_ms": 1036.171,
                  "r_other_time_ms": 93.424,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 11707,
                    "pages_read_time_ms": 315.503
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}
            psergei Sergei Petrunia made changes -
            Description Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read? which took just 13 ms of 794 ms spent in the engine?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.

            If I get the same query to do a backward scan by using ORDER BY DESC, I get a more reasonable numbers:
            {code:json}
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a desc, b desc, c desc, d desc;
            ANALYZE
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 1129.6,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163841,
                  "r_table_time_ms": 1036.171,
                  "r_other_time_ms": 93.424,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 11707,
                    "pages_read_time_ms": 315.503
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}
            Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read? which took just 13 ms of 794 ms spent in the engine?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.

            If I get the same query to do a backward scan by using ORDER BY DESC, I get a more reasonable numbers:
            {code:json}
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a desc, b desc, c desc, d desc;
            {code}
            {code:json}
            ANALYZE
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 1129.6,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163841,
                  "r_table_time_ms": 1036.171,
                  "r_other_time_ms": 93.424,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 11707,
                    "pages_read_time_ms": 315.503
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}
            psergei Sergei Petrunia made changes -
            Description Consider an attached testcase [^mdev34190-test.diff]

            It creates a table with 160K rows, each row being about 1K.
            This should take about 10K InnoDB disk pages (assuming page fill ratio=100%).

            Then, I restart the server, and do an ANALYZE for a full table scan.

            I take care to prevent any other activity:
            - Disable buffer pool save/load
            - Disable background statistics collection
            - Following Marko's advise, use innodb_fast_shutdown=0

            I get:
            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 143
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a asc, b asc, c asc, d asc;
            ANALYZE
            {code}

            {code:js}
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 910.051,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163840,
                  "r_table_time_ms": 794.313,
                  "r_other_time_ms": 115.711,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 271,
                    "pages_read_time_ms": 13.03
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}

            only 271 pages were read? which took just 13 ms of 794 ms spent in the engine?

            {code}
            show status like 'innodb_pages_read';
            Variable_name Value
            Innodb_pages_read 12112
            {code}

            Note that Innodb_pages_read shows a more reasonable number of 12K.

            If I get the same query to do a backward scan by using ORDER BY DESC, I get a more reasonable numbers:
            {code:json}
            analyze format=json
            select * from t1 force index (PRIMARY)
            order by a desc, b desc, c desc, d desc;
            {code}
            {code:json}
            ANALYZE
            {
              "query_block": {
                "select_id": 1,
                "r_loops": 1,
                "r_total_time_ms": 1129.6,
                "table": {
                  "table_name": "t1",
                  "access_type": "index",
                  "key": "PRIMARY",
                  "key_length": "1028",
                  "used_key_parts": ["a", "b", "c", "d"],
                  "r_loops": 1,
                  "rows": 1,
                  "r_rows": 163841,
                  "r_table_time_ms": 1036.171,
                  "r_other_time_ms": 93.424,
                  "r_engine_stats": {
                    "pages_accessed": 11708,
                    "pages_read_count": 11707,
                    "pages_read_time_ms": 315.503
                  },
                  "filtered": 100,
                  "r_filtered": 100,
                  "using_index": true
                }
              }
            }
            {code}
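Cross-checking the two scans against the global counter makes the discrepancy explicit (a small sketch using only the numbers from the outputs above; cold buffer pool, no other activity):

```python
# Compare r_engine_stats.pages_read_count with the Innodb_pages_read delta
# around the forward (ASC) scan. Numbers are taken from the outputs above.
innodb_pages_read_before = 143
innodb_pages_read_after = 12112
server_delta = innodb_pages_read_after - innodb_pages_read_before

forward_scan_reported = 271     # r_engine_stats for the ASC scan
backward_scan_reported = 11707  # r_engine_stats for the DESC scan (separate cold run)

# The DESC scan roughly agrees with the server-wide counter; the ASC scan
# under-reports page reads by more than 40x.
print(server_delta, round(server_delta / forward_scan_reported))  # prints 11969 44
```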
            psergei Sergei Petrunia made changes -
            Attachment mdev34190-poc1.diff [ 73504 ]
            psergei Sergei Petrunia made changes -
            Affects Version/s 10.6 [ 24028 ]
            psergei Sergei Petrunia made changes -
            Fix Version/s 10.6 [ 24028 ]
            psergei Sergei Petrunia made changes -
            Component/s Optimizer [ 10200 ]
            Component/s Storage Engine - InnoDB [ 10129 ]
            psergei Sergei Petrunia made changes -
            Labels analyze-stmt
            psergei Sergei Petrunia made changes -
            Assignee Sergei Petrunia [ psergey ]
            psergei Sergei Petrunia made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            psergei Sergei Petrunia made changes -
            Assignee Sergei Petrunia [ psergey ] Michael Widenius [ monty ]
            Status In Progress [ 3 ] In Review [ 10002 ]
            monty Michael Widenius made changes -
            Status In Review [ 10002 ] Stalled [ 10000 ]
            monty Michael Widenius made changes -
            Assignee Michael Widenius [ monty ] Sergei Petrunia [ psergey ]
            psergei Sergei Petrunia made changes -
            Fix Version/s 10.6.19 [ 29833 ]
            Fix Version/s 10.11.9 [ 29834 ]
            Fix Version/s 11.1.6 [ 29835 ]
            Fix Version/s 11.2.5 [ 29836 ]
            Fix Version/s 11.4.3 [ 29837 ]
            Fix Version/s 11.5.2 [ 29838 ]
            Fix Version/s 10.6 [ 24028 ]
            Resolution Fixed [ 1 ]
            Status Stalled [ 10000 ] Closed [ 6 ]

People

  Assignee: Sergei Petrunia
  Reporter: Sergei Petrunia
  Votes: 0
  Watchers: 2
