Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Cannot Reproduce
- Affects Version: 10.2.13
- Fix Version: None
- Environment: CentOS 7
Description
Even with the following settings:
+--------------------------------------+--------------+
| Variable_name                        | Value        |
+--------------------------------------+--------------+
| innodb_stats_auto_recalc             | ON           |
| innodb_stats_include_delete_marked   | OFF          |
| innodb_stats_method                  | nulls_equal  |
| innodb_stats_modified_counter        | 0            |
| innodb_stats_on_metadata             | OFF          |
| innodb_stats_persistent              | ON           |
| innodb_stats_persistent_sample_pages | 999999999999 |
| innodb_stats_sample_pages            | 999999999999 |
| innodb_stats_traditional             | ON           |
| innodb_stats_transient_sample_pages  | 999999999999 |
| innodb_status_output                 | OFF          |
| innodb_status_output_locks           | OFF          |
+--------------------------------------+--------------+
|
nothing forces ANALYZE TABLE to accurately update the table_rows field in information_schema.tables, and there is no known way for the user to update that field directly, which would certainly be a solution.
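The behaviour can be checked roughly like this (the schema and table names here are hypothetical placeholders):

```sql
-- Even after running ANALYZE TABLE, table_rows for an InnoDB table
-- is a statistics-based estimate, not an exact row count.
ANALYZE TABLE mydb.mytable;

SELECT table_rows
FROM information_schema.tables
WHERE table_schema = 'mydb'
  AND table_name = 'mytable';

-- Compare against the exact count to see the discrepancy:
SELECT COUNT(*) FROM mydb.mytable;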
I need that field to be 100% accurate because I use it to retrieve a random row from a large table, and there is no faster way than computing RAND()*table_rows. The tables never change; currently I run ANALYZE TABLE once a month, after loading the data from scratch.
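For reference, the random-row lookup described above could be sketched as follows. This assumes a dense AUTO_INCREMENT id column starting at 1 with no gaps (so table_rows can be used to pick an id directly); the names mydb, mytable, and id are hypothetical:

```sql
-- Pick a random id in [1, table_rows] and fetch that row by primary key.
-- Only correct if ids are contiguous and table_rows is exact.
SELECT t.*
FROM mytable t
JOIN (SELECT FLOOR(1 + RAND() * table_rows) AS rid
      FROM information_schema.tables
      WHERE table_schema = 'mydb'
        AND table_name = 'mytable') r
  ON t.id = r.rid;
```

This is why an inaccurate table_rows matters here: an estimate that is too high can produce an id beyond the last row (no match), and one that is too low makes the tail of the table unreachable.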