Details
- Type: Task
- Status: Open
- Priority: Major
- Resolution: Unresolved
Description
If check_tmp_key fails because there are too many key parts, can we find a way to pick the most efficient subset of key parts in that case? Simply picking any subset would already be better than picking nothing, but can we measure the candidates and pick the most efficient subset?
We'll need to know n_distinct for the columns of the derived table (columns with high n_distinct are better candidates), similar to what's needed for MDEV-36321.
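To make the idea concrete, here is a minimal sketch of the selection heuristic described above, assuming per-column n_distinct estimates are available and there is a hard limit on the number of key parts. The function name `pick_key_parts` and the data shapes are illustrative, not MariaDB's actual API; a real implementation would live in the server's C++ code and consult EITS statistics.

```python
def pick_key_parts(columns, n_distinct, max_parts):
    """Greedily choose up to max_parts columns, preferring high n_distinct.

    columns:    candidate key columns of the derived table
    n_distinct: estimated number of distinct values per column
    max_parts:  limit that caused check_tmp_key to fail when exceeded
    """
    # Columns with more distinct values narrow the lookup more, so rank
    # candidates by n_distinct (descending) and keep the top max_parts.
    ranked = sorted(columns, key=lambda c: n_distinct[c], reverse=True)
    return ranked[:max_parts]

cols = ["a", "b", "c", "d"]
nd = {"a": 10, "b": 1000, "c": 5, "d": 200}
print(pick_key_parts(cols, nd, 2))  # → ['b', 'd']
```

A greedy ranking like this ignores correlation between columns (two highly distinct but correlated columns may filter little beyond the first), so a fuller solution would estimate the combined selectivity of each subset, but even this simple heuristic beats creating no key at all.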
Issue Links
- relates to MDEV-36321: Indexes on derived tables with GROUP BY produce wrong out_rows estimates (Approved)
- relates to MDEV-37044: derived_with_keys optimization not applied where it should (Closed)