There are several problems with the function btr_estimate_n_rows_in_range_low(), which implements the core of ha_innobase::records_in_range(). In addition to the systematic error that was reported in MDEV-19424, there are race conditions.
Furthermore, according to Igor Babaev, the entire function handler::records_in_range() should be replaced with something that returns the position of a key, as a floating-point number between 0 (start of index) and 1 (end of index). Perhaps we could take a collection of keys as a parameter, so that we can minimize the number of operations on index and page latches?
The function btr_estimate_n_rows_in_range_low() suffers from race conditions because it does not keep the upper levels of the index tree protected between individual dives into the tree. At least in some cases, we would detect that a page split or merge took place, and return completely made-up statistics (rows_in_range_arbitrary_ret_val = 10). If these race-condition-prone accesses are really needed for performance reasons, then we should perhaps retry such an optimistic access a few times, and ultimately fall back to latching the upper-level pages.
Finally, it seems that it would be best to remove BTR_ESTIMATE and implement the relevant part of the btr_cur_search_to_nth_level() logic directly in this function. Perhaps we could also refactor btr_cur_search_to_nth_level() to use some common inline functions, to avoid source-code duplication.