Details
- Type: Bug
- Status: Closed
- Priority: Minor
- Resolution: Won't Fix
Description
The following query

SELECT b FROM t1, t2 AS alias1, t2 AS alias2
WHERE alias2.c = SOME (
    SELECT b FROM t1, t2
    WHERE b != alias1.d )
  OR b = alias1.d
LIMIT_ROWS_EXAMINED 5000

examines at least 15667 rows on the test data (small dataset).
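The report does not state how the row count was measured; a plausible way to check it (an assumption, not the reporter's method) is to compare the session Handler_read% status counters before and after running the query:

```sql
-- Hedged sketch: estimating rows examined via session Handler counters.
-- FLUSH STATUS resets the session counters; the sum of the
-- Handler_read_* values afterwards approximates the rows examined.
FLUSH STATUS;

SELECT b FROM t1, t2 AS alias1, t2 AS alias2
WHERE alias2.c = SOME (
    SELECT b FROM t1, t2
    WHERE b != alias1.d )
  OR b = alias1.d
LIMIT_ROWS_EXAMINED 5000;

SHOW SESSION STATUS LIKE 'Handler_read%';
```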
EXPLAIN:

id  select_type         table   type   possible_keys  key      key_len  ref   rows  filtered  Extra
1   PRIMARY             alias1  ALL    NULL           NULL     NULL     NULL  100   100.00
1   PRIMARY             alias2  index  NULL           PRIMARY  4        NULL  100   100.00    Using index; Using join buffer (flat, BNL join)
1   PRIMARY             t1      ALL    NULL           NULL     NULL     NULL  106   100.00    Using where; Using join buffer (flat, BNL join)
2   DEPENDENT SUBQUERY  t2      index  NULL           PRIMARY  4        NULL  100   100.00    Using index
2   DEPENDENT SUBQUERY  t1      ALL    NULL           NULL     NULL     NULL  106   100.00    Using where; Using join buffer (flat, BNL join)

Warnings:
Note 1276 Field or reference 'test.alias1.d' of SELECT #2 was resolved in SELECT #1
Note 1003 select `test`.`t1`.`b` AS `b` from `test`.`t1` join `test`.`t2` `alias1` join `test`.`t2` `alias2` where (<expr_cache><`test`.`alias2`.`c`,`test`.`alias1`.`d`>(<in_optimizer>(`test`.`alias2`.`c`,<exists>(select `test`.`t1`.`b` from `test`.`t1` join `test`.`t2` where ((`test`.`t1`.`b` <> `test`.`alias1`.`d`) and (<cache>(`test`.`alias2`.`c`) = `test`.`t1`.`b`))))) or (`test`.`t1`.`b` = `test`.`alias1`.`d`))
|
Minimal optimizer_switch:
in_to_exists=on,subquery_cache=on

Full optimizer_switch (default):
index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on
bzr version-info:
revision-id: timour@askmonty.org-20120213145248-tlvsd794o83odf2r
date: 2012-02-13 16:52:48 +0200
build-date: 2012-02-20 05:07:05 +0400
revno: 3417
Test case:

SET optimizer_switch = 'in_to_exists=on,subquery_cache=on';

CREATE TABLE t1 ( a INT AUTO_INCREMENT PRIMARY KEY, b INT );
INSERT INTO t1 (b) VALUES
(0),(2),(0),(1),(2),(NULL),(3),(8),(4),(4),(NULL),(9),(2),
(6),(6),(5),(3),(2),(6),(2),(3),(9),(9),(1),(7),(0),(0),
(112),(0),(4),(6),(0),(8),(4),(2),(1),(3),(2),(4),(144),
(7),(6),(3),(3),(6),(6),(9),(4),(6),(2),(28),(5),(NULL),
(8),(8),(0),(9),(NULL),(5),(172),(2),(244),(7),(0),(1),
(6),(NULL),(8),(8),(5),(5),(5),(7),(30),(NULL),(6),(8),
(NULL),(9),(2),(5),(4),(4),(0),(9),(1),(8),(9),(0),(7),
(2),(0),(5),(4),(5),(6),(NULL),(2),(9),(3),(0),(6),(5),
(8),(4),(NULL);

CREATE TABLE t2 ( c INT AUTO_INCREMENT PRIMARY KEY, d INT );
INSERT INTO t2 (d) VALUES
(2),(6),(6),(8),(2),(6),(8),(3),(3),(9),(6),(8),(3),(3),
(NULL),(7),(4),(7),(7),(8),(6),(3),(210),(1),(2),(251),(4),
(4),(9),(4),(NULL),(4),(NULL),(1),(6),(2),(NULL),(4),(248),
(4),(8),(4),(5),(9),(2),(4),(211),(2),(4),(125),(9),(4),
(8),(8),(NULL),(8),(4),(8),(NULL),(1),(3),(5),(8),(7),(2),
(NULL),(3),(NULL),(9),(1),(0),(0),(8),(NULL),(9),(1),(9),
(7),(2),(1),(9),(5),(4),(6),(5),(2),(4),(0),(3),(0),(6),
(1),(9),(4),(9),(4),(1),(2),(8),(1);

SELECT b FROM t1, t2 AS alias1, t2 AS alias2
WHERE alias2.c = SOME (
    SELECT b FROM t1, t2
    WHERE b != alias1.d )
  OR b = alias1.d
LIMIT_ROWS_EXAMINED 5000;
Issue Links
- relates to MDEV-28: LIMIT ROWS EXAMINED clause to limit the number of rows examined during SELECT processing (Closed)
The big "momentum" of query processing in this case is a result of using a blocking join algorithm. If we set join_cache_level=0, the executor stops almost immediately after the limit is reached.

This problem could be fixed by going through all variants of the join algorithms and adding careful tests for thd->killed; however, I am not sure that is a good idea. In any case, there will always be algorithms that accumulate some data in buffers. If we want the executor to stop as soon as possible after reaching the limit, we would have to add tests for thd->killed all over the place.
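The workaround mentioned above can be sketched as follows; this is an illustration of the comment's observation, not a recommended fix (disabling the join cache can slow down other queries in the session):

```sql
-- Disable the blocking (buffered) join algorithms so the executor
-- checks the LIMIT_ROWS_EXAMINED cutoff between individual rows
-- rather than after filling a join buffer.
SET SESSION join_cache_level = 0;

SELECT b FROM t1, t2 AS alias1, t2 AS alias2
WHERE alias2.c = SOME (
    SELECT b FROM t1, t2
    WHERE b != alias1.d )
  OR b = alias1.d
LIMIT_ROWS_EXAMINED 5000;
```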