Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 10.5.11, 10.5.16
- Fix Version/s: None
- Component/s: None
- Environment: Debian / FC
Description
Hi guys,
I found some rather unexpected behaviour while trying to optimize some queries. Basically I have ID->LABEL mapping tables and another large statistical InnoDB table with several billion rows.
To optimize, I changed one large query fetching data for e.g. a single month into 30 per-day queries, running 10 at a time and merging the results on the client. This works around 10x faster.
The interesting thing is that it only works when all tables are InnoDB or TokuDB. For MyISAM and Aria, when I run 10 threads simultaneously, the only effect is that the load is ~8x normal; the execution time is as slow as if the 10 queries had been run serially, not in parallel. With Aria results are around 20% better than MyISAM; with InnoDB it's close to 10 times (1000%) faster.
Unfortunately I can't test with a recent server version right now, as the main table is close to 6 billion rows and over half a TB in size. I could build a test case if anyone is interested in looking into this. The question is whether MyISAM/Aria are still being developed, or is it abandonware and we should just use InnoDB for everything?
foreach ($dates as $date) {
    $data = dbpool::fill_array("SELECT _subnet, user_id, sum(clicks) as clicks, sum(spending) as spending
        FROM stats_creative
        INNER JOIN {$this->tmp_domains} ON domain_id = {$this->tmp_domains}.id
        INNER JOIN {$this->tmp_creative_to_uid} ON creative_id = {$this->tmp_creative_to_uid}.id
        WHERE date IN ($$)
        GROUP BY _subnet, user_id", $date);
}
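The snippet above runs the per-date queries serially; the "10 at a time" part happens elsewhere in the reporter's code. As a rough illustration of how that concurrency could look without the `dbpool` wrapper, here is a hedged sketch using mysqli's async API (`MYSQLI_ASYNC` + `mysqli_poll()`). The host/credentials, the simplified query (joins omitted), and the merge key are all placeholders, not the reporter's actual setup:

```php
<?php
// Sketch only: run one query per date, up to 10 concurrent connections,
// then merge the per-date aggregates on the client.
$dates = ['2022-01-01', '2022-01-02', '2022-01-03']; // placeholder date list
$sql = "SELECT _subnet, user_id, SUM(clicks) AS clicks, SUM(spending) AS spending
        FROM stats_creative
        WHERE date = '%s'
        GROUP BY _subnet, user_id";

$merged = [];
foreach (array_chunk($dates, 10) as $batch) {
    // Fire off one async query per date in this batch.
    $pending = [];
    foreach ($batch as $date) {
        $link = mysqli_connect('db-host', 'user', 'pass', 'stats'); // placeholders
        $link->query(sprintf($sql, $link->real_escape_string($date)), MYSQLI_ASYNC);
        $pending[] = $link;
    }
    // Poll until every connection in the batch has returned its result.
    while ($pending) {
        $read = $error = $reject = $pending;
        if (!mysqli_poll($read, $error, $reject, 1)) {
            continue;
        }
        foreach ($read as $link) {
            $result = $link->reap_async_query();
            while ($row = $result->fetch_assoc()) {
                // Merge: sum clicks/spending per (_subnet, user_id) pair.
                $key = $row['_subnet'] . '|' . $row['user_id'];
                $merged[$key]['clicks']   = ($merged[$key]['clicks']   ?? 0) + $row['clicks'];
                $merged[$key]['spending'] = ($merged[$key]['spending'] ?? 0) + $row['spending'];
            }
            $result->free();
            $link->close();
            $pending = array_values(array_filter($pending, fn($l) => $l !== $link));
        }
    }
}
```

With InnoDB the 10 in-flight queries genuinely overlap; the report is that with MyISAM/Aria the same pattern drives the load up ~8x while the wall-clock time stays near the serial case.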