Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 10.3 (EOL)
- Fix Version/s: None
- Environment: Linux
Description
When running 'CHECKSUM TABLE t' on a partitioned Spider table, Spider fetches all rows from the different data nodes sequentially and stores the result on the Spider head node. On very large tables the mysqld process is killed due to OOM (with no trace in the error log).
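For illustration, a minimal sketch of the kind of setup that triggers this. The server names backend1/backend2, the column definitions, and the remote table name are hypothetical; the sketch assumes matching local tables and CREATE SERVER definitions already exist on the data nodes.

-- Hypothetical partitioned Spider table spread over two data nodes
CREATE TABLE t (
  id INT NOT NULL PRIMARY KEY,
  val VARCHAR(100)
) ENGINE=SPIDER
  COMMENT='wrapper "mysql", table "t"'
  PARTITION BY HASH (id) (
    PARTITION pt1 COMMENT = 'srv "backend1"',
    PARTITION pt2 COMMENT = 'srv "backend2"'
  );

-- The statement described above: Spider pulls all rows from both data nodes
-- to the head node in order to compute the checksum.
CHECKSUM TABLE t;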
One suggested workaround is to set spider_quick_mode = 3 before running such a statement, but we would prefer that the command be sent to each data node, executed in parallel, and the results then aggregated (XOR?) on the Spider head.
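A minimal sketch of the suggested workaround, assuming the same hypothetical table t as above; spider_quick_mode is set at session scope so that other connections are unaffected:

-- Avoid buffering the whole remote result on the Spider head for this session
SET SESSION spider_quick_mode = 3;
CHECKSUM TABLE t;
-- Optionally revert afterwards
SET SESSION spider_quick_mode = DEFAULT;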
This appears to be a specific case of the more general issue that a large result set can cause an out-of-memory condition on the Spider head. This should never happen, so we would prefer that Spider enforce an upper limit on how much result data it caches on the head node, or provide some other way to prevent a valid query from crashing the server due to out of memory.
Issue Links
- causes
  - MDEV-19842 Crash while creating statistics for Spider table (Closed)
- relates to
  - MDEV-16520 Out-Of-Memory running big aggregate query on Spider Engine (Closed)
  - MDEV-16880 Provide checksum aggregate functions, and partition-level checksums (Open)