Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 10.3 (EOL)
- Component/s: None
- Environment: Linux
Description
When running 'CHECKSUM TABLE t' on a partitioned Spider table, the server fetches all rows from the data nodes sequentially and stores the result on the Spider head. On very large tables the mysqld process is killed due to OOM (without a trace in the error log).
One suggested workaround is to set spider_quick_mode = 3 before running such a statement, but we would prefer that the command be sent to each data node, executed in parallel, and the results then aggregated (XOR?) on the Spider head.
This appears to be a specific case of a more general issue: a large result set can cause an out-of-memory condition on the Spider head. This should never happen, so we would prefer that Spider either enforce an upper limit on how much result data it can cache on the Spider head, or use some other mechanism to prevent a valid query from crashing the server due to out of memory.
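The proposed behaviour above can be illustrated with a minimal sketch. This is not Spider code: the node list, row data, and checksum function are hypothetical stand-ins, and it assumes (as the report's "xor?" suggests) that per-partition checksums can be combined with XOR on the head node.

```python
# Illustrative sketch only: each "data node" computes its own partition
# checksum in parallel, and the head combines the small per-node results
# with XOR instead of pulling every row into head-node memory.
from concurrent.futures import ThreadPoolExecutor
from functools import reduce
import zlib

# Hypothetical per-node row sets; in reality each node would run
# CHECKSUM TABLE locally and return a single integer.
PARTITIONS = {
    "node1": [b"row1", b"row2"],
    "node2": [b"row3"],
    "node3": [b"row4", b"row5", b"row6"],
}

def node_checksum(rows):
    """Stand-in for a data node computing its local table checksum."""
    crc = 0
    for row in rows:
        crc = zlib.crc32(row, crc)
    return crc

def distributed_checksum(partitions):
    """Run per-node checksums in parallel, then XOR-combine on the head.

    Only one small integer per node is transferred, so head-node memory
    use is proportional to the number of nodes, not the number of rows.
    """
    with ThreadPoolExecutor() as pool:
        per_node = pool.map(node_checksum, partitions.values())
        return reduce(lambda a, b: a ^ b, per_node, 0)
```

The point of the sketch is the memory profile: the head never materializes row data, which is exactly what the current sequential fetch fails to guarantee.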
Attachments
Issue Links
- causes
  - MDEV-19842 Crash while creating statistics for Spider table (Closed)
- relates to
  - MDEV-16520 Out-Of-Memory running big aggregate query on Spider Engine (Closed)
  - MDEV-16880 Provide checksum aggregate functions, and partition-level checksums (Open)
Activity
Field | Original Value | New Value
---|---|---
Fix Version/s | 10.3 [ 22126 ] |
Assignee | Jacob Mathew [ jacob-mathew ] |
Attachment | heap_profile.pdf [ 45673 ] |
Link | This issue relates to MDEV-16880 [ MDEV-16880 ] |
Link | This issue relates to |
Assignee | Jacob Mathew [ jacob-mathew ] | Kentoku [ kentoku ]
Status | Open [ 1 ] | In Progress [ 3 ]
Assignee | Kentoku [ kentoku ] | Michael Widenius [ monty ]
Status | In Progress [ 3 ] | In Review [ 10002 ]
Priority | Major [ 3 ] | Critical [ 2 ]
Fix Version/s | 10.4 [ 22408 ] |
Assignee | Michael Widenius [ monty ] | Sergei Golubchik [ serg ]
Status | In Review [ 10002 ] | Stalled [ 10000 ]
Status | Stalled [ 10000 ] | In Progress [ 3 ]
Status | In Progress [ 3 ] | Stalled [ 10000 ]
Assignee | Sergei Golubchik [ serg ] | Kentoku [ kentoku ]
Assignee | Kentoku [ kentoku ] | Sergei Golubchik [ serg ]
Status | Stalled [ 10000 ] | In Review [ 10002 ]
Assignee | Sergei Golubchik [ serg ] | Kentoku [ kentoku ]
Status | In Review [ 10002 ] | Stalled [ 10000 ]
issue.field.resolutiondate | 2019-06-10 15:26:34.0 | 2019-06-10 15:26:34.834
Fix Version/s | 10.4.6 [ 23412 ] |
Fix Version/s | 10.3 [ 22126 ] |
Fix Version/s | 10.4 [ 22408 ] |
Resolution | Fixed [ 1 ] |
Status | Stalled [ 10000 ] | Closed [ 6 ]
Link | This issue causes |
Workflow | MariaDB v3 [ 87390 ] | MariaDB v4 [ 154405 ]
Zendesk Related Tickets | 177946 |
The attached heap_profile.pdf was created as follows:

export HEAPPROFILE=/tmp/mybin.hprof
export LD_PRELOAD="/usr/lib64/libtcmalloc_and_profiler.so.4"

Then mysqld was started, CHECKSUM TABLE big_table was run, and the PDF was generated with:

sudo pprof --base=/tmp/mybin.hprof.0554.heap --pdf /usr/local/mysql/bin/mysqld /tmp/mybin.hprof.0632.heap > ~/heap_profile.pdf