Details
- Type: Bug
- Status: Closed
- Priority: Blocker
- Resolution: Done
- Affects Version/s: 5.6.4, 6.3.1
- Fix Version/s: None
- Component/s: None
- Sprint: 2021-14, 2021-15, 2021-16, 2021-17
Description
The client faced a slowdown in bulk insertion operations (and presumably in INSERT operations as well). The system has no obvious hardware bottlenecks, and although the storage is remote (FCoE), there is no evidence that FCoE contributes significantly to the overall timings of bulk ingestion operations.
The table has about 80 columns (mostly dictionary columns), and bulk ingestion of 82 records takes 18 seconds, 12 of which are spent in the preprocessing phase. The preprocessing phase involves BRM communication and the so-called HWM chunk backup (a backup of the most recent compressed chunk of the segment/dictionary files).
An strace of cpimport showed that the socket operations and mutexes involved in BRM communication contribute a lot to the overall timings.
The immediate workaround is to horizontally scale BRM, namely the Extent Map (EM), whilst the permanent solution is to introduce a lookup structure that speeds up EM operations (see the sketch below).
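The following is a minimal sketch, not ColumnStore code, of the kind of lookup structure the permanent fix refers to: an index keyed by OID placed on top of a flat Extent Map, so that per-column lookups during cpimport preprocessing no longer require a linear scan over every extent entry. The EMEntry fields and the ExtentMapIndex class are hypothetical simplifications for illustration only.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Simplified extent record (hypothetical; the real EMEntry has more fields).
struct EMEntry
{
    std::uint32_t oid;        // column/dictionary file object id
    std::uint32_t partition;  // partition number
    std::uint16_t segment;    // segment file number
    std::int64_t  startLBID;  // first logical block id of the extent
    std::int64_t  hwm;        // high water mark block within the extent
};

// Index over a flat Extent Map: maps an OID to the positions of its extents,
// so a lookup touches only that column's extents instead of scanning them all.
class ExtentMapIndex
{
public:
    // Remember where the extent at position 'pos' of 'em' lives; O(1) amortized.
    void insert(const std::vector<EMEntry>& em, std::size_t pos)
    {
        oidToExtents_[em[pos].oid].push_back(pos);
    }

    // Return candidate positions for an OID without a full Extent Map scan.
    const std::vector<std::size_t>& find(std::uint32_t oid) const
    {
        static const std::vector<std::size_t> kEmpty;
        auto it = oidToExtents_.find(oid);
        return it == oidToExtents_.end() ? kEmpty : it->second;
    }

private:
    std::unordered_map<std::uint32_t, std::vector<std::size_t>> oidToExtents_;
};
```

The linked sub-tasks (MCOL-5037, MCOL-5089, MCOL-5090, MCOL-5091) track the actual EM Index and RBTree-based Extent Map work in the ColumnStore code base.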
Attachments
Issue Links
- causes
  - MCOL-5050 Worker node crash after DDL. Possibly docker only (Closed)
  - MCOL-5057 EM index code miscalculates RAM needed to allocate its structures (Closed)
- includes
  - MCOL-5037 Up-merge EM Index into develop-6 (Closed)
  - MCOL-5089 Merge RBTree-based Extent Map with EM Index to remove scaleability slow-downs -develop5 (Closed)
  - MCOL-5090 Up-merge EMIndex + RBTree-based EM into develop-6 (Closed)
  - MCOL-5091 Up-merge RBTree-based EM into develop (Closed)
- is part of
  - MCOL-5313 Re-test new Exttent Map implementation (Closed)
- relates to
  - MCOL-4988 Table lock remained after DML failure due to cluster was in readonly mode (Closed)