[MCOL-1305] Bulk import of large CSV file failed because of SIGSEGV on Pentaho server. Created: 2018-03-26 Updated: 2023-10-26 Resolved: 2018-04-12 |
|
| Status: | Closed |
| Project: | MariaDB ColumnStore |
| Component/s: | None |
| Affects Version/s: | 1.1.4 |
| Fix Version/s: | 1.1.4 |
| Type: | Bug | Priority: | Major |
| Reporter: | Elena Kotsinova (Inactive) | Assignee: | Elena Kotsinova (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Environment: | CS 1.1.3 on CentOS 7 |
| Attachments: | hs_err_pid32515.log |
| Sprint: | 2018-07, 2018-08 |
| Description |
|
1. Start a bulk load of a large CSV file with the Pentaho bulk load adapter. The file contains 121 million rows (9 GB in size).
Result: the Pentaho server crashes with SIGSEGV; see the attached hs_err_pid32515.log. |
| Comments |
| Comment by Andrew Hutchings (Inactive) [ 2018-03-26 ] |
|
I think this is a duplicate of |
| Comment by Elena Kotsinova (Inactive) [ 2018-03-26 ] |
|
I'm not sure the two are related. |
| Comment by Andrew Hutchings (Inactive) [ 2018-03-26 ] |
|
Data volume can trigger it too. The easy trigger is basically going over a certain number of extents while having at least one PM remote from where the API is executed. This makes the HWM packet at commit time long enough to trigger compression on it, which in turn causes the crash or other weirdness. |
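The size-dependent code path described above can be illustrated with a small sketch. This is not ColumnStore code: the threshold value, message layout, and function names below are all hypothetical, and zlib stands in for whatever compression the real wire protocol uses. The point is only that a message is compressed solely when it grows past a threshold, so the compression path goes unexercised until an import spans enough extents.

```python
import zlib

# Hypothetical threshold: payloads longer than this get compressed before
# being sent to a remote PM. The real ColumnStore value and message layout
# differ; this only models the size-dependent branch described above.
COMPRESS_THRESHOLD = 512  # bytes (illustrative)

def build_hwm_packet(num_extents: int) -> bytes:
    """Build a fake commit-time HWM message: one fixed-size entry per extent
    (16 bytes each here, purely for illustration)."""
    return b"".join(i.to_bytes(8, "little") * 2 for i in range(num_extents))

def encode_for_wire(payload: bytes) -> tuple[bool, bytes]:
    """Compress only when the payload exceeds the threshold -- the branch
    that is only reached once the extent count grows large enough."""
    if len(payload) > COMPRESS_THRESHOLD:
        return True, zlib.compress(payload)
    return False, payload

# A small table stays on the uncompressed path...
small_compressed, _ = encode_for_wire(build_hwm_packet(4))
# ...while a large import crosses the threshold and takes the compression
# path, which is where the crash was triggered.
large_compressed, large_payload = encode_for_wire(build_hwm_packet(1000))
```

Under this model, a bug in the compression branch would never surface in small-table tests, which matches the report: only a 121-million-row import made the HWM message long enough to reach it.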
| Comment by David Thompson (Inactive) [ 2018-04-02 ] |
|
Can you retest with 1.1.4 mcsapi? |
| Comment by Elena Kotsinova (Inactive) [ 2018-04-12 ] |
|
Retested: the load of the 9 GB flat file (121,070,191 records, 10 columns) finished with no errors. |