[MCOL-1160] Bulk write API doesn't start new block for dictionary Created: 2018-01-15 Updated: 2023-10-26 Resolved: 2018-01-31
|
| Status: | Closed |
| Project: | MariaDB ColumnStore |
| Component/s: | None |
| Affects Version/s: | None |
| Fix Version/s: | 1.1.3 |
| Type: | Bug | Priority: | Major |
| Reporter: | Andrew Hutchings (Inactive) | Assignee: | Daniel Lee (Inactive) |
| Resolution: | Fixed | Votes: | 0 |
| Labels: | None |
| Issue Links: |
|
| Sprint: | 2018-02, 2018-03 |
| Description |
|
The bulk write API keeps writing to the same block for dictionary entries. PrimProc uses a block-level cache that isn't flushed on a bulk write, so newly appended entries aren't seen by PrimProc (they show as CpNoTf) until a restart. test.t1 is a table with an int and a varchar(64) column. Test case:
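The ticket's actual test case isn't reproduced here. A minimal sketch of the scenario, assuming the mcsapi C++ bulk write API (ColumnStoreDriver, ColumnStoreBulkInsert) and a pre-existing table test.t1 (int, varchar(64)), might look like:

```cpp
// Hedged reconstruction of the scenario, not the ticket's original test case.
// Assumes libmcsapi and a running ColumnStore instance with table test.t1.
#include <libmcsapi/mcsapi.h>
#include <memory>

int main() {
    std::unique_ptr<mcsapi::ColumnStoreDriver> driver(new mcsapi::ColumnStoreDriver());

    // First bulk write: the dictionary block is created and ends up in
    // PrimProc's block-level cache once queried.
    std::unique_ptr<mcsapi::ColumnStoreBulkInsert> bulk(
        driver->createBulkInsert("test", "t1", 0, 0));
    bulk->setColumn(0, 1)->setColumn(1, "first batch")->writeRow();
    bulk->commit();

    // Second bulk write appends to the SAME dictionary block; without a
    // PrimProc flush, queries keep serving the stale cached block, so the
    // new varchar values don't appear until PrimProc is restarted.
    bulk.reset(driver->createBulkInsert("test", "t1", 0, 0));
    bulk->setColumn(0, 2)->setColumn(1, "second batch")->writeRow();
    bulk->commit();
    return 0;
}
```

After the second commit, selecting from test.t1 before the fix would miss (or mis-read) the second batch of dictionary values.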
|
| Comments |
| Comment by Andrew Hutchings (Inactive) [ 2018-01-15 ] | |
|
Workaround:
This will flush the PrimProc cache, and the values will then show correctly. | |
| Comment by Andrew Hutchings (Inactive) [ 2018-01-15 ] | |
|
cpimport avoids this by sending PrimProc a flush for the dictionary blocks: TableInfo::setParseComplete() calls cacheutils::flushPrimProcAllverBlocks(). | |
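The stale-read mechanism and why the flush fixes it can be shown with a toy block cache; this is an illustrative model only, not ColumnStore's actual PrimProc or cacheutils code:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy model of a block-level read cache like PrimProc's: once a block is
// loaded, reads are served from the cache until it is explicitly flushed.
struct BlockCache {
    std::map<int, std::string> storage;  // authoritative block contents
    std::map<int, std::string> cache;    // per-block read cache

    std::string read(int lbid) {
        auto it = cache.find(lbid);
        if (it == cache.end())
            it = cache.emplace(lbid, storage[lbid]).first;  // cold read
        return it->second;                                  // cached read
    }
    void append(int lbid, const std::string& data) {
        storage[lbid] += data;  // writer extends the block in place...
        // ...but does NOT invalidate the reader's cache (the MCOL-1160 bug).
    }
    void flush(const std::vector<int>& lbids) {
        // The cpimport-style fix: drop the cached copies so the next read
        // re-loads the extended block.
        for (int lbid : lbids) cache.erase(lbid);
    }
};
```

A reader that cached the block before the append keeps seeing the old contents until flush() is called for that block, which mirrors why appending to an existing dictionary block without a PrimProc flush left the new values invisible until restart.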
| Comment by Andrew Hutchings (Inactive) [ 2018-01-23 ] | |
|
Pull requests are open for the API and the engine. Both are needed for the test to pass (although mixed API/engine versions will still work). For QA: you need the patch in both the engine and the API. There is a test for this in the API's built-in regression suite. | |
| Comment by Daniel Lee (Inactive) [ 2018-01-31 ] | |
|
Build verified: GitHub source 1.1.3-1. Test mcol1160 is test #16 in the test suite. |