[MCOL-3962] document how to run cpimport in 1.4 Created: 2020-04-24 Updated: 2021-01-16 Resolved: 2021-01-16 |
|
| Status: | Closed |
| Project: | MariaDB ColumnStore |
| Component/s: | Documentation |
| Affects Version/s: | 1.4.3 |
| Fix Version/s: | N/A |
| Type: | Bug | Priority: | Minor |
| Reporter: | David Hill (Inactive) | Assignee: | Geoff Montee (Inactive) |
| Resolution: | Won't Fix | Votes: | 0 |
| Labels: | None | ||
| Description |
|
Customer asked how to run cpimport in 1.4. I located this document, but it didn't show any examples: https://mariadb.com/docs/reference/col1.4/cli/cpimport/ Please add examples to the document(s). This is what I did to test it. It should be documented that cpimport can be run using sudo: sudo /usr/bin/cpimport -j299 |
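As a sketch of the kind of examples the documentation page could include (the database, table, and file names below are hypothetical; -j is the job-ID option used in the test above):

```shell
# Load a delimited file into a ColumnStore table.
# Usage: cpimport <db> <table> [loadFile]
# cpimport typically needs root privileges, hence sudo.
sudo /usr/bin/cpimport test orders /tmp/orders.tbl

# Run a predefined bulk-load job by its job ID, as in the test above.
sudo /usr/bin/cpimport -j299
```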
| Comments |
| Comment by David Hill (Inactive) [ 2020-04-30 ] |
|
Request from customer for more documentation. Are all previous features of cpimport and colxml supported with S3? This will require a rewrite of every piece of code used to load the database. In Redshift, the column order of the input data must match the table definition. ColumnStore has never had this restriction, allowing us to ignore columns in the input file that do not appear in the table, to supply default values for missing columns in the data, and to automatically handle out-of-order columns compared to the table definition. Can it still read .tsv (tab-separated) files? https://mariadb.com/docs/release-notes/mariadb-columnstore-1-4-2-release-notes/#s3-storage-manager says cpimport is used to load data from S3. Where is it putting the data? In a different S3 bucket? It cannot hold all the data in memory; there is too much. |
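Regarding the tab-separated question: cpimport's -s option sets the field delimiter, so a .tsv load might look like the following sketch (database, table, and file names are hypothetical):

```shell
# Load a tab-separated file; -s sets the column delimiter
# (the default delimiter is the pipe character '|').
sudo /usr/bin/cpimport -s '\t' test orders /tmp/orders.tsv
```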
| Comment by Todd Stoffel (Inactive) [ 2021-01-16 ] |
|
Obsoleted by newer versions. |