Details
- Type: New Feature
- Status: Closed
- Priority: Minor
- Resolution: Fixed
- Version(s): 1.1.2, 1.1.6
- Component(s): None
- Labels: None
- Environment: Multi-server ColumnStore instance
Description
Thank you for releasing the Bulk Insert API, which enables applications to stream data remotely to the ColumnStore nodes.
By their nature, data streaming applications run continuously. Redundant applications could increase streaming uptime: if one application fails, a second application is still running.
For such redundancy, I would like to run two applications on remote hosts that write to the same table. The applications would alternate holding the table write lock in the following sequence:
The first application:
- polls the database to check that the table is not locked;
- when the table is not locked, calls createBulkInsert() and locks the table;
- buffers data by calling writeRow();
- calls commit() successfully and releases the lock.
The second application:
- polls the database to check that the table is not locked;
- when the table is no longer locked, calls createBulkInsert() and locks the table;
- buffers data by calling writeRow();
- calls commit() successfully and releases the lock.
Then the first application takes the table lock again, and so on.
Could the Bulk Insert API be extended with a call to check if the table is currently locked?
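For illustration, here is a minimal C++ sketch of the alternation loop described above, written against the mcsapi Bulk Write SDK. The helpers table_is_locked() and next_row() are hypothetical placeholders, not part of mcsapi: table_is_locked() stands in for the requested lock-check call (or for whatever query the application currently uses to poll the database), and next_row() stands in for the application's streaming data source.
{code:cpp}
// Sketch only: the alternation loop from the description, using the mcsapi
// Bulk Write SDK. table_is_locked() and next_row() are hypothetical helpers.
#include <libmcsapi/mcsapi.h>
#include <chrono>
#include <string>
#include <thread>

bool table_is_locked(const std::string& db, const std::string& table); // hypothetical lock check
bool next_row(uint32_t& id, std::string& payload);                     // application-specific source

void stream_forever(const std::string& db, const std::string& table)
{
    mcsapi::ColumnStoreDriver driver; // reads the cluster configuration (Columnstore.xml)

    for (;;) {
        // Poll until the other application has released the table write lock.
        while (table_is_locked(db, table))
            std::this_thread::sleep_for(std::chrono::seconds(1));

        // createBulkInsert() takes the table lock.
        mcsapi::ColumnStoreBulkInsert* bulk = driver.createBulkInsert(db, table, 0, 0);
        try {
            uint32_t id;
            std::string payload;
            // Buffer a batch of rows.
            for (int i = 0; i < 100000 && next_row(id, payload); i++) {
                bulk->setColumn(0, id);
                bulk->setColumn(1, payload);
                bulk->writeRow();
            }
            bulk->commit();   // success: writes the batch and releases the lock
        } catch (mcsapi::ColumnStoreError& e) {
            bulk->rollback(); // failure: discards the batch and releases the lock
        }
        delete bulk;
    }
}
{code}
If the lock check were part of the Bulk Insert API itself, the polling step above would not need a separate connection or query against the database.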
Issue Links
- relates to:
  - MCOL-1079 mcsapi getTableLock not failing (Closed)
  - MCOL-1094 mcsapi should have view/clear table lock features (Closed)
  - MCOL-1108 After rollback() an active transaction is reported by mcsadmin shutdownSystem (Closed)
  - MCOL-1362 Add a export function that utilizes (sequential) write from Spark workers (Closed)
  - MCOL-1726 mcsapi stale transactions (Closed)