Details
Type: Task
Status: Open
Priority: Critical
Resolution: Unresolved
Description
Idea 1:
Sometimes engineers need the exact disk/metafiles to triage what is happening. Build some kind of service that, with the customer's permission, bundles all the relevant on-disk data so that developers can redeploy it and triage. This is essentially cloning an environment, or sending a full backup over the internet or to an FTP server; for large customers the transfer could be massive and time consuming. A rough sketch of the bundle-and-ship step follows.
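A minimal sketch of what the bundle-and-ship step could look like, assuming a /var/lib/columnstore data root and a plain FTP drop location (both are illustrative placeholders, not real endpoints):

#!/bin/bash
# Sketch only: bundle the on-disk ColumnStore data/metafiles and ship them.
# DATA_DIR and FTP_TARGET are assumptions for illustration.
set -euo pipefail

DATA_DIR="/var/lib/columnstore"                      # assumed data/metafile root
OUT="/tmp/cs_support_bundle_$(date +%Y%m%d_%H%M%S).tar.gz"
FTP_TARGET="ftp://support.example.com/incoming/"     # hypothetical drop location

# Bundle the data directory (with the customer's consent obtained beforehand).
tar -czf "$OUT" -C "$(dirname "$DATA_DIR")" "$(basename "$DATA_DIR")"

# Ship the bundle; for large customers this transfer is the slow/massive part.
curl -T "$OUT" "$FTP_TARGET"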
Idea 2:
Build a flag to save CSVs, imports, and other inputs not already captured by columnstore_reproductions:
https://github.com/mariadb-corporation/columnstore-tooling/blob/main/support/debug/columnstore_reproductions.sh
columnstore_reproductions is a script based on the debug.log: it replays queries that were logged there, and it bundles the schemas and data for the tables involved (or can easily populate fake data). Unfortunately it cannot replay inserts/cpimports, because that would require the original CSV used, and inserts end up in the binlogs rather than the debug.log. The goal is to find a way to either toggle something so ColumnStore keeps the data it imported for replay purposes, or some other means of replaying the write workload.
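As a rough illustration of the "keep what was imported" idea, one possible shape is a thin wrapper around cpimport that archives the CSV before the load runs, so columnstore_reproductions could pick it up later for replay. The wrapper, archive directory, and naming scheme below are assumptions, not an existing flag:

#!/bin/bash
# Hypothetical cpimport wrapper: keep a copy of the CSV used for the load
# so the insert/cpimport can be replayed later. Nothing here exists today.
set -euo pipefail

DB="$1"; TABLE="$2"; CSV="$3"
ARCHIVE_DIR="/var/log/mariadb/columnstore/import_archive"   # assumed location

mkdir -p "$ARCHIVE_DIR"
# Timestamped copy keyed by schema.table so a replay can locate the right file.
cp "$CSV" "$ARCHIVE_DIR/$(date +%Y%m%d_%H%M%S)_${DB}.${TABLE}.csv"

# Run the real import unchanged (cpimport dbName tableName loadFile).
cpimport "$DB" "$TABLE" "$CSV"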