Given a data set of 800 000 000 records with a couple of Dictionary columns containing many equal-length strings, it took 4 167 seconds to ingest the data set into CS.
After the patch it takes only 467 seconds.
There were two main sources of latency:
- Dctnry::getTokenFromArray represented the de-dup buffer as a flat array, so every lookup scanned it linearly and called memcpy for every equal-sized string
- COND_WAIT_SECONDS defaulted to 3 seconds