[MCOL-5043] Reduce the number of pre-spawned ExeMgr threads Created: 2022-04-06  Updated: 2023-12-21

Status: Stalled
Project: MariaDB ColumnStore
Component/s: ExeMgr, PrimProc
Affects Version/s: 6.2.3
Fix Version/s: 23.10

Type: New Feature Priority: Major
Reporter: Roman Assignee: Roman
Resolution: Unresolved Votes: 0
Labels: rm_perf

Issue Links:
Blocks
blocks MCOL-4593 Multiple concurrent queries with aggr... Stalled
Relates
relates to MCOL-5044 Improve PP thread pool with a fair sc... Closed
relates to MCOL-5045 Computational resources and Workload ... Open
relates to MCOL-4691 Major Regression: Selects with aggreg... Closed
Sprint: 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12

 Description   

MCS spawns many idle thread-pool jobs for parallel query execution; for example, the second phase of a parallel two-step aggregation spawns 24 threads and parallel sorting spawns 16 threads by default. These pool threads sit idle until data starts to flow from the lower parts of the executed query. Every thread uses synchronization primitives such as mutexes and condition variables, so when an MCS cluster processes multiple queries concurrently, the synchronization overhead is enormous and can reach 25% of non-virtualized CPU horsepower.
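The cost described above can be illustrated with a minimal sketch (this is not ColumnStore code; `JobQueue`, `runPool`, and all other names are illustrative): every pre-spawned worker blocks on the same condition_variable, so each pushed row pays for a mutex acquisition and a wakeup even while most of the pool sits idle.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Simplified job queue in the style the issue describes: all pre-spawned
// workers block on one condition_variable until rows arrive, so every push
// pays for a mutex lock plus a notify even when the pool is mostly idle.
class JobQueue {
 public:
  void push(int row) {
    {
      std::lock_guard<std::mutex> lk(m_);
      q_.push(row);
    }
    cv_.notify_one();  // wakes one of the (mostly idle) workers
  }

  // Returns false once the queue is drained and shut down.
  bool pop(int& row) {
    std::unique_lock<std::mutex> lk(m_);
    cv_.wait(lk, [this] { return !q_.empty() || done_; });
    if (q_.empty()) return false;
    row = q_.front();
    q_.pop();
    return true;
  }

  void shutdown() {
    {
      std::lock_guard<std::mutex> lk(m_);
      done_ = true;
    }
    cv_.notify_all();
  }

 private:
  std::mutex m_;
  std::condition_variable cv_;
  std::queue<int> q_;
  bool done_ = false;
};

// Pre-spawns nWorkers threads (24 mirrors the aggregation default) and feeds
// them nRows rows; returns how many rows the pool processed.
int runPool(int nWorkers, int nRows) {
  JobQueue q;
  std::atomic<int> processed{0};
  std::vector<std::thread> workers;
  for (int i = 0; i < nWorkers; ++i)
    workers.emplace_back([&] {
      int row;
      while (q.pop(row)) processed.fetch_add(1);
    });
  for (int i = 0; i < nRows; ++i) q.push(i);
  q.shutdown();
  for (auto& t : workers) t.join();
  return processed.load();
}
```

With 24 workers and a single active query, 23 of the 24 threads spend their life inside `cv_.wait`; multiply that by concurrent queries and the futex/mutex traffic adds up, which is the overhead the description quantifies.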
The suggested solution is to reduce the number of threads spawned at start-up down to one; ExeMgr then adds more parallel threads only when they are needed.
Consider the above-mentioned second step of a parallel aggregation. It pre-spawns threads that read data from an input queue and put records (RowPointers, to be exact) into buckets (bucket number = hash % number of buckets). Each thread later populates a hash map, keyed by the calculated bucket number, with the RowPointers and their computed hashes. The suggestion is to make the code detect when the input queue has stayed filled beyond a certain limit for some period of time, and to spawn an additional processing thread (or threads) at that point.
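The proposed policy can be sketched as follows (again a hypothetical illustration, not ColumnStore code; `AdaptivePool`, `kHighWatermark`, `kMaxWorkers`, and the monitor cadence are assumptions): start with a single consumer, and have a monitor add another consumer only while the input queue stays above a high-water mark for consecutive checks.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>
#include <vector>

struct AdaptivePool {
  static constexpr std::size_t kHighWatermark = 16;  // queue-depth threshold (assumed)
  static constexpr int kMaxWorkers = 8;              // hard cap on growth (assumed)

  std::mutex m;
  std::condition_variable cv;
  std::queue<int> q;
  bool done = false;
  std::atomic<int> processed{0};
  std::vector<std::thread> workers;

  void spawnWorker() {
    workers.emplace_back([this] {
      while (true) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !q.empty() || done; });
        if (q.empty()) return;  // shut down and drained
        q.pop();
        lk.unlock();
        // Stand-in for the real per-row work (hashing, bucketing).
        std::this_thread::sleep_for(std::chrono::microseconds(200));
        processed.fetch_add(1);
      }
    });
  }

  std::size_t depth() {
    std::lock_guard<std::mutex> lk(m);
    return q.size();
  }

  void push(int row) {
    {
      std::lock_guard<std::mutex> lk(m);
      q.push(row);
    }
    cv.notify_one();
  }

  void shutdown() {
    {
      std::lock_guard<std::mutex> lk(m);
      done = true;
    }
    cv.notify_all();
  }
};

// Returns {rows processed, final worker count} after feeding nRows rows
// through a pool that starts with exactly one worker.
std::pair<int, int> runAdaptive(int nRows) {
  AdaptivePool pool;
  pool.spawnWorker();  // start with a single thread, per the proposal

  std::atomic<bool> stopMonitor{false};
  std::thread monitor([&] {
    int overLimit = 0;  // consecutive checks over the watermark
    while (!stopMonitor.load()) {
      overLimit = pool.depth() > AdaptivePool::kHighWatermark ? overLimit + 1 : 0;
      if (overLimit >= 2 &&
          static_cast<int>(pool.workers.size()) < AdaptivePool::kMaxWorkers) {
        pool.spawnWorker();  // queue stayed full: add one more consumer
        overLimit = 0;
      }
      std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
  });

  for (int i = 0; i < nRows; ++i) pool.push(i);

  // Let the monitor react while the workers drain the queue.
  while (pool.depth() > 0) std::this_thread::sleep_for(std::chrono::milliseconds(1));
  stopMonitor.store(true);
  monitor.join();

  pool.shutdown();
  for (auto& t : pool.workers) t.join();
  return {pool.processed.load(), static_cast<int>(pool.workers.size())};
}
```

The "over the watermark for N consecutive checks" condition is one simple way to implement "filled up to a certain limit for a period of time"; it avoids spawning on a momentary burst while still growing the pool quickly under sustained backlog.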


Generated at Thu Feb 08 02:54:55 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.