Details
Type: Task
Status: Open
Priority: Trivial
Resolution: Unresolved
Description
1) Two query cache operations could be tuned by the user through timeout variables (a lock sketch follows after this list).
1.1) When inserting a query (insert/store_query function) we have a try_lock() using WAIT/TIMEOUT. We could add a variable called "query_cache_insert_timeout" holding a value in microseconds: if the value is <= 0 use a WAIT try_lock, if > 0 use a TIMEOUT (note that try_lock will need a new parameter for this).
1.2) When fetching a query we have a try_lock() using TIMEOUT, with the timeout hardcoded to 50ms inside the try_lock function. Here we could add a variable called "query_cache_fetch_timeout", also in microseconds: if the value is <= 0 keep the current 50ms TIMEOUT try_lock, if > 0 use the configured timeout (again, try_lock will need a new parameter).
2) Add new status variables to make it easier to tell when we hit mutex contention or some other cause of slow query cache response.
2.1) When the same query is executed by two or more processes we increase the refused counter. Add a status variable 'Qcache_concurrency_inserts' counting how many queries were refused because the same query was already being executed; this gives some idea of what to tune (ideally this value would be per query instead of global - check whether the query_cache_queries table of the qc_info information_schema plugin can report it). Another status variable, 'Qcache_max_concurrency_inserts', would report the largest number of inserts blocked at once (1000 parallel queries is a problem, 2 is not - just an example); as long as the query cache has not been reset it equals select max(concurrent_counter) from information_schema.query_cache_queries. A counter sketch follows after this list.
2.2) Add a variable and a status counter for slow flushes (inserting a query while the query cache is locked sending data to the client), for example:
variable "query_cache_slow_flush", value in microseconds
status "Qcache_slow_flush", incremented by 1 whenever a flush takes longer than the variable (a very large value here points to a serious concurrency problem between selects and inserts into the query cache)
2.3) Possibly other status variables (we must think about other good metrics), such as the maximum lock time while inserting (waiting for a lock to be released) and the maximum lock time while fetching (waiting for a lock to read the query cache). These values show how much time we spend waiting for the lock before executing the query: if we have many selects taking 10ms and a lock time of 50ms, we have a big problem; if we have many queries taking 160ms, a lock time of 50ms and a hit rate above 50%, it may still be a problem - just examples. The main idea is to instrument the query cache so that the USER can tune it with SQL_NO_CACHE and other system variables (and maybe turn it off when tuning is not possible). A timing sketch follows after this list.
2.4) A max wait lock time for query cache writes/reads.
2.5) A mean wait lock time for query cache writes/reads.
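A minimal sketch of item 1.1 (hypothetical, not MariaDB source: std::timed_mutex and qc_lock_for_insert are placeholders for the real query cache lock and insert path; only the proposed variable name comes from this request). It shows how a value <= 0 keeps today's unconditional WAIT while a positive value turns the lock into a bounded TIMEOUT; item 1.2 would look the same with query_cache_fetch_timeout falling back to the current 50ms.

```cpp
// Hypothetical sketch only: std::timed_mutex stands in for the query cache lock.
#include <chrono>
#include <mutex>

// Proposed variable, in microseconds; <= 0 keeps the current unconditional WAIT.
static long long query_cache_insert_timeout= 0;

static std::timed_mutex qc_lock;  // stand-in for the query cache lock

// Returns true when the lock was acquired and the insert may proceed.
static bool qc_lock_for_insert()
{
  if (query_cache_insert_timeout <= 0)
  {
    qc_lock.lock();               // WAIT mode: block until the cache is free
    return true;
  }
  // TIMEOUT mode: give up (and skip caching this result) after the
  // configured number of microseconds.
  return qc_lock.try_lock_for(
      std::chrono::microseconds(query_cache_insert_timeout));
}
```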
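A sketch for item 2.1 (illustrative names throughout; only the proposed status variable names and the per-query idea come from this request), showing how a per-query counter and the two global counters could be kept.

```cpp
// Hypothetical counter bookkeeping for Qcache_concurrency_inserts and
// Qcache_max_concurrency_inserts; the surrounding structures are made up.
#include <atomic>

struct Qcache_status
{
  std::atomic<unsigned long long> concurrency_inserts{0};      // Qcache_concurrency_inserts
  std::atomic<unsigned long long> max_concurrency_inserts{0};  // Qcache_max_concurrency_inserts
};

static Qcache_status qcache_status;

// Called when an insert is refused because another thread is already storing
// the result of the identical query.
static void qc_note_concurrent_insert(unsigned long long &per_query_counter)
{
  // per_query_counter would live in the cached query's block (protected by the
  // query cache lock here) and could be exposed through the qc_info plugin's
  // information_schema.query_cache_queries table.
  ++per_query_counter;

  // Global count of refused inserts.
  qcache_status.concurrency_inserts++;

  // High-water mark: the largest per-query value seen since the last reset,
  // i.e. the same number as max(concurrent_counter) in the qc_info table.
  unsigned long long cur= qcache_status.max_concurrency_inserts.load();
  while (per_query_counter > cur &&
         !qcache_status.max_concurrency_inserts.compare_exchange_weak(cur, per_query_counter))
    ;
}
```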
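A timing sketch for items 2.2-2.5 (again hypothetical names; the caller is assumed to measure the wait itself, e.g. with std::chrono::steady_clock around the lock acquisition or the flush). Exposing max_wait_us and total_wait_us/wait_samples as status variables would give the max (2.4) and mean (2.5) wait lock times.

```cpp
// Hypothetical bookkeeping for Qcache_slow_flush and the max/mean wait times.
#include <atomic>
#include <chrono>

static long long query_cache_slow_flush= 100000;    // proposed variable, microseconds

struct Qcache_wait_stats
{
  std::atomic<unsigned long long> slow_flush{0};     // Qcache_slow_flush (2.2)
  std::atomic<unsigned long long> max_wait_us{0};    // max wait lock time (2.4)
  std::atomic<unsigned long long> total_wait_us{0};  // numerator of the mean wait (2.5)
  std::atomic<unsigned long long> wait_samples{0};   // denominator of the mean wait (2.5)
};

static Qcache_wait_stats qc_wait_stats;

// Record one measured wait; was_flush is true when the wait happened while the
// cache was locked sending data to a client.
static void qc_record_wait(std::chrono::microseconds waited, bool was_flush)
{
  unsigned long long us= (unsigned long long) waited.count();

  // 2.2: count flushes slower than the proposed query_cache_slow_flush limit.
  if (was_flush && query_cache_slow_flush > 0 &&
      us > (unsigned long long) query_cache_slow_flush)
    qc_wait_stats.slow_flush++;

  // 2.4: keep the maximum wait seen so far.
  unsigned long long cur= qc_wait_stats.max_wait_us.load();
  while (us > cur && !qc_wait_stats.max_wait_us.compare_exchange_weak(cur, us))
    ;

  // 2.5: the mean wait is total_wait_us / wait_samples, computed when read.
  qc_wait_stats.total_wait_us+= us;
  qc_wait_stats.wait_samples++;
}
```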
Attachments
Issue Links
- is blocked by: MDEV-18764 Query cache lock contention (Open)
- links to
Activity
Field | Original Value | New Value |
---|---|---|
Description | (earlier text) | (expanded item 2 into status-variable items 2.1-2.3) |
Description | (earlier text) | (added the qc_info/information_schema reporting notes and the tuning examples in 2.3) |
Summary | Rewrite query cache to use timeout, add status about concurrency | Improve query cache to use timeout, add status about concurrency |
Remote Link | This issue links to "first part of mdev (Web Link)" [ 19802 ] |
Remote Link | This issue links to "concurrency refused counter (Web Link)" [ 19803 ] |
Remote Link | This issue links to "oracle like query cache (Web Link)" [ 19804 ] |
Description | (earlier text) | (added items 2.4 and 2.5, max and mean wait lock times) |
Fix Version/s | 10.2.0 [ 14601 ] |
Workflow | MariaDB v2 [ 55002 ] | MariaDB v3 [ 64168 ] |
Fix Version/s | 10.2 [ 14601 ] |
Link | This issue is blocked by MDEV-18764 [ MDEV-18764 ] |
Workflow | MariaDB v3 [ 64168 ] | MariaDB v4 [ 130262 ] |