Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
Description
Linux mutexes do not ensure that a mutex is handed over in FIFO order; instead they allow 'fast threads' to steal the mutex, which causes other threads to starve.
This is a problem for tpool: if there is a burst of aio_reads, it will create 'nproc*2' threads to handle the requests even if one thread could do the job (when the block is on a very fast device or on a memory device). If the file is on a hard disk, things may be even worse.
This can be seen by just starting the server on a data directory with a large 'ib_buffer_pool' file. In this case the startup code will, on my machine, create 72 threads to fill the buffer pool, which is not a good idea for most systems (especially desktops), as the memory used may not be released back to the operating system.
In addition, the current code does not honor the variables srv_n_read_io_threads or srv_n_write_io_threads.
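A minimal standalone sketch (my illustration, not code from the server) of the non-FIFO behaviour described above: four threads repeatedly lock and unlock a std::mutex, and on a typical Linux build the acquisition counts come out heavily skewed, because the thread that just released the mutex usually re-acquires it before any blocked waiter is woken.

// Illustrative only, not from the MariaDB tree: demonstrates non-FIFO
// mutex hand-off on Linux. A thread that releases the mutex and retries
// immediately tends to win against threads already blocked in lock().
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
  std::mutex m;
  std::atomic<bool> stop{false};
  std::vector<long> counts(4, 0);
  std::vector<std::thread> threads;

  for (int i= 0; i < 4; i++)
    threads.emplace_back([&, i] {
      while (!stop.load(std::memory_order_relaxed))
      {
        std::lock_guard<std::mutex> g(m);  // unlock + immediate retry each pass
        counts[i]++;
      }
    });

  std::this_thread::sleep_for(std::chrono::seconds(2));
  stop= true;
  for (auto &t : threads)
    t.join();

  // With a fair (FIFO) mutex the counts would be roughly equal; with the
  // default Linux mutex one or two threads typically dominate.
  for (int i= 0; i < 4; i++)
    printf("thread %d acquired the mutex %ld times\n", i, counts[i]);
}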
The suggested fix is to use a separate tpool just for async I/O and to limit the number of threads in it. Here is some suggested code to use as a base:
--- a/storage/innobase/srv/srv0srv.cc
+++ b/storage/innobase/srv/srv0srv.cc
@@ -580,12 +580,13 @@ static void thread_pool_thread_end()
 
 void srv_thread_pool_init()
 {
+  uint max_threads= srv_n_read_io_threads + srv_n_write_io_threads;
   DBUG_ASSERT(!srv_thread_pool);
 
 #if defined (_WIN32)
-  srv_thread_pool= tpool::create_thread_pool_win();
+  srv_thread_pool= tpool::create_thread_pool_win(1, max_threads);
 #else
-  srv_thread_pool= tpool::create_thread_pool_generic();
+  srv_thread_pool= tpool::create_thread_pool_generic(1, max_threads);
 #endif
   srv_thread_pool->set_thread_callbacks(thread_pool_thread_init,
                                         thread_pool_thread_end);
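For the 'separate tpool for just async io' part of the suggestion, a sketch could look roughly like the following; srv_aio_pool and srv_aio_pool_init are hypothetical names invented for illustration, and only the two tpool factory functions already shown in the diff above are assumed to exist:

// Hypothetical sketch of a dedicated async-I/O pool; srv_aio_pool and
// srv_aio_pool_init are illustrative names, not existing identifiers in
// the MariaDB tree.
static tpool::thread_pool *srv_aio_pool;

void srv_aio_pool_init()
{
  // Bound the pool by the user-visible I/O thread settings so a burst of
  // aio_reads cannot spawn nproc*2 threads.
  uint max_threads= srv_n_read_io_threads + srv_n_write_io_threads;
#if defined (_WIN32)
  srv_aio_pool= tpool::create_thread_pool_win(1, max_threads);
#else
  srv_aio_pool= tpool::create_thread_pool_generic(1, max_threads);
#endif
}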
Issue Links
- relates to MDEV-11378: AliSQL: [Perf] Issue#23 MERGE INNODB AIO REQUEST (Open)
- relates to MDEV-16264: Implement a common work queue for InnoDB background tasks (Closed)