On Linux, a mutex is not guaranteed to be handed over in FIFO order; instead, 'fast threads' are allowed to steal the mutex, which can cause other threads to starve.
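To illustrate the contrast, here is a minimal sketch (purely illustrative, not MariaDB code) of a "ticket lock", which does grant the lock in strict arrival order; the ticket_mutex name and the spin-wait design are assumptions for the example:

```cpp
// Sketch of a FIFO ticket lock: each thread takes a ticket and waits
// until it is served, so the lock is granted strictly in arrival order.
// A plain pthread/std::mutex gives no such guarantee: a running thread
// can re-acquire the lock before a woken waiter is scheduled.
#include <atomic>
#include <thread>

class ticket_mutex {
  std::atomic<unsigned> next_ticket{0};   // ticket dispenser
  std::atomic<unsigned> now_serving{0};   // ticket currently allowed in
public:
  void lock() {
    unsigned my= next_ticket.fetch_add(1, std::memory_order_relaxed);
    while (now_serving.load(std::memory_order_acquire) != my)
      std::this_thread::yield();          // spin until it is our turn
  }
  void unlock() {
    now_serving.fetch_add(1, std::memory_order_release);
  }
};
```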
This is a problem for tpool: if there is a burst of aio_reads, it will create 'nproc*2' threads to handle the requests even when a single thread could do the job (for example when the block is on a very fast device or an in-memory device). If the file is on a hard disk, things may be even worse.
This can be seen by simply starting the server on a data directory with a large 'ib_buffer_pool' file. In this case the startup code will, on my machine, create 72 threads to fill the buffer pool. This is not a good idea for most systems (especially desktops), as the memory used by those threads may not be released back to the operating system.
In addition, the current code does not honor the variables srv_n_read_io_threads or srv_n_write_io_threads.
The suggested fix is to use a separate tpool dedicated to async I/O, with a limited number of threads. Here is some suggested code to use as a base:
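Below is a minimal sketch of such a dedicated fixed-size I/O pool. The aio_pool class and its submit() method are illustrative names, not the actual tpool API; the only identifiers taken from the report are srv_n_read_io_threads and srv_n_write_io_threads, which here bound the thread count instead of letting it grow to nproc*2:

```cpp
// Sketch: a fixed-size pool dedicated to async I/O requests.
// A burst of aio_reads queues up here instead of spawning new threads.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class aio_pool {
  std::vector<std::thread> workers;
  std::queue<std::function<void()>> tasks;
  std::mutex m;
  std::condition_variable cv;
  bool shutdown= false;
public:
  explicit aio_pool(size_t n_threads) {
    for (size_t i= 0; i < n_threads; i++)
      workers.emplace_back([this] {
        for (;;) {
          std::function<void()> task;
          {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this] { return shutdown || !tasks.empty(); });
            if (shutdown && tasks.empty())
              return;                       // drained; worker exits
            task= std::move(tasks.front());
            tasks.pop();
          }
          task();                           // run the I/O completion work
        }
      });
  }
  void submit(std::function<void()> task) {
    {
      std::lock_guard<std::mutex> lk(m);
      tasks.push(std::move(task));
    }
    cv.notify_one();                        // wake one waiting worker
  }
  ~aio_pool() {
    {
      std::lock_guard<std::mutex> lk(m);
      shutdown= true;
    }
    cv.notify_all();
    for (auto &t : workers)
      t.join();
  }
};

// The pool would be created once at startup with a bounded size, e.g.:
//   aio_pool io_pool(srv_n_read_io_threads + srv_n_write_io_threads);
// so the number of I/O threads honors the server variables.
```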