Attachments
Issue Links
relates to
MDEV-21256 Improve InnoDB random number generator performance
I see a cache coherency and interconnect traffic issue with the random number generator ut_rnd_ulint_counter() while doing backoff in the spin lock.
I was looking at the MySQL repository and see that there is a patch.
I also see MySQL backed it off in 5.20 for performance reasons, as the changes brought some scalability issues, with patch dedc8b3d567fbb92ce912f1559fe6a08b2857045.
In version 8.0 this has been committed again with the same patch dedc8b3d567fbb92ce912f1559fe6a08b2857045, with some new changes.
However, I am trying to port this to MariaDB, but there are a lot of code differences, so I cannot port it directly and provide a patch to test. I see the file my_thread_local.h is not available, and if I try to create it there are some merge clashes with my_pthread.h.
Please let me know your opinion; if required, I can share a patch soon with the above issues resolved.
Hi Sergey,
Sergey Vojtovich
added a comment - MySQL switched from global RNG state to per-thread state using thread-local-storage (TLS). But TLS performance as such is far from perfect. If my idea explained in MDEV-14374 is proved to be faster, we won't need RNG in spin-loop at all.
Sandeep sethia
added a comment - edited
If I am correct, we need to test in the enter function:
while (!try_lock()) {
    ut_delay(ut_rnd_interval(0, max_delay));
    if (++n_spins == max_spins) {
        os_thread_yield();
        max_spins += step;
    }
}
instead of:
do {
    while (lock_word != UNLOCKED) {}          /* spin until lock seems free */
} while (!CAS(lock_word, UNLOCKED, LOCKED));  /* actual atomic locking */
Also, there's potential false sharing between srv_spin_wait_delay/srv_n_spin_wait_rounds and statistics variables. I doubt they can contribute much to scalability, but it's still worth checking.
Sandeep sethia
added a comment - Hi,
Just to understand from your comment: you get the best numbers on
nornd_1: 160053 TPS
I am getting the same number for MariaDB and MySQL at the same thread level, for 20 tables with 1000000 records.
But if I apply the patch https://github.com/mysql/mysql-server/commit/32b184cad3ebd68281fb0076a24a634d2f330aa1
in MySQL, I see 10-13% increases in TPS. This is consistent across 64, 128 and 256 threads.
What is "vanilla" in your comment?
Also, right now the same random value is given across all the threads. Shouldn't it be based on thread contention rather than on some function value?
Sergey Vojtovich
added a comment - Vanilla is 10.3 as of 7cb3520c0632ad912b309489ad86a90f9fc9bd0b (10.3 branch as of today).
Yes, I also wonder why I'm getting lower TPS compared to what you see (~144 000 vs ~160 000). Probably some difference in benchmark/settings/hardware? At least I have 100 000 records versus 1 000 000 that you have. Same 20 tables, 64 threads.
It would be nice if you could try the nornd_1 change on your side. It's not just about fixing the shared variable, but also about using a lower value for ut_delay specifically in the mutex.
Regarding "right now the random value is given same across all the threads. It should be based on contention of threads and not some function value?":
I'm afraid I don't quite get this question. Could you elaborate? Do you mean all the threads get the same random value? Or that the delay should be based on mutex contention?
Sergey Vojtovich
added a comment - Just for the record, this article suggests that YIELD can be used in spinloop: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0489f/CIHEGBBF.html
I wonder if we can use YIELD on ARM as a replacement for Intel PAUSE.
Sandeep sethia
added a comment - I tried nornd_1, but I see a scalability issue in my numbers: around 74k TPS instead of the 160k I am getting with the defaults.
My sysbench command for run and prepare:
./bin/sysbench --test=share/sysbench/oltp_update_index.lua --max-requests=0 --max-time=50 --tables=20 --table_size=100000 --report-interval=10 --mysql-user=root --mysql-db=test --mysql_storage_engine=innodb --mysql-socket=/tmp/mysql.sock --num-threads=64 run
and my cnf file is
[mysqld]
innodb_flush_neighbors = 0
innodb_write_io_threads = 8
loose-innodb_adaptive_hash_index_parts = 16
loose-innodb_adaptive_hash_index_partitions = 16
#innodb_spin_wait_delay = 200
max_prepared_stmt_count=1048576
innodb_buffer_pool_size = 50g
max_connections = 5000
max_user_connections = 5000
innodb_use_native_aio = 1
innodb_log_file_size = 2G
innodb_flush_sync=0
innodb_io_capacity =5000
innodb_flush_log_at_trx_commit =0
#innodb_concurrency_tickets =5000
#innodb_thread_concurrency=128
max_heap_table_size = 256M
sort_buffer_size = 2M
read_buffer_size =2M
binlog_cache_size = 1M
thread_cache_size = 500
innodb_buffer_pool_instances=30
#enforce-gtid-consistency = 1
All the runs are happening in tmpfs.
Sergey Vojtovich
added a comment - marko, please review the fix for this bug: https://github.com/MariaDB/server/commit/62eeb1c760196207ac57338237a0d58df925847f
Marko Mäkelä
added a comment - Looks OK to me.
Side note: After this fix, ut_rnd_interval() is really only used in one place (if we ignore the uncovered fault injection fil_tablespace_iterate_failure ). It could make sense to move the function to the only file where it is being used.
Sandeep sethia
added a comment - Sergey,
added a comment - Sergey,
If the atomics on ARM/Intel get optimized, or for that matter get more degraded, I feel this would still need to be adjusted. I think what's really needed is to measure the distribution of the number of CPU cycles the mutexes actually need to be delayed, because "The argument gives the desired delay in microseconds on 100 MHz Pentium + Visual C++." is clearly not relevant any more.
Sergey Vojtovich
added a comment - ssethia, the argument is indeed not relevant. But I'm not completely sure what you're suggesting.
Scalability is at least a 3-dimensional thing: hardware, number of threads, and workload. Usually when we improve some benchmark result at one intersection of these three, we inevitably make another intersection worse. The only approach I'm aware of that gives a non-negative result at all intersections is removing cache line contention (mutex, rwlock, false sharing, variables) from the hot path.
In the scope of this task we removed the RNG variable contention, which is perfectly in line with what I said above.
If you have any specific suggestion how exactly we should tune mutex delay, I'm all for creating another JIRA task and discussing it separately.