lock_sys is one of three major InnoDB scalability bottlenecks. The scalability issues are especially visible under the sysbench OLTP update index/non-index benchmarks.
It is not yet clear how exactly it should be optimised.
Issue Links

blocks:
MDEV-21452 Use condition variables and normal mutexes instead of InnoDB os_event and mutex (Closed)

causes:
MDEV-24861 Assertion `trx->rsegs.m_redo.rseg' failed in innodb_prepare_commit_versioned (Closed)
MDEV-35708 lock_rec_get_prev() returns only the first record lock (Closed)

includes:
MDEV-24731 Excessive mutex contention in DeadlockChecker::check_and_resolve() (Closed)

is blocked by:
MDEV-24671 Assertion failure in lock_wait_table_reserve_slot() (Closed)

relates to:
MDEV-11392 AliSQL: [perf] Issue#31 OPTIMIZE CHECK/GRANT OF INNODB TABLE LOCK
As noted in MDEV-20483, the apparent reason for table_locks to exist is that it is a cache of trx_locks that can only be accessed by the thread that is executing the transaction. This allows callers of lock_table_has() to avoid accessing trx_t::mutex. Maybe we should simply omit table locks from trx_locks, and keep them in table_locks only?
Maybe we could store table_locks in a lock-free hash table, so that they can be traversed by diagnostic printouts.
Similarly, maybe we can extend MDEV-16406 (Refactor the InnoDB record locks) by using a lock-free hash table that maps (trx_id,space_id,page_number,heap_number), or a subset of it such as (space_id,page_number), to a bitmap.
— Marko Mäkelä
We partitioned the record lock hash, which works fine. The difficult part is how to detect deadlocks with multiple partitions. A lock-free lock system is a good idea; AFAIK, AWS Aurora did a similar job to make it lock-free. There is a paper describing this: https://ts.data61.csiro.au/publications/nicta_full_text/6465.pdf
— zhai weixiang
MySQL 8.0.21 implemented a sharded lock_sys mutex.
Edit: they seem to have replaced the mutex with a partitioned rw-latch, using page_id_t as the key. For deadlock detection, they appear to employ a global rw-latch.
I think that it might be helpful to first implement MDEV-16232 to avoid the creation of explicit record locks during non-conflicting DELETE and UPDATE. Table IX lock creation would still cause contention on lock_sys.

— Marko Mäkelä (edited)
In MDEV-21452 I was trying to use trx_t::mutex instead of lock_sys.mutex with a condition variable, to resume lock_wait_suspend_thread(). That did not quite work: we would get some race condition, most likely due to the wait ending prematurely. The change was mostly adding trx->mutex acquisition around lock_reset_lock_and_trx_wait() calls.
The code around suspending and resuming threads is rather convoluted. In particular, que_thr_stop_for_mysql() and lock_wait_suspend_thread() are separated from each other, and the trx->mutex is being acquired and released multiple times while a lock wait is being registered. Also, there are multiple state fields related to lock waits, both in que_thr_t and trx_t.
I think that our performance would improve somewhat if we cleaned this up.
— Marko Mäkelä
I reviewed the MySQL 8.0.21 WL#10314 again.
It replaces lock_sys.mutex with an rw-lock. Many operations, such as deadlock detection, will be protected by the exclusive global lock.
To improve scalability, it introduces 256+256 mutexes, indexed by page_id_t or dict_table_t::id. Each shard is protected by a combination of the shared global rw-lock and the mutex.
Because implementing MDEV-16232 would require much more effort than this, I think that we must consider this approach for MariaDB.
One complication is that lock_wait_suspend_thread() in MDEV-21452 will use a condition variable in combination with lock_sys.mutex, which would be replaced by the rw-latch above. It might be easiest to combine the condition variable with lock_sys.wait_mutex instead of lock_sys.mutex. Only on Microsoft Windows could the native SRWLOCK be combined with CONDITION_VARIABLE.

— Marko Mäkelä
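The partitioning described above hinges on mapping a page identifier to one of a fixed number of shards. The sketch below is illustrative only: N_SHARDS, the page_id_t layout, and the mix function are assumptions, not the actual WL#10314 hashing.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical shard count, matching the "256 mutexes" mentioned above.
constexpr unsigned N_SHARDS = 256;

// Simplified stand-in for InnoDB's page_id_t.
struct page_id_t {
  uint32_t space;
  uint32_t page_no;
};

inline unsigned lock_shard(const page_id_t& id) {
  // Mix the tablespace id and page number so that consecutive pages
  // of one tablespace spread across shards, then mask to a shard index.
  uint64_t h = (uint64_t{id.space} << 32) | id.page_no;
  h ^= h >> 33;
  h *= 0xff51afd7ed558ccdULL;  // 64-bit finalizer step (from MurmurHash3)
  h ^= h >> 33;
  return unsigned(h & (N_SHARDS - 1));
}
```

Any operation on a page's locks would take the shared global latch plus the mutex of lock_shard(page_id); cross-shard operations such as deadlock detection fall back to the exclusive global latch.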
I replaced lock_wait_suspend_thread() with a simple lock_wait() that uses pthread_cond_timedwait() to wait for the lock to be granted. If that call fails with an error, we know that a timeout has occurred. The wait may be interrupted by ha_kill_query() or deadlock detection, which will simply invoke pthread_cond_signal().
There is no need for a separate lock_wait_timeout_task, which would wake up once per second. Also, the relative latching order of lock_sys.wait_mutex and lock_sys.mutex (which will be replaced with lock_sys.latch) will be swapped. Hopefully this necessary refactoring will provide some additional performance benefit.
— Marko Mäkelä
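The wait-and-timeout pattern described above can be sketched as follows. This is not the actual lock_wait() code: toy_lock_wait, lock_granted, and the mutex/condvar names are illustrative, and a real implementation would hold lock_sys.wait_mutex and check kill/deadlock flags too. The sketch only shows how pthread_cond_timedwait() reports a timeout via ETIMEDOUT while a signal (from lock grant, kill, or deadlock resolution) ends the wait early.

```cpp
#include <errno.h>
#include <pthread.h>
#include <time.h>

// Illustrative shared state; in InnoDB this would live in trx->lock
// and be protected by lock_sys.wait_mutex.
static pthread_mutex_t wait_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wait_cond = PTHREAD_COND_INITIALIZER;
static bool lock_granted = false;

// Wait until the lock is granted or timeout_ms elapses.
// Returns true if the lock was granted, false on timeout.
bool toy_lock_wait(unsigned timeout_ms) {
  // pthread_cond_timedwait() takes an absolute CLOCK_REALTIME deadline.
  struct timespec abstime;
  clock_gettime(CLOCK_REALTIME, &abstime);
  abstime.tv_sec += timeout_ms / 1000;
  abstime.tv_nsec += long(timeout_ms % 1000) * 1000000L;
  if (abstime.tv_nsec >= 1000000000L) {
    abstime.tv_sec++;
    abstime.tv_nsec -= 1000000000L;
  }

  pthread_mutex_lock(&wait_mutex);
  int err = 0;
  // Loop to tolerate spurious wakeups; ETIMEDOUT ends the wait.
  while (!lock_granted && err != ETIMEDOUT)
    err = pthread_cond_timedwait(&wait_cond, &wait_mutex, &abstime);
  bool granted = lock_granted;
  pthread_mutex_unlock(&wait_mutex);
  return granted;
}
```

A lock grant, ha_kill_query(), or the deadlock detector would set the relevant state under the mutex and call pthread_cond_signal(&wait_cond) to wake the waiter.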
Also srv_slot_t can be removed and the locality of reference improved by storing trx->lock.wait_lock and trx->lock.cond in adjacent addresses.
— Marko Mäkelä
zhaiwx1987, I adapted the MDEV-11392 idea from MySQL Bug #72948, but I introduced a single counter dict_table_t::n_lock_x_or_s. There is actually quite a bit of room for improvement in lock_sys, in addition to what was done in MySQL 8.0.21 WL#10314.

— Marko Mäkelä (edited)
The lock_wait() refactoring was causing some assertion failures in the start/stop que_thr_t bookkeeping. I think that it is simplest to remove that bookkeeping along with some unnecessary data members and enum values. Edit: This was done in MDEV-24671. As an added bonus, innodb_lock_wait_timeout is enforced more promptly (no extra 1-second delay).
It turns out that the partitioned lock_sys.mutex will not work efficiently with the old DeadlockChecker. It must be refactored, similar to what was done in Oracle Bug #29882690 in MySQL 8.0.18.
— Marko Mäkelä (edited)
As a minimal change, I moved the DeadlockChecker::search() invocation to lock_wait(). A separate deadlock checker thread or task might still be useful. For that, I do not think that there is a need to introduce any blocking_trx data member. In our code, it should be safe to follow the chain of trx->lock.wait_lock->trx while holding lock_sys.wait_mutex and possibly also trx->mutex.
— Marko Mäkelä
We replaced lock_sys.mutex with a lock_sys.latch (MDEV-24167) that is 4 or 8 bytes on Linux, Microsoft Windows or OpenBSD. On other systems, a native rw-lock or a mutex and two condition variables will be used.
The entire world of transactional locks can be stopped by acquiring lock_sys.latch in exclusive mode.
Scalability is achieved by making most users use a combination of a shared lock_sys.latch and a lock-specific dict_table_t::lock_mutex or lock_sys_t::hash_latch that is embedded in each cache line of the lock_sys.rec_hash, lock_sys.prdt_hash, or lock_sys.prdt_page_hash. The lock_sys_t::hash_latch is always 4 or 8 bytes. On other systems than Linux, OpenBSD, and Microsoft Windows, the lock_sys_t::hash_latch::release() will always acquire a mutex and signal a condition variable. This is a known scalability bottleneck and could be improved further on such systems, by splitting the mutex and condition variable. (If such systems supported a lightweight mutex that is at most sizeof(void*), then we could happily use that.)
Until MDEV-24738 has been fixed, the deadlock detector will remain a significant bottleneck, because each lock_wait() would acquire lock_sys.latch in exclusive mode. This bottleneck can be avoided by setting innodb_deadlock_detect=OFF.
— Marko Mäkelä
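The latching scheme described above combines a shared global latch with a per-shard mutex for ordinary work, and an exclusive global latch for stop-the-world operations. A minimal sketch, with illustrative names (toy_lock_sys, run_sharded, run_exclusive are not the real lock_sys API, and std::shared_mutex stands in for the custom 4/8-byte latch):

```cpp
#include <array>
#include <cassert>
#include <mutex>
#include <shared_mutex>

class toy_lock_sys {
  std::shared_mutex latch;            // analogue of the global lock_sys.latch
  std::array<std::mutex, 256> shard;  // analogue of the per-cell hash latches
public:
  // Ordinary operation on one shard: shared global latch + shard mutex,
  // so operations on different shards proceed in parallel.
  template <class F> void run_sharded(unsigned s, F&& f) {
    std::shared_lock<std::shared_mutex> g(latch);
    std::lock_guard<std::mutex> h(shard[s & 255]);
    f();
  }
  // "Stop the world" operation (e.g. deadlock detection before
  // MDEV-24738): exclusive global latch blocks all sharded users.
  template <class F> void run_exclusive(F&& f) {
    std::unique_lock<std::shared_mutex> g(latch);
    f();
  }
};
```

This also shows why the deadlock detector is the remaining bottleneck: every run_exclusive() serializes against all concurrent run_sharded() callers.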
People: Marko Mäkelä, Sergey Vojtovich