Details
- Type: Bug
- Status: Confirmed
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 10.5, 10.3 (EOL), 10.4 (EOL)
Description
The rw_trx_hash is implemented on top of lf_hash. After running sysbench oltp_write_only with 1024 threads, a large number of trx_id entries are inserted into the lf_hash, so the hash grows its bucket count. After the sysbench run finishes and the cleanup is performed, the items are deleted from the lf_hash, but the bucket count does not shrink, so a large number of dummy nodes remain in it.
After that, running sysbench oltp_read_write with 256 threads shows a performance regression compared to running oltp_read_write directly, without the preceding 1024-thread oltp_write_only run, even though the data has been cleaned up.
In the test case we can see that the lf_hash size is 512 while it holds only 2 items, so an iteration has to walk 512 dummy nodes to reach those 2 items, and that causes the performance regression.
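To make the iteration cost concrete, here is a minimal single-threaded sketch. It is not the actual lf_hash code; the Node struct, the key values and the layout are illustrative assumptions based on the numbers above. It models a hash whose underlying list still contains 512 dummy bucket nodes but only 2 real items, so a full scan has to visit every node even though only 2 of them carry data:

// Illustrative model only, not MariaDB's lf_hash implementation: a list
// that mixes per-bucket "dummy" sentinel nodes with real data nodes.
#include <cstdio>
#include <list>

struct Node
{
  bool dummy;          // true for a bucket sentinel, false for a real item
  unsigned long key;   // trx_id for real items, bucket index for dummies
};

int main()
{
  const unsigned buckets = 512;  // bucket count left over from the 1024-thread run
  const unsigned items = 2;      // items remaining after cleanup

  // One dummy node per bucket stays in the list even after the items that
  // caused the growth have been deleted.
  std::list<Node> hash;
  for (unsigned b = 0; b < buckets; b++)
    hash.push_back({true, b});
  for (unsigned i = 0; i < items; i++)
    hash.push_back({false, 1000 + i});   // hypothetical trx_id values

  // An iteration over the hash has to visit every node, dummies included.
  unsigned visited = 0, found = 0;
  for (const Node &n : hash)
  {
    visited++;
    if (!n.dummy)
      found++;
  }
  printf("visited %u nodes to find %u items (%u dummy nodes skipped)\n",
         visited, found, visited - found);
  return 0;
}

With these numbers the scan visits 514 nodes to return 2 items, which is the overhead the regression points at.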
Could a new operation be added to reduce the bucket count and delete the dummy nodes?
Issue Links
- relates to
  - MDEV-21423 lock-free trx_sys get performance regression cause by lf_find and ut_delay (Stalled)
  - MDEV-28445 Secondary index locking invokes costly trx_sys.get_min_trx_id() (Closed)
  - MDEV-30357 Performance regression in locking reads from secondary indexes (Closed)
  - MDEV-33067 SCN(Sequence Commit Number) based MVCC (Open)