Details
Type: Task
Status: Stalled
Priority: Major
Resolution: Unresolved
Description
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT:
1. Data structures
1.1 A Global Lock Tree Manager object
1.2 A separate Lock Tree for each table
1.3 Each transaction keeps track of the ranges it holds locks on
2. Functions
2.1 Initializing the Lock Manager
2.2 Create Lock Tree for a table
2.3 Getting a lock
2.4 Releasing a lock.
2.5 Releasing all of the transaction's locks
1. Data structures
1.1 A Global Lock Tree Manager object
There needs to be a global locktree_manager.
See PerconaFT/src/ydb-internal.h:

    struct __toku_db_env_internal {
        ...
        toku::locktree_manager ltm;
        ...
    };
1.2 A separate Lock Tree for each table
TokuDB uses a separate Lock Tree for each table; it is stored in db->i->lt.
1.3 Each transaction keeps track of the ranges it holds locks on
Each transaction has a list of ranges that it is holding locks on. It is referred to like so:

    db_txn_struct_i(txn)->lt_map

and is stored in this structure, together with a mutex to protect it:

    struct __toku_db_txn_internal {
        // maps a locktree to a buffer of key ranges that are locked.
        // it is protected by the txn_mutex, so hot indexing and a client
        // thread can concurrently operate on this txn.
        toku::omt<txn_lt_key_ranges> lt_map;
        ...
        toku_mutex_t txn_mutex;
        ...
    };
The mutex is there because the list may be modified by the lock escalation process (which may be invoked from a different thread).
(See toku_txn_destroy for how to free this.)
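An earlier revision of these notes also recorded how this map is initialized; together with the toku_txn_destroy hint above, a minimal sketch of the lifecycle looks like this (the destroy calls are an assumption based on the usual toku::omt and toku_mutex lifecycle, not verified against the source):

    // At txn creation (from an earlier revision of these notes;
    // whether create_no_array() or create() is right was an open question):
    db_txn_struct_i(result)->lt_map.create_no_array();

    // In toku_txn_destroy() (assumed cleanup, verify against ydb_txn.cc):
    db_txn_struct_i(txn)->lt_map.destroy();
    toku_mutex_destroy(&db_txn_struct_i(txn)->txn_mutex);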
2. Functions
Most of the functions mentioned here are from storage/tokudb/PerconaFT/src/ (ydb_txn.cc, ydb_row_lock.cc); this is TokuDB's layer above the Lock Tree.
2.1 Initializing the Lock Manager
TODO
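While this is still a TODO, a hedged sketch can be given: TokuDB's environment-open code in ydb.cc creates the manager with a per-locktree create callback, a destroy callback, and a lock-escalation callback. The call below assumes locktree_manager::create(create_cb, destroy_cb, escalate_cb, extra), with callback names as they appear in TokuDB; verify both against the source before relying on them.

    // Sketch: initializing the global lock tree manager at env open.
    env->i->ltm.create(toku_db_lt_on_create_callback,
                       toku_db_lt_on_destroy_callback,
                       toku_db_txn_escalate_callback,
                       env);

    // ...and the assumed counterpart at environment close:
    env->i->ltm.destroy();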
2.2 Create Lock Tree for a table
TokuDB does it when it opens a table's table_share. It is done like so:
    db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id,
                                         toku_ft_get_comparator(db->i->ft_handle),
                                         &on_create_extra);

Then, one needs to release it:

    db->dbenv->i->ltm.release_lt(db->i->lt);
After the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty by then).
(TODO: this is easy to arrange if Toku locks are invoked from the MyRocks level. But if they are invoked from inside RocksDB, this is harder, as RocksDB doesn't have any concept of tables or indexes. To start with, we can pretend all keys are in one table.)
2.3 Getting a lock
The following function shows how it is done:

    // Get a range lock.
    // Return when the range lock is acquired or the default lock tree timeout has expired.
    int toku_db_get_range_lock(DB *db, DB_TXN *txn,
                               const DBT *left_key, const DBT *right_key,
                               toku::lock_request::type lock_type) {
It is also possible to start an asynchronous lock request and then wait for it (see toku_db_start_range_lock, toku_db_wait_range_lock). We don't seem to have a use for this.
Point locks are obtained by passing the same key as left_key and right_key.
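For illustration, a point write lock on some key would then look roughly like this (the key variable and the WRITE request type are assumptions made for the sketch):

    // A point lock is just a range lock whose endpoints are the same key.
    int r = toku_db_get_range_lock(db, txn, &key, &key,
                                   toku::lock_request::type::WRITE);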
2.4 Releasing a lock.
TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted).
LockTree has a function to release locks from a specified range:
    locktree::release_locks(TXNID txnid, const range_buffer *ranges)
Besides calling that, one will need to (see the sketch after this list):
- Wake up all waiting lock requests; release_locks doesn't wake them up. There is the toku::lock_request::retry_all_lock_requests call, which retries all pending requests (which doesn't seem efficient... but maybe it is ok?).
- Remove the released locks from the transaction's list of held locks (which is in db_txn_struct_i(txn)->lt_map). This is not essential, though, because that list is only used for releasing the locks when the transaction is finished.
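A hedged sketch of the whole release sequence, using the range_buffer type mentioned above (the create/append/destroy calls are assumptions about its interface, and retry_all_lock_requests may take extra callback arguments; check locktree/locktree.h):

    // Release the locks txnid holds on [left_key, right_key],
    // then retry pending lock requests so waiters get woken up.
    toku::range_buffer ranges;
    ranges.create();
    ranges.append(&left_key, &right_key);
    lt->release_locks(txnid, &ranges);
    ranges.destroy();
    toku::lock_request::retry_all_lock_requests(lt);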
2.5 Releasing all of the transaction's locks
See PerconaFT/src/ydb_txn.cc:
    static void toku_txn_release_locks(DB_TXN *txn) {
        // Prevent access to the locktree map while releasing.
        // It is possible for lock escalation to attempt to
        // modify this data structure while the txn commits.
        toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex);

        size_t num_ranges = db_txn_struct_i(txn)->lt_map.size();
        for (size_t i = 0; i < num_ranges; i++) {
            txn_lt_key_ranges ranges;
            int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges);
            invariant_zero(r);
            toku_db_release_lt_key_ranges(txn, &ranges);
        }

        toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex);
    }
Issue Links
- includes
  - MDEV-17873 MyRocks-Gap-Lock: lock wait doesn't set correct STATE (Open)
  - MDEV-17874 MyRocks-Gap-Lock: Lock memory overhead (Closed)
  - MDEV-17887 MyRocks-Gap-Lock: information about current lock waits (Closed)
  - MDEV-18104 MyRocks-Gap-Lock: range locking bounds are incorrect for multi-part keys (Closed)
  - MDEV-18227 MyRocks-Gap-Lock: Lock escalation and updates to transaction's list of owned locks (Closed)
  - MDEV-19451 MyRocks: Range Locking: shared point lock support (Open)
  - MDEV-19986 MyRocks: Range Locking: SeekForUpdate support (Open)
  - MDEV-21314 Range Locking: individual rows are locked when scanning PK (Open)
- relates to
  - MDEV-18856 Benchmark range locking (Closed)
  - MDEV-21574 MyRocks: Range Locking: RCU-based cache for the root node (Open)
  - MDEV-21186 Benchmark range locking - nov-dec 2019 (Closed)
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initialize the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. h3. 2.5 Releasing all locks. See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Transaction will also need to remove them from the list of locks it is holding (note: this is actually not essential because that list is only used for the purpose of releasing the locks when transaction is finished) h3. 2.5 Releasing all of transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. 
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
Description |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Transaction will also need to remove them from the list of locks it is holding (note: this is actually not essential because that list is only used for the purpose of releasing the locks when transaction is finished) h3. 2.5 Releasing all of transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. 
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Transaction will also need to remove them from the list of locks it is holding (note: this is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished) h3. 2.5 Releasing all of transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. 
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
Description |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Transaction will also need to remove them from the list of locks it is holding (note: this is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished) h3. 2.5 Releasing all of transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. 
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests (Yes. that function will not do that. A * Remove the released lock from the list of locks it is holding (which is in {{db_txn_struct_i(txn)->lt_map}}). This is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished. h3. 2.5 Releasing all of the transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. 
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
Description |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests (Yes. that function will not do that. A * Remove the released lock from the list of locks it is holding (which is in {{db_txn_struct_i(txn)->lt_map}}). This is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished. h3. 2.5 Releasing all of the transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. 
toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests. {{release_locks}} doesn't wake them up. There is {{toku::lock_request::retry_all_lock_requests}} call which retries all pending requests (Which doesn't seem to be efficient... but maybe it is ok?) * Remove the released lock from the list of locks it is holding (which is in {{db_txn_struct_i(txn)->lt_map}}). This is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished. h3. 
2.5 Releasing all of the transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
Description |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests. {{release_locks}} doesn't wake them up. There is {{toku::lock_request::retry_all_lock_requests}} call which retries all pending requests (Which doesn't seem to be efficient... but maybe it is ok?) * Remove the released lock from the list of locks it is holding (which is in {{db_txn_struct_i(txn)->lt_map}}). This is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished. h3. 
2.5 Releasing all of the transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
h3. 2.5 Releasing all of the transaction's locks

See {{PerconaFT/src/ydb_txn.cc}}:

{code:cpp}
static void toku_txn_release_locks(DB_TXN *txn) {
    // Prevent access to the locktree map while releasing.
    // It is possible for lock escalation to attempt to
    // modify this data structure while the txn commits.
    toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex);

    size_t num_ranges = db_txn_struct_i(txn)->lt_map.size();
    for (size_t i = 0; i < num_ranges; i++) {
        txn_lt_key_ranges ranges;
        int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges);
        invariant_zero(r);
        toku_db_release_lt_key_ranges(txn, &ranges);
    }

    toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex);
}
{code}
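For the MyRocks port, the same two steps will be needed whenever a range is released before commit: release the locks, then wake the waiters. Below is a hypothetical sketch of such a helper, not code that exists in {{ydb_row_lock.cc}}; the {{range_buffer}} create/append/destroy calls and the exact signature of {{toku::lock_request::retry_all_lock_requests}} are assumptions to verify against {{PerconaFT/locktree/}}.

{code:cpp}
// Hypothetical helper (would need locktree/locktree.h and
// locktree/lock_request.h): release one transaction's locks on
// [left_key, right_key] in a single locktree, then retry the waiters.
static void release_one_range(toku::locktree *lt, TXNID txnid,
                              const DBT *left_key, const DBT *right_key) {
    toku::range_buffer ranges;
    ranges.create();                     // assumed range_buffer API
    ranges.append(left_key, right_key);

    // Drop the locks. As noted in 2.4, this alone wakes up nobody.
    lt->release_locks(txnid, &ranges);
    ranges.destroy();

    // Retry pending lock requests so waiters on the released range can
    // proceed. retry_all_lock_requests retries all of them, which may be
    // inefficient but is what the Lock Tree offers.
    toku::lock_request::retry_all_lock_requests(lt);

    // A complete implementation would also remove the range from
    // db_txn_struct_i(txn)->lt_map under txn_mutex (omitted here).
}
{code}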
Attachment | screenshot-1.png
Attachment | screenshot-2.png
Status | Open | Confirmed
Status | Confirmed | In Progress
Link | This issue includes MDEV-17873
Attachment | screenshot-3.png
Link | This issue includes MDEV-19451
Link | This issue includes MDEV-19986
Status | In Progress | Stalled
Link | This issue includes MDEV-21314
Link | This issue relates to MDEV-21574
Description |
(The upstream task is: https://github.com/facebook/mysql-5.6/issues/800 )
Notes about how to use PerconaFT: 1. Data structures 1.1 A Global Lock Tree Manager object 1.2 A separate Lock Tree for each table 1.3 Each transaction keeps a track of ranges it is holding locks 2. Functions 2.1 Initializing the Lock Manager 2.2 Create Lock Tree for a table 2.3 Getting a lock 2.4 Releasing a lock. 2.5 Releasing all of the transaction's locks h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} - this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests. {{release_locks}} doesn't wake them up. There is {{toku::lock_request::retry_all_lock_requests}} call which retries all pending requests (Which doesn't seem to be efficient... but maybe it is ok?) 
* Remove the released lock from the list of locks it is holding (which is in {{db_txn_struct_i(txn)->lt_map}}). This is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished. h3. 2.5 Releasing all of the transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
(The upstream task is: https://github.com/facebook/mysql\-5.6/issues/800 ) Notes about how to use PerconaFT: 1. Data structures 1.1 A Global Lock Tree Manager object 1.2 A separate Lock Tree for each table 1.3 Each transaction keeps a track of ranges it is holding locks 2. Functions 2.1 Initializing the Lock Manager 2.2 Create Lock Tree for a table 2.3 Getting a lock 2.4 Releasing a lock. 2.5 Releasing all of the transaction's locks h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb\-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} \- this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table\_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release\_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests. {{release_locks}} doesn't wake them up. There is {{toku::lock_request::retry_all_lock_requests}} call which retries all pending requests (Which doesn't seem to be efficient... but maybe it is ok?) 
* Remove the released lock from the list of locks it is holding (which is in {{db_txn_struct_i(txn)->lt_map}}). This is actually not essential because that list is only used for the purpose of releasing the locks when the transaction is finished. h3. 2.5 Releasing all of the transaction's locks See {{PerconaFT/src/ydb_txn.cc}}: {code:cpp} static void toku_txn_release_locks(DB_TXN *txn) { // Prevent access to the locktree map while releasing. // It is possible for lock escalation to attempt to // modify this data structure while the txn commits. toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex); size_t num_ranges = db_txn_struct_i(txn)->lt_map.size(); for (size_t i = 0; i < num_ranges; i++) { txn_lt_key_ranges ranges; int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges); invariant_zero(r); toku_db_release_lt_key_ranges(txn, &ranges); } toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex); } {code} |
Summary | Gap Lock support in MyRocks | Gap Lock support in MyRocks |
Description |
(The upstream task is: https://github.com/facebook/mysql\-5.6/issues/800 ) Notes about how to use PerconaFT: 1. Data structures 1.1 A Global Lock Tree Manager object 1.2 A separate Lock Tree for each table 1.3 Each transaction keeps a track of ranges it is holding locks 2. Functions 2.1 Initializing the Lock Manager 2.2 Create Lock Tree for a table 2.3 Getting a lock 2.4 Releasing a lock. 2.5 Releasing all of the transaction's locks h2. 1. Data structures h3. 1.1 A Global Lock Tree Manager object There needs to be a global {{locktree_manager}}. See PerconaFT/src/ydb\-internal.h, {noformat} struct __toku_db_env_internal { toku::locktree_manager ltm; {noformat} h3. 1.2 A separate Lock Tree for each table TokuDB uses a separate Lock Tree for each table {{db->i->lt}}. h3.1.3 Each transaction keeps a track of ranges it is holding locks Each transaction has a list of ranges that it is holding locks on. It is referred to like so {code:cpp} db_txn_struct_i(txn)->lt_map {code} and is stored in this structure, together with a mutex to protect it: {code:cpp} struct __toku_db_txn_internal { // maps a locktree to a buffer of key ranges that are locked. // it is protected by the txn_mutex, so hot indexing and a client // thread can concurrently operate on this txn. toku::omt<txn_lt_key_ranges> lt_map; toku_mutex_t txn_mutex; {code} The mutex is there, because the list may be modified by the lock escalation process (which may be invoked from a different thread). (See toku_txn_destroy for how to free this) h2. 2. Functions Most functions that are mentioned here are from {{storage/tokudb/PerconaFT/src/}}, {{ydb_txn.cc}}, {{ydb_row_lock.cc}} \- this is TokuDB's layer above the Lock Tree. h3. 2.1 Initializing the Lock Manager TODO h3. 2.2 Create Lock Tree for a table TokuDB does it when it opens a table's table\_share. It is done like so: {code:cpp} db->i->lt = db->dbenv->i->ltm.get_lt(db->i->dict_id, toku_ft_get_comparator(db->i->ft_handle), &on_create_extra); {code} Then, one needs to release it: {code:cpp} db->dbenv->i->ltm.release_lt(db->i->lt); {code} after the last release\_lt call, the Lock Tree will be deleted (it is guaranteed to be empty). (TODO: this is easy to arrange if Toku locks are invoked from MyRocks level. But if they are invoked from RocksDB, this is harder as RocksDB doesn't have any concept of tables or indexes. For start, we can pretend all keys are in one table) h3. 2.3 Getting a lock This function has an example: {code:cpp} // Get a range lock. // Return when the range lock is acquired or the default lock tree timeout has expired. int toku_db_get_range_lock(DB *db, DB_TXN *txn, const DBT *left_key, const DBT *right_key, toku::lock_request::type lock_type) { {code} It is also possible to start an asynchronous lock request and then wait for it (see {{toku_db_start_range_lock}}, {{toku_db_wait_range_lock}}). We don't have a use for this it seems (?) Point locks are obtained by passing the same key as left_key and right_key. h3. 2.4 Releasing a lock. TokuDB doesn't seem to release individual locks (all locks are held until transaction either commits or is aborted). LockTree has a function to release locks from a specified range: {code:cpp} locktree::release_locks(TXNID txnid, const range_buffer *ranges) {code} Besides calling that, one will need to * wake up all waiting lock requests. {{release_locks}} doesn't wake them up. There is {{toku::lock_request::retry_all_lock_requests}} call which retries all pending requests (Which doesn't seem to be efficient... but maybe it is ok?) 
h3. 2.5 Releasing all of the transaction's locks

See {{PerconaFT/src/ydb_txn.cc}}:

{code:cpp}
static void toku_txn_release_locks(DB_TXN *txn) {
    // Prevent access to the locktree map while releasing.
    // It is possible for lock escalation to attempt to
    // modify this data structure while the txn commits.
    toku_mutex_lock(&db_txn_struct_i(txn)->txn_mutex);

    size_t num_ranges = db_txn_struct_i(txn)->lt_map.size();
    for (size_t i = 0; i < num_ranges; i++) {
        txn_lt_key_ranges ranges;
        int r = db_txn_struct_i(txn)->lt_map.fetch(i, &ranges);
        invariant_zero(r);
        toku_db_release_lt_key_ranges(txn, &ranges);
    }

    toku_mutex_unlock(&db_txn_struct_i(txn)->txn_mutex);
}
{code}
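To tie this back to section 2.4: each {{toku_db_release_lt_key_ranges()}} call (in {{ydb_row_lock.cc}}) has to release the stored ranges, wake up waiters, and drop the locktree reference. Below is a minimal sketch of that per-locktree step. It assumes {{txn_lt_key_ranges}} holds a {{locktree *lt}} and a {{range_buffer *buffer}}; the function name and the {{ltm}}/{{txnid}} parameters are illustrative, not the verbatim source:

{code:cpp}
// Sketch only, not the verbatim ydb_row_lock.cc code.
static void release_lt_key_ranges_sketch(toku::locktree_manager *ltm,
                                         TXNID txnid,
                                         txn_lt_key_ranges *ranges) {
    toku::locktree *lt = ranges->lt;

    // Release every range this txn stored for this locktree.
    lt->release_locks(txnid, ranges->buffer);

    // The buffer is no longer needed once the locks are released.
    ranges->buffer->destroy();
    toku_free(ranges->buffer);

    // release_locks() does not wake up waiters (section 2.4), so
    // retry all pending lock requests on this locktree.
    toku::lock_request::retry_all_lock_requests(lt);

    // Drop the reference taken by get_lt(); after the last release
    // the (empty) locktree is deleted (section 2.2).
    ltm->release_lt(lt);
}
{code}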
TokuDB's lock tree lives in storage/tokudb/PerconaFT/locktree. It locks key ranges; the backtrace below shows a write range lock being acquired, from the SQL layer all the way down to the lock tree.
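For comparison with the range lock in the trace, a point lock is obtained by passing the same key as both endpoints of {{toku_db_get_range_lock()}} (frame #6 below, signature quoted in section 2.3). A minimal sketch, assuming PerconaFT's {{toku_fill_dbt()}} DBT helper; the wrapper name is made up:

{code:cpp}
// Sketch: take a write point lock through the ydb layer, mirroring
// frames #5/#6 of the backtrace below. lock_point_write() is an
// illustrative name; toku_fill_dbt() is PerconaFT's DBT helper.
static int lock_point_write(DB *db, DB_TXN *txn,
                            const void *key, size_t keylen) {
    DBT k;
    toku_fill_dbt(&k, key, keylen);
    // A point lock is a range lock whose endpoints coincide.
    return toku_db_get_range_lock(db, txn, &k, &k,
                                  toku::lock_request::WRITE);
}
{code}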
(gdb) where
#0 toku::locktree::sto_try_acquire (this=0x7fff700342c0, prepared_lkr=0x7fffd4b6c390, txnid=11, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/locktree/locktree.cc:291
#1 0x00007ffff4d6eaa1 in toku::locktree::acquire_lock (this=0x7fff700342c0, is_write_request=true, txnid=11, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770, conflicts=0x7fffd4b6c4c0) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/locktree/locktree.cc:380
#2 0x00007ffff4d6eb73 in toku::locktree::try_acquire_lock (this=0x7fff700342c0, is_write_request=true, txnid=11, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770, conflicts=0x7fffd4b6c4c0, big_txn=false) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/locktree/locktree.cc:399
#3 0x00007ffff4d6ec1a in toku::locktree::acquire_write_lock (this=0x7fff700342c0, txnid=11, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770, conflicts=0x7fffd4b6c4c0, big_txn=false) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/locktree/locktree.cc:412
#4 0x00007ffff4d72dc4 in toku::lock_request::start (this=0x7fffd4b6c5b0) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/locktree/lock_request.cc:165
#5 0x00007ffff4d603aa in toku_db_start_range_lock (db=0x7fff700271e0, txn=0x7fff70060600, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770, lock_type=toku::lock_request::WRITE, request=0x7fffd4b6c5b0) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/src/ydb_row_lock.cc:211
#6 0x00007ffff4d6022e in toku_db_get_range_lock (db=0x7fff700271e0, txn=0x7fff70060600, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770, lock_type=toku::lock_request::WRITE) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/src/ydb_row_lock.cc:182
#7 0x00007ffff4e31643 in c_set_bounds (dbc=0x7fff7005f000, left_key=0x7fffd4b6c750, right_key=0x7fffd4b6c770, pre_acquire=true, out_of_range_error=-30989) at /home/psergey/dev-git/10.3-r2/storage/tokudb/PerconaFT/src/ydb_cursor.cc:714
#8 0x00007ffff4d195df in ha_tokudb::prelock_range (this=0x7fff7002cdf8, start_key=0x7fff7002cee0, end_key=0x7fff7002cf00) at /home/psergey/dev-git/10.3-r2/storage/tokudb/ha_tokudb.cc:5978
#9 0x00007ffff4d19a31 in ha_tokudb::read_range_first (this=0x7fff7002cdf8, start_key=0x7fff7002cee0, end_key=0x7fff7002cf00, eq_range=false, sorted=true) at /home/psergey/dev-git/10.3-r2/storage/tokudb/ha_tokudb.cc:6025
#10 0x0000555555d761dc in handler::multi_range_read_next (this=0x7fff7002cdf8, range_info=0x7fffd4b6c950) at /home/psergey/dev-git/10.3-r2/sql/multi_range_read.cc:291
#11 0x0000555555d763be in Mrr_simple_index_reader::get_next (this=0x7fff7002d3d8, range_info=0x7fffd4b6c950) at /home/psergey/dev-git/10.3-r2/sql/multi_range_read.cc:323
#12 0x0000555555d7901a in DsMrr_impl::dsmrr_next (this=0x7fff7002d298, range_info=0x7fffd4b6c950) at /home/psergey/dev-git/10.3-r2/sql/multi_range_read.cc:1399
#13 0x00007ffff4d30b56 in ha_tokudb::multi_range_read_next (this=0x7fff7002cdf8, range_info=0x7fffd4b6c950) at /home/psergey/dev-git/10.3-r2/storage/tokudb/ha_tokudb_mrr_maria.cc:42
#14 0x000055555601f3a2 in QUICK_RANGE_SELECT::get_next (this=0x7fff7002f800) at /home/psergey/dev-git/10.3-r2/sql/opt_range.cc:11454
#15 0x0000555556030e64 in rr_quick (info=0x7fff700162b0) at /home/psergey/dev-git/10.3-r2/sql/records.cc:366
#16 0x0000555555b3b03b in READ_RECORD::read_record (this=0x7fff700162b0) at /home/psergey/dev-git/10.3-r2/sql/records.h:73
#17 0x0000555555c3e4a4 in join_init_read_record (tab=0x7fff700161e8) at /home/psergey/dev-git/10.3-r2/sql/sql_select.cc:20227
#18 0x0000555555c3c256 in sub_select (join=0x7fff700145b0, join_tab=0x7fff700161e8, end_of_records=false) at /home/psergey/dev-git/10.3-r2/sql/sql_select.cc:19301
#19 0x0000555555c3b821 in do_select (join=0x7fff700145b0, procedure=0x0) at /home/psergey/dev-git/10.3-r2/sql/sql_select.cc:18844