Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 10.0.15
Component/s: None
Environment: Debian Wheezy amd64
Description
Running concurrent "INSERT INTO table SELECT MAX(id)+1 FROM table" statements creates a deadlock on Aria tables using ROW_FORMAT=PAGE (which is the default for Aria).
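For reference, the same hang can be reproduced by hand from two interactive mysql sessions; this is a minimal sketch using the same table definition as the script below:

-- run once, in either session
CREATE TABLE errorsX (
    id SMALLINT(5) NOT NULL,
    PRIMARY KEY (id)
) ENGINE=Aria ROW_FORMAT=PAGE;
INSERT INTO errorsX (id) VALUES (1);

-- then run this repeatedly and concurrently from both sessions;
-- both statements eventually end up in "Waiting for table level lock"
INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX;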
Steps to reproduce
Run this PHP script, which:
- Creates an Aria table with a single int column used as the primary key
- Runs INSERTs on this table from two concurrent processes, using SELECT MAX(id)+1 on the table itself as the value for id
Here is the script:
<?php
// fork a second process that will run at the same time and do the same thing
$pid = pcntl_fork();

// the script will abort after 2 seconds of runtime if no new call to pcntl_alarm() is made
pcntl_alarm(2);

// connect to the database
$conn = mysqli_connect("localhost", "username", "password");
mysqli_select_db($conn, "database");

if ($pid == 0) {
    // child process: (re)create the table; the other process does not need to run this
    mysqli_query($conn, "DROP TABLE errorsX;");
    mysqli_query($conn, "CREATE TABLE errorsX (
        id SMALLINT(5) NOT NULL,
        PRIMARY KEY (id)
    )
    ENGINE=Aria ROW_FORMAT=PAGE;");
    mysqli_query($conn, "INSERT INTO errorsX (id) VALUES (1);");
} else {
    // parent process: wait 0.5s to be sure the child has had time to recreate the table
    usleep(500000);
}

$insertCount = 0;
while ($insertCount < 250) {
    // reset the script timer to 2 seconds
    pcntl_alarm(2);

    if (mysqli_query($conn, "INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX;")) {
        // the INSERT succeeded
        echo ".";
    } else {
        // the INSERT failed
        echo("\n".mysqli_error($conn)."\n");
    }
    $insertCount++;

    // wait 0.01s before looping
    usleep(10000);
}

die("\nThe INSERT limit has been reached which indicates that no locking issue has been hit.\n");
?>
Actual result
The alarm signal terminates the script because one INSERT loop takes more than 2 seconds to run:
$ php testconcurrentinsert.php
...............Alarm clock
The table is locked by two INSERT queries waiting for a table-level lock:
MariaDB [testdb]> SHOW FULL PROCESSLIST;
+-----+------+-----------+--------+---------+------+------------------------------+--------------------------------------------------------+----------+
| Id | User | Host | db | Command | Time | State | Info | Progress |
+-----+------+-----------+--------+---------+------+------------------------------+--------------------------------------------------------+----------+
| 356 | user | localhost | testdb | Query | 7 | Waiting for table level lock | INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX | 0.000 |
| 357 | user | localhost | testdb | Query | 7 | Waiting for table level lock | INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX | 0.000 |
+-----+------+-----------+--------+---------+------+------------------------------+--------------------------------------------------------+----------+
2 rows in set (0.00 sec)

MariaDB [testdb]> SELECT * FROM information_schema.METADATA_LOCK_INFO;
+-----------+-------------------------+-----------------+---------------------+--------------+------------+
| THREAD_ID | LOCK_MODE | LOCK_DURATION | LOCK_TYPE | TABLE_SCHEMA | TABLE_NAME |
+-----------+-------------------------+-----------------+---------------------+--------------+------------+
| 356 | MDL_INTENTION_EXCLUSIVE | MDL_STATEMENT | Global read lock | | |
| 357 | MDL_INTENTION_EXCLUSIVE | MDL_STATEMENT | Global read lock | | |
| 356 | MDL_SHARED_WRITE | MDL_TRANSACTION | Table metadata lock | testdb | errorsX |
| 357 | MDL_SHARED_WRITE | MDL_TRANSACTION | Table metadata lock | testdb | errorsX |
+-----------+-------------------------+-----------------+---------------------+--------------+------------+
4 rows in set (0.01 sec)
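The METADATA_LOCK_INFO table above is provided by the optional metadata_lock_info plugin; a minimal sketch of enabling and querying it, in case it is not already loaded:

INSTALL SONAME 'metadata_lock_info';
SELECT * FROM information_schema.METADATA_LOCK_INFO;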
These queries will never end until a KILL statement is issued or the server is restarted.
Issuing another INSERT query on the same table still works, as long as the value for id does not come from an aggregate function on the primary key (id).
The issue does not appear with ROW_FORMAT=DYNAMIC, nor with the MyISAM, InnoDB or TokuDB engines.
Having the query cache enabled or disabled has no influence on the issue.
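To illustrate the two points above, a sketch of what can be run from a third connection while the two statements are stuck (the thread ids are the ones from the SHOW FULL PROCESSLIST output; the inserted value is only an example):

-- a plain INSERT that does not use an aggregate on the primary key still succeeds
INSERT INTO errorsX (id) VALUES (1000);

-- the two hung statements only go away once they are killed explicitly
KILL 356;
KILL 357;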
Expected result
Each fork should run 250 INSERTs (a dot is printed for each successful INSERT) and end by printing the die() informational message, such as:
$ php testconcurrentinsert.php
...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................
The INSERT limit has been reached which indicates that no locking issue has been hit.
.........................................
The INSERT limit has been reached which indicates that no locking issue has been hit.
Activity
Comment: Tested ENGINE=MEMORY, it doesn't report errors; I think the issue is only related to the Aria engine.
Comment: The error persists with the Aria engine when using SET @@global.concurrent_insert = 'NEVER'; it's not a problem of concurrent insert.
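For reference, the server variable mentioned in the comment above; per that comment, the error still persists with concurrent inserts disabled:

SET @@global.concurrent_insert = 'NEVER';
SHOW GLOBAL VARIABLES LIKE 'concurrent_insert';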
Comment: Could you connect from another mysql client and check the MDL values with the METADATA_LOCK_INFO information schema plugin? https://mariadb.com/kb/en/mariadb/documentation/plugins/metadata_lock_info/ SELECT * FROM `information_schema`.`METADATA_LOCK_INFO`
Due Date: 2014-12-28
Fix Version/s: 10.0 [ 16000 ]
Assignee: Michael Widenius [ monty ]
Due Date: 2014-12-28
Comment: I don't know if this is relevant or if it's the real problem, but reading the last comment from Jean at https://mariadb.com/kb/en/mariadb/documentation/storage-engines/aria/aria-storage-formats/ : "Slower if there are multiple duplicated keys, as Aria will first write a row, then keys, and only then check for duplicates". I don't know where the read lock occurs, but is this an expected result?
Status: Open [ 1 ] → In Progress [ 3 ]
Fix Version/s: 10.0.16 [ 17900 ]
Fix Version/s: 10.0 [ 16000 ]
Resolution: Fixed [ 1 ]
Status: In Progress [ 3 ] → Closed [ 6 ]
Workflow: MariaDB v2 [ 58948 ] → MariaDB v3 [ 65601 ]
Workflow: MariaDB v3 [ 65601 ] → MariaDB v4 [ 148620 ]
Here with MariaDB 10.0.14 I executed the statement and got this error:
INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX;
/* SQL Error (1062): Duplicate entry '531' for key 'PRIMARY' */
But why should this fail? Isn't the INSERT "atomic" in this case?
With MyISAM, InnoDB and TokuDB I don't get this error.
—
Could you connect from another mysql client and check the MDL values with the METADATA_LOCK_INFO information schema plugin? https://mariadb.com/kb/en/mariadb/documentation/plugins/metadata_lock_info/
SELECT * FROM `information_schema`.`METADATA_LOCK_INFO`
In my tests I don't get deadlocks, just errors about duplicate keys.