  MariaDB Server / MDEV-7314

Concurrent "INSERT INTO table SELECT MAX(id)+1 FROM table" are hitting deadlocks on Aria tables using ROW_FORMAT=PAGE

Details

    • Type: Bug
    • Status: Closed (View Workflow)
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: 10.0.15
    • Fix Version/s: 10.0.16
    • Component/s: Storage Engine - Aria
    • Labels: None
    • Environment: Debian Wheezy amd64

    Description

      Running concurrent "INSERT INTO table SELECT MAX(id)+1 FROM table" statements creates a deadlock on Aria tables using ROW_FORMAT=PAGE (the default row format for Aria).

      Steps to reproduce

      Run this PHP script, which:

      • Creates an Aria table with a single integer column used as the primary key
      • Runs INSERTs on this table from two concurrent processes, using the result of SELECT MAX(id)+1 on the table itself as the value for id

      Here is the script:

      <?php
      // forking a second process that will run at the same time and do the same thing
      $pid = pcntl_fork();
      // the script will abort after 2 seconds of runtime if no new call to pcntl_alarm is done
      pcntl_alarm(2);
       
      // connect to database
      $conn = mysqli_connect("localhost", "username", "password");
      mysqli_select_db($conn, "database");
       
      if($pid == 0) {
      	// pcntl_fork() returns 0 in the child process: the child (re)creates the table, the parent skips this
      	mysqli_query($conn, "DROP TABLE errorsX;");
      	mysqli_query($conn, "CREATE TABLE errorsX (
      			id SMALLINT(5) NOT NULL,
      			PRIMARY KEY (id)
      		)
      		ENGINE=Aria ROW_FORMAT=PAGE;");
      	mysqli_query($conn, "INSERT INTO errorsX (id) VALUES (1);");
      } else {
      	// the parent waits 0.5 sec before starting to INSERT, to be sure the child has had time to recreate the table properly
      	usleep(500000);
      }
       
      $insertCount = 0;
      while($insertCount < 250) {
      	// the script timer is reset to 2 seconds
      	pcntl_alarm(2);
      	if(mysqli_query($conn, "INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX;")) {
      		// the INSERT succeeded
      		echo ".";
      	} else {
      		// the INSERT failed
      		echo("\n".mysqli_error($conn)."\n");
      	}
      	$insertCount++;
       
      	// we wait for 0.01s before looping
      	usleep(10000);
      }
       
      die("\nThe INSERT limit has been reached which indicates that no locking issue has been hit.\n");
      ?>

      Actual result

      The alarm signal terminates the script because one INSERT takes more than 2 seconds to run:

      $ php testconcurrentinsert.php
      ...............Alarm clock

      The table is locked by two INSERT queries waiting for a table-level lock:

      MariaDB [testdb]> SHOW FULL PROCESSLIST;
      +-----+------+-----------+--------+---------+------+------------------------------+--------------------------------------------------------+----------+
      | Id  | User | Host      | db     | Command | Time | State                        | Info                                                   | Progress |
      +-----+------+-----------+--------+---------+------+------------------------------+--------------------------------------------------------+----------+
      | 356 | user | localhost | testdb | Query   |    7 | Waiting for table level lock | INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX |    0.000 |
      | 357 | user | localhost | testdb | Query   |    7 | Waiting for table level lock | INSERT INTO errorsX (id) SELECT MAX(id)+1 FROM errorsX |    0.000 |
      +-----+------+-----------+--------+---------+------+------------------------------+--------------------------------------------------------+----------+
      2 rows in set (0.00 sec)
       
      MariaDB [testdb]> SELECT * FROM information_schema.METADATA_LOCK_INFO;
      +-----------+-------------------------+-----------------+---------------------+--------------+------------+
      | THREAD_ID | LOCK_MODE               | LOCK_DURATION   | LOCK_TYPE           | TABLE_SCHEMA | TABLE_NAME |
      +-----------+-------------------------+-----------------+---------------------+--------------+------------+
      |       356 | MDL_INTENTION_EXCLUSIVE | MDL_STATEMENT   | Global read lock    |              |            |
      |       357 | MDL_INTENTION_EXCLUSIVE | MDL_STATEMENT   | Global read lock    |              |            |
      |       356 | MDL_SHARED_WRITE        | MDL_TRANSACTION | Table metadata lock | testdb       | errorsX    |
      |       357 | MDL_SHARED_WRITE        | MDL_TRANSACTION | Table metadata lock | testdb       | errorsX    |
      +-----------+-------------------------+-----------------+---------------------+--------------+------------+
      4 rows in set (0.01 sec)

      These queries never end until a KILL statement is issued or the server is restarted.
      Issuing another INSERT query on the same table works as long as no aggregate function on the primary key (id) is used for the value of id.
      The issue does not appear with ROW_FORMAT=DYNAMIC, or with the MyISAM, InnoDB and TokuDB engines.
      Enabling or disabling the query cache has no influence on the issue.
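      As a side note on the observation above: splitting the self-referencing statement into a separate SELECT MAX(id)+1 followed by a plain INSERT avoids using an aggregate on the primary key inside the INSERT itself. A minimal sketch of that statement pattern (hypothetical code, not from this report; SQLite stands in for MariaDB purely to show the shape of the statements, and it does not reproduce the Aria-specific lock behaviour):

```python
import sqlite3

# Hypothetical illustration: split the self-referencing
# "INSERT ... SELECT MAX(id)+1" into two statements so the INSERT
# no longer reads from the table it writes to.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE errorsX (id INTEGER NOT NULL PRIMARY KEY)")
conn.execute("INSERT INTO errorsX (id) VALUES (1)")

for _ in range(250):
    # Step 1: compute the next id in its own SELECT statement.
    (next_id,) = conn.execute("SELECT MAX(id) + 1 FROM errorsX").fetchone()
    # Step 2: insert the computed literal; no aggregate on the
    # primary key appears inside the INSERT itself.
    conn.execute("INSERT INTO errorsX (id) VALUES (?)", (next_id,))

(max_id,) = conn.execute("SELECT MAX(id) FROM errorsX").fetchone()
print(max_id)  # 251
```

      Note that under concurrency this read-then-insert pattern is racy: two clients can compute the same next id, and one of them will then fail with a duplicate-key error. It trades the hang for a retryable error rather than being a full fix.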

      Expected result

      Each fork should run 250 INSERTs (a dot is printed for each successful INSERT) and end by printing the die() informational message, such as:

      $ php testconcurrentinsert.php
      ...........................................................................................................................................................................................................................................................................................................................................................................................................................................................................
      The INSERT limit has been reached which indicates that no locking issue has been hit.
      .........................................
      The INSERT limit has been reached which indicates that no locking issue has been hit.

      Attachments

        Issue Links

          Activity

            jb-boin Jean Weisbuch created issue -
            rspadim roberto spadim made changes -
            rspadim roberto spadim made changes -
            Comment [ Tested with ENGINE=MEMORY; it does not report errors. I think the issue is specific to the Aria engine ]
            rspadim roberto spadim made changes -
            Comment [ The error persists with the Aria engine when using
            SET @@global.concurrent_insert = 'NEVER';

            so it is not a concurrent-insert problem ]
            rspadim roberto spadim made changes -
            Comment [ Could you connect from another mysql client and check the MDL values with the METADATA_LOCK_INFO information schema plugin? https://mariadb.com/kb/en/mariadb/documentation/plugins/metadata_lock_info/

            SELECT * FROM `information_schema`.`METADATA_LOCK_INFO` ]
            jb-boin Jean Weisbuch made changes -
            Description (old and new values omitted; both duplicate the Description section above)
            elenst Elena Stepanova made changes -
            Due Date 2014-12-28
            elenst Elena Stepanova made changes -
            elenst Elena Stepanova made changes -
            Fix Version/s 10.0 [ 16000 ]
            Assignee Michael Widenius [ monty ]
            elenst Elena Stepanova made changes -
            Due Date 2014-12-28
            rspadim roberto spadim made changes -
            Comment [ I don't know if this is relevant, or if it is the real problem, reading the last comment from Jean...
            In MDEV-6817, the "concurrent" insert (or something like it) is not taken into account and the query cache 'saves' an old result set; maybe we don't take the read lock on the table when using Aria with the PAGE row format? The Aria documentation gives another interesting piece of information:

            https://mariadb.com/kb/en/mariadb/documentation/storage-engines/aria/aria-storage-formats/
            * Slower if there are multiple duplicated keys, as Aria will first write a row, then keys, and only then check for duplicates
            I don't know where the read lock occurs, but is this an expected result? ]
            monty Michael Widenius made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            monty Michael Widenius made changes -
            Fix Version/s 10.0.16 [ 17900 ]
            Fix Version/s 10.0 [ 16000 ]
            Resolution Fixed [ 1 ]
            Status In Progress [ 3 ] Closed [ 6 ]
            ratzpo Rasmus Johansson (Inactive) made changes -
            Workflow MariaDB v2 [ 58948 ] MariaDB v3 [ 65601 ]
            serg Sergei Golubchik made changes -
            Workflow MariaDB v3 [ 65601 ] MariaDB v4 [ 148620 ]

            People

              Assignee: monty Michael Widenius
              Reporter: jb-boin Jean Weisbuch
              Votes: 1
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved:

                Git Integration
