MariaDB Server — MDEV-6817

query cache bug (returning inconsistent/old result set) with Aria table parallel inserts, row format = page

Details

    • Affects Version/s: 10.0.20

    Description

      Hi guys, I'm running a stress test with the query cache and Aria.
      The table:

      CREATE TABLE `errors` (
      	`id1` INT(11) NOT NULL DEFAULT '0',
      	`id2` INT(11) NOT NULL DEFAULT '0',
      	`err_id` INT(11) NOT NULL DEFAULT '0',
      	`datahora` DECIMAL(22,7) NOT NULL DEFAULT '0.0000000',
      	`erro` LONGBLOB NOT NULL,
      	PRIMARY KEY (`id1`, `id2`, `err_id`)
      )
      COLLATE='latin1_swedish_ci'
      ENGINE=Aria;
      (using the PAGE row format!)

      I start up a PHP script:

      <?php
      $pid=pcntl_fork();
      if($pid==0) $pid=pcntl_fork();
      if($pid==0) $pid=pcntl_fork();
      // three forks: the parent plus three children all run the loop

      // connect to database
      $conn=mysql_connect(/* connection parameters */);
      mysql_select_db(/* select database */);

      // and start an infinite loop
      $contador=0; // counts consecutive failed inserts (duplicate key)
      while(1){
          $SQL= "SELECT MAX(err_id) FROM errors ".
                "WHERE id1=1 AND id2=2";
          if($contador%1000==0 && $contador>1000) echo "\n$SQL\n";
          $tmp_tbl=mysql_query($SQL);
          $err_id=mysql_result($tmp_tbl,0,0);
          if($contador%1000==0 && $contador>1000) echo "\n$err_id\n";
          mysql_free_result($tmp_tbl); // was mysql_free(), which doesn't exist
          $err_id++; if($err_id<=0) $err_id=1;
          $SQL= "INSERT INTO errors (".
                    "id1,id2,err_id,datahora,erro".
                ") VALUES (".
                    "1,2,$err_id,'".microtime(true)."','test '".
                ")";
          $ok=mysql_query($SQL); /* autocommit = 1 */

          if($ok){
              $contador=0;
              continue;
          }
          $contador++;
          usleep(500); // wait a bit before retrying...
      }

      aria variables:

      Variable_name Value
      aria_block_size 8192
      aria_checkpoint_interval 30
      aria_checkpoint_log_activity 1048576
      aria_force_start_after_recovery_failures 0
      aria_group_commit none
      aria_group_commit_interval 0
      aria_log_file_size 1073741824
      aria_log_purge_type immediate
      aria_max_sort_file_size 9223372036853727232
      aria_page_checksum ON
      aria_pagecache_age_threshold 300
      aria_pagecache_buffer_size 134217728
      aria_pagecache_division_limit 100
      aria_recover NORMAL
      aria_repair_threads 1
      aria_sort_buffer_size 268434432
      aria_stats_method nulls_unequal
      aria_sync_log_dir NEWFILE
      aria_used_for_temp_tables ON

      autocommit=1
      concurrent_insert isn't the problem: I tested with 0, 1, and 2, and all give the same error.

      –
      What I think is happening:
      In the query cache code there is logic deciding whether or not a statement's result may be stored in the cache. While an INSERT is running and the Aria table is being invalidated, a concurrent SELECT executes:

      time ->
      -----INSERT-------
      ---SELECCCCCCT--- (cached after invalidation - shouldn't be cached)
      ---------------------SELECT ---- (got from query cache, wrong query cache value)

      The value of the newly inserted err_id isn't visible at the right time. In other words, when I connect with an external tool and set autocommit=0 (to get a different query cache flag), or use SQL_NO_CACHE, I see an err_id different from the err_id the PHP script reports.

      In other words, either:
      1) the table wasn't invalidated correctly, or
      2) the query was cached in a state that the Aria engine should have reported as uncacheable (concurrent insert? page flush? some buffer or cache being used while updating the table?)

      Could we recheck whether the Aria engine's invalidation is OK or not?
      I will test again with MyISAM, and after that InnoDB.
      —
      no problem with InnoDB
      no problem with MyISAM
      no problem with Aria using ROW_FORMAT=FIXED/DYNAMIC, only with Aria PAGE

      Attachments

        Issue Links

          Activity

            rspadim roberto spadim created issue -
            rspadim roberto spadim made changes -

            testing with MyISAM (the real MyISAM engine, not the Aria MyISAM-like mode) and FIXED/DYNAMIC row formats:
            no problem
            rspadim roberto spadim added a comment -

            Example of output that the PHP script shouldn't report:

            --------
            SELECT MAX(err_id) FROM errors WHERE id1=1 AND id2=2

            296

            SELECT MAX(err_id) FROM errors WHERE id1=1 AND id2=2

            296

            SELECT MAX(err_id) FROM errors WHERE id1=1 AND id2=2

            296

            SELECT MAX(err_id) FROM errors WHERE id1=1 AND id2=2

            296
            —

            I log in with another HeidiSQL session and execute the SELECT; what I get is: 297

            I found that this bug only occurs with Aria and ROW_FORMAT=PAGE; maybe we have a problem in the Aria commit or the Aria page log file. Using Aria with ROW_FORMAT=FIXED or DYNAMIC, no problem occurs.

            rspadim roberto spadim added a comment -
            rspadim roberto spadim made changes -
            Priority: Major [ 3 ] → Blocker [ 1 ]

            I don't know how to solve this bug, but I think it's something in this function:

            my_bool ha_maria::register_query_cache_table(THD *thd, char *table_name,
                                                         uint table_name_len,
                                                         qc_engine_callback *engine_callback,
                                                         ulonglong *engine_data)
            ...
              /* Return whether is ok to try to cache current statement. */
              DBUG_RETURN(!(file->s->non_transactional_concurrent_insert &&
                            current_data_file_length != actual_data_file_length));

            —

            rspadim roberto spadim added a comment -
            rspadim roberto spadim added a comment - - edited

            ha_myisam.cc has one more relevant comment:

            /*
              This query execution might have started after the query cache was flushed
              by a concurrent INSERT. In this case, don't cache this statement as the
              data file length difference might not be visible yet if the tables haven't
              been unlocked by the concurrent insert thread.
            */
            if (file->state->uncacheable)
              DBUG_RETURN(FALSE);

            /* It is ok to try to cache current statement. */
            DBUG_RETURN(TRUE);

            Maybe we should check file->state->uncacheable too, or not?

            rspadim roberto spadim added a comment - edited

            From MyISAM's mi_locking.c:

            if (concurrent_insert)
              info->s->state.state.uncacheable= TRUE;

            In Aria I didn't find an equivalent; maybe that's because there are other considerations, but I don't understand that source yet.

            rspadim roberto spadim added a comment -
            rspadim roberto spadim added a comment - - edited

            From ma_state.c:

            info->append_insert_at_end= concurrent_insert;

            or

            if (share->have_versioning)

            Maybe we should use one of these in the query cache function of ha_maria.cc?
            I'm also seeing something about a write cache; maybe that's the source of the problem. Must check that too.

            rspadim roberto spadim added a comment - edited
            rspadim roberto spadim made changes -
            Affects Version/s 10.1.1 [ 16801 ]

            Same problem with MariaDB 10.1.1 (GitHub).

            rspadim roberto spadim added a comment -

            It only occurs with the PAGE row format.

            rspadim roberto spadim added a comment -
            rspadim roberto spadim made changes -
            Labels Aria querycache querycache_result
            rspadim roberto spadim made changes -
            Description hi guys, i'm running a stress test with query cache and aria...
            the table:
            {code}
            CREATE TABLE `errors` (
            `id1` INT(11) NOT NULL DEFAULT '0',
            `id2` INT(11) NOT NULL DEFAULT '0',
            `err_id` INT(11) NOT NULL DEFAULT '0',
            `datahora` DECIMAL(22,7) NOT NULL DEFAULT '0.0000000',
            `erro` LONGBLOB NOT NULL,
            PRIMARY KEY (`id1`, `id2`, `err_id`)
            )
            COLLATE='latin1_swedish_ci'
            ENGINE=Aria;
            (using page table type!)
            {code}


            i startup a php script:


            {code}
            <?php
            $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            // three different process


            // connect to database
            $conn=mysql_connect(/* connection parameters */);
            mysql_select_db(/*select database*/);

            // and start a infinite loop
            $contador=0;
            while(1){
            $SQL= "SELECT MAX(err_id) FROM errors ".
            "WHERE id1=1 AND id2=2";
            IF($contador%1000==0 && $contador>1000) echo "\n$SQL\n";
            $tmp_tbl=mysql_query($SQL);
            $err_id=mysql_result($tmp_tbl,0,0);
            IF($contador%1000==0 && $contador>1000) echo "\n$err_id\n";
            mysql_free($tmp_tbl);
            $err_id++;if($err_id<=0) $err_id=1;
            $SQL= "INSERT INTO errors (".
            "id1,id2,err_id,datahora,erro".
            ") VALUES (".
            "1,2,$err_id,'".microtime(1)."','test '".
            ")";
            $ok=mysql_query($SQL);

            if($ok){
            $contador=0;
            continue;
            }
            $contador++;
            usleep(500); // wait a bit...
            }
            {code}



            aria variables:
            || *Variable_name* || *Value* ||
            || aria_block_size || 8192 ||
            || aria_checkpoint_interval || 30 ||
            || aria_checkpoint_log_activity || 1048576 ||
            || aria_force_start_after_recovery_failures || 0 ||
            || aria_group_commit || none ||
            || aria_group_commit_interval || 0 ||
            || aria_log_file_size || 1073741824 ||
            || aria_log_purge_type || immediate ||
            || aria_max_sort_file_size || 9223372036853727232 ||
            || aria_page_checksum || ON ||
            || aria_pagecache_age_threshold || 300 ||
            || aria_pagecache_buffer_size || 134217728 ||
            || aria_pagecache_division_limit || 100 ||
            || aria_recover || NORMAL ||
            || aria_repair_threads || 1 ||
            || aria_sort_buffer_size || 268434432 ||
            || aria_stats_method || nulls_unequal ||
            || aria_sync_log_dir || NEWFILE ||
            || aria_used_for_temp_tables || ON ||

            autocommit=1
            concurrent_insert isn't a problem, i tested with 0,1,2 this give the same error

            --
            what i think is happen?
            at query cache code there's a consideration about when or when not include the query at query cache, while a insert and another aria table is invalidated and a "select" query execute, the value of the new insert field err_id isn't updated at the right time, in other words when i connect with a externall tool and set autocommit=0 (to get a different query cache flag) i see that the err_id is different from err_id of php script

            in other words the table wasn't invalidated correctly

            could we recheck if the aria engine invalidation is ok or not?
            i will test again with myisam, and after innodb
            hi guys, i'm running a stress test with query cache and aria...
            the table:
            {code}
            CREATE TABLE `errors` (
            `id1` INT(11) NOT NULL DEFAULT '0',
            `id2` INT(11) NOT NULL DEFAULT '0',
            `err_id` INT(11) NOT NULL DEFAULT '0',
            `datahora` DECIMAL(22,7) NOT NULL DEFAULT '0.0000000',
            `erro` LONGBLOB NOT NULL,
            PRIMARY KEY (`id1`, `id2`, `err_id`)
            )
            COLLATE='latin1_swedish_ci'
            ENGINE=Aria;
            (using page table type!)
            {code}


            i startup a php script:


            {code}
            <?php
            $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            // three different process


            // connect to database
            $conn=mysql_connect(/* connection parameters */);
            mysql_select_db(/*select database*/);

            // and start a infinite loop
            $contador=0;
            while(1){
            $SQL= "SELECT MAX(err_id) FROM errors ".
            "WHERE id1=1 AND id2=2";
            IF($contador%1000==0 && $contador>1000) echo "\n$SQL\n";
            $tmp_tbl=mysql_query($SQL);
            $err_id=mysql_result($tmp_tbl,0,0);
            IF($contador%1000==0 && $contador>1000) echo "\n$err_id\n";
            mysql_free($tmp_tbl);
            $err_id++;if($err_id<=0) $err_id=1;
            $SQL= "INSERT INTO errors (".
            "id1,id2,err_id,datahora,erro".
            ") VALUES (".
            "1,2,$err_id,'".microtime(1)."','test '".
            ")";
            $ok=mysql_query($SQL); /* autocommit = 1 */

            if($ok){
            $contador=0;
            continue;
            }
            $contador++;
            usleep(500); // wait a bit...
            }
            {code}



            aria variables:
            || *Variable_name* || *Value* ||
            || aria_block_size || 8192 ||
            || aria_checkpoint_interval || 30 ||
            || aria_checkpoint_log_activity || 1048576 ||
            || aria_force_start_after_recovery_failures || 0 ||
            || aria_group_commit || none ||
            || aria_group_commit_interval || 0 ||
            || aria_log_file_size || 1073741824 ||
            || aria_log_purge_type || immediate ||
            || aria_max_sort_file_size || 9223372036853727232 ||
            || aria_page_checksum || ON ||
            || aria_pagecache_age_threshold || 300 ||
            || aria_pagecache_buffer_size || 134217728 ||
            || aria_pagecache_division_limit || 100 ||
            || aria_recover || NORMAL ||
            || aria_repair_threads || 1 ||
            || aria_sort_buffer_size || 268434432 ||
            || aria_stats_method || nulls_unequal ||
            || aria_sync_log_dir || NEWFILE ||
            || aria_used_for_temp_tables || ON ||

            autocommit=1
            concurrent_insert isn't a problem, i tested with 0,1,2 this give the same error

            --
            what i think is happen?
            at query cache code there's a consideration about when or when not include the query at query cache, while a insert and another aria table is invalidated and a "select" query execute, the value of the new insert field err_id isn't updated at the right time, in other words when i connect with a externall tool and set autocommit=0 (to get a different query cache flag) i see that the err_id is different from err_id of php script

            in other words the table wasn't invalidated correctly

            could we recheck if the aria engine invalidation is ok or not?
            i will test again with myisam, and after innodb
            rspadim roberto spadim made changes -
            Description hi guys, i'm running a stress test with query cache and aria...
            the table:
            {code}
            CREATE TABLE `errors` (
            `id1` INT(11) NOT NULL DEFAULT '0',
            `id2` INT(11) NOT NULL DEFAULT '0',
            `err_id` INT(11) NOT NULL DEFAULT '0',
            `datahora` DECIMAL(22,7) NOT NULL DEFAULT '0.0000000',
            `erro` LONGBLOB NOT NULL,
            PRIMARY KEY (`id1`, `id2`, `err_id`)
            )
            COLLATE='latin1_swedish_ci'
            ENGINE=Aria;
            (using page table type!)
            {code}


            i startup a php script:


            {code}
            <?php
            $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            // three different process


            // connect to database
            $conn=mysql_connect(/* connection parameters */);
            mysql_select_db(/*select database*/);

            // and start a infinite loop
            $contador=0;
            while(1){
            $SQL= "SELECT MAX(err_id) FROM errors ".
            "WHERE id1=1 AND id2=2";
            IF($contador%1000==0 && $contador>1000) echo "\n$SQL\n";
            $tmp_tbl=mysql_query($SQL);
            $err_id=mysql_result($tmp_tbl,0,0);
            IF($contador%1000==0 && $contador>1000) echo "\n$err_id\n";
            mysql_free($tmp_tbl);
            $err_id++;if($err_id<=0) $err_id=1;
            $SQL= "INSERT INTO errors (".
            "id1,id2,err_id,datahora,erro".
            ") VALUES (".
            "1,2,$err_id,'".microtime(1)."','test '".
            ")";
            $ok=mysql_query($SQL); /* autocommit = 1 */

            if($ok){
            $contador=0;
            continue;
            }
            $contador++;
            usleep(500); // wait a bit...
            }
            {code}



            aria variables:
            || *Variable_name* || *Value* ||
            || aria_block_size || 8192 ||
            || aria_checkpoint_interval || 30 ||
            || aria_checkpoint_log_activity || 1048576 ||
            || aria_force_start_after_recovery_failures || 0 ||
            || aria_group_commit || none ||
            || aria_group_commit_interval || 0 ||
            || aria_log_file_size || 1073741824 ||
            || aria_log_purge_type || immediate ||
            || aria_max_sort_file_size || 9223372036853727232 ||
            || aria_page_checksum || ON ||
            || aria_pagecache_age_threshold || 300 ||
            || aria_pagecache_buffer_size || 134217728 ||
            || aria_pagecache_division_limit || 100 ||
            || aria_recover || NORMAL ||
            || aria_repair_threads || 1 ||
            || aria_sort_buffer_size || 268434432 ||
            || aria_stats_method || nulls_unequal ||
            || aria_sync_log_dir || NEWFILE ||
            || aria_used_for_temp_tables || ON ||

            autocommit=1
            concurrent_insert isn't a problem, i tested with 0,1,2 this give the same error

            --
            what i think is happen?
            at query cache code there's a consideration about when or when not include the query at query cache, while a insert and another aria table is invalidated and a "select" query execute, the value of the new insert field err_id isn't updated at the right time, in other words when i connect with a externall tool and set autocommit=0 (to get a different query cache flag) i see that the err_id is different from err_id of php script

            in other words the table wasn't invalidated correctly

            could we recheck if the aria engine invalidation is ok or not?
            i will test again with myisam, and after innodb
            hi guys, i'm running a stress test with query cache and aria...
            the table:
            {code}
            CREATE TABLE `errors` (
            `id1` INT(11) NOT NULL DEFAULT '0',
            `id2` INT(11) NOT NULL DEFAULT '0',
            `err_id` INT(11) NOT NULL DEFAULT '0',
            `datahora` DECIMAL(22,7) NOT NULL DEFAULT '0.0000000',
            `erro` LONGBLOB NOT NULL,
            PRIMARY KEY (`id1`, `id2`, `err_id`)
            )
            COLLATE='latin1_swedish_ci'
            ENGINE=Aria;
            (using page table type!)
            {code}


            i startup a php script:


            {code}
            <?php
            $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            // three different process


            // connect to database
            $conn=mysql_connect(/* connection parameters */);
            mysql_select_db(/*select database*/);

            // and start a infinite loop
            $contador=0;
            while(1){
            $SQL= "SELECT MAX(err_id) FROM errors ".
            "WHERE id1=1 AND id2=2";
            IF($contador%1000==0 && $contador>1000) echo "\n$SQL\n";
            $tmp_tbl=mysql_query($SQL);
            $err_id=mysql_result($tmp_tbl,0,0);
            IF($contador%1000==0 && $contador>1000) echo "\n$err_id\n";
            mysql_free($tmp_tbl);
            $err_id++;if($err_id<=0) $err_id=1;
            $SQL= "INSERT INTO errors (".
            "id1,id2,err_id,datahora,erro".
            ") VALUES (".
            "1,2,$err_id,'".microtime(1)."','test '".
            ")";
            $ok=mysql_query($SQL); /* autocommit = 1 */

            if($ok){
            $contador=0;
            continue;
            }
            $contador++;
            usleep(500); // wait a bit...
            }
            {code}



            aria variables:
            || *Variable_name* || *Value* ||
            || aria_block_size || 8192 ||
            || aria_checkpoint_interval || 30 ||
            || aria_checkpoint_log_activity || 1048576 ||
            || aria_force_start_after_recovery_failures || 0 ||
            || aria_group_commit || none ||
            || aria_group_commit_interval || 0 ||
            || aria_log_file_size || 1073741824 ||
            || aria_log_purge_type || immediate ||
            || aria_max_sort_file_size || 9223372036853727232 ||
            || aria_page_checksum || ON ||
            || aria_pagecache_age_threshold || 300 ||
            || aria_pagecache_buffer_size || 134217728 ||
            || aria_pagecache_division_limit || 100 ||
            || aria_recover || NORMAL ||
            || aria_repair_threads || 1 ||
            || aria_sort_buffer_size || 268434432 ||
            || aria_stats_method || nulls_unequal ||
            || aria_sync_log_dir || NEWFILE ||
            || aria_used_for_temp_tables || ON ||

            autocommit=1
            concurrent_insert isn't a problem, i tested with 0,1,2 this give the same error

--
what I think is happening:
in the query cache code there is logic deciding whether and when a query result should be stored in the cache. While an INSERT is running and the aria table is invalidated, a SELECT executes:
{code}
time ->
-----INSERT-------
---SELECCCCCCT--- (cached after invalidation - shouldn't be cached)
---------------------SELECT ---- (served from the query cache, stale value)
{code}
the value of the newly inserted err_id isn't visible at the right time; in other words, when I connect with an external tool and set autocommit=0 (to get a different query cache flag, or use SQL_NO_CACHE) I see that err_id is different from the err_id seen by the PHP script

in other words, the table wasn't invalidated correctly

could we recheck whether the aria engine invalidation is ok or not?
i will test again with myisam, and after that innodb
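Not server code, but the race in the diagram above can be replayed with a toy Python model (class and function names here are invented for illustration): if the cache accepts a result that was computed *before* the table's last invalidation, every later SELECT is served the stale value, which matches the behaviour described.

```python
class ToyQueryCache:
    """Minimal stand-in for the server's query cache, keyed by SQL text."""
    def __init__(self, check_invalidation):
        self.entries = {}             # sql text -> cached result
        self.last_invalidated = {}    # table name -> logical time of last invalidation
        self.check_invalidation = check_invalidation

    def invalidate(self, table, now):
        """An INSERT on `table` drops cached results and records when it happened."""
        self.last_invalidated[table] = now
        self.entries.clear()

    def store(self, sql, table, result, computed_at):
        """Try to cache a result; a correct cache must refuse results
        computed before the table's last invalidation."""
        if self.check_invalidation and computed_at <= self.last_invalidated.get(table, -1):
            return                    # refuse: result predates the invalidation
        self.entries[sql] = result

    def lookup(self, sql):
        return self.entries.get(sql)


def run(check_invalidation):
    """Replay the timeline from the diagram with a logical clock."""
    cache = ToyQueryCache(check_invalidation)
    table_max = 1                     # MAX(err_id) currently in the table
    sql = "SELECT MAX(err_id) FROM errors WHERE id1=1 AND id2=2"
    # t=0: a SELECT starts and computes its result from the table
    result, computed_at = table_max, 0
    # t=1: a concurrent INSERT bumps MAX(err_id) and invalidates the table
    table_max = 2
    cache.invalidate("errors", now=1)
    # t=2: the slow SELECT finishes and tries to cache its (now stale) result
    cache.store(sql, "errors", result, computed_at)
    # t=3: a later SELECT either hits the cache or reads the table
    hit = cache.lookup(sql)
    return hit if hit is not None else table_max


print(run(check_invalidation=False))  # stale: 1
print(run(check_invalidation=True))   # fresh: 2
```

With the check disabled the stale MAX() stays pinned in the cache, so every client keeps computing the same duplicate id, exactly like the stuck INSERTs in the report.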
rspadim roberto spadim made changes - Description
rspadim roberto spadim made changes - Description, appending:
---
no problem with innodb
no problem with myisam
no problem with aria using row_format fixed/dynamic, only with aria-page
            elenst Elena Stepanova made changes -
            Priority Blocker [ 1 ] → Major [ 3 ]
            serg Sergei Golubchik made changes -
            Assignee Michael Widenius [ monty ]
            serg Sergei Golubchik made changes -
            Fix Version/s 10.0 [ 16000 ]

rspadim roberto spadim added a comment -

please don't promote this to stable before this bug is fixed; it's not stable with the query cache yet

https://github.com/MariaDB/server/commit/761ed3d
rspadim roberto spadim made changes - Description, appending:
in other words
1) the table wasn't invalidated correctly
2) the query was cached in a state that the aria engine should have reported as not cacheable (concurrent insert? page flush? some buffer or cache in use while the table is updated?)

rspadim roberto spadim added a comment -

any news?
            rspadim roberto spadim made changes -
            Priority Major [ 3 ] → Critical [ 2 ]
            rspadim roberto spadim made changes -
            rspadim roberto spadim made changes -
            Summary query cache bug with aria table → query cache bug with aria table, row format = page
            jb-boin Jean Weisbuch added a comment -

If I am not mistaken about what your test script does, it doesn't LOCK TABLES (and doesn't use a transaction, as it's Aria), and it forks 3 processes that run the same loop in parallel. That means that just after one process reads the MAX() from the query cache, another one could have INSERTed a new value into the table before the first process tries to INSERT its own value. The first process then hits a DUPLICATE KEY error that is not due to a wrong value in the Qcache, but simply because another process already inserted that row.

If you still think there is a problem, you should LOCK TABLES, read the value both from the Qcache and directly from the table (using SELECT SQL_CACHE and SELECT SQL_NO_CACHE), and compare the two values.
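A sketch of the verification Jean describes, to be run in a separate session while the PHP script hammers the table (table and column names are taken from the report; this needs a live server, so it is illustrative only):

```sql
LOCK TABLES errors READ;
-- value as seen through the query cache
SELECT SQL_CACHE MAX(err_id) FROM errors WHERE id1=1 AND id2=2;
-- value read directly from the table
SELECT SQL_NO_CACHE MAX(err_id) FROM errors WHERE id1=1 AND id2=2;
UNLOCK TABLES;
-- if the two values differ while the table is locked, the cached entry is stale
```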

            rspadim roberto spadim added a comment - - edited

Hi Jean, I think you are right and wrong:
1) yes, the script forks, then executes SELECT MAX() and an INSERT with the value of MAX()+1
2) yes, the scripts execute in parallel, so one can try to insert a value that already exists (duplicate key)
3) but all scripts received the wrong value, and no script could insert anymore (I receive DUPLICATE KEY always with MAX()+1, instead of some scripts getting DUPLICATE and only one inserting the right value)
4) in other words, SELECT MAX() is returning an old value (from the query cache) instead of the right value from the engine <-- BUG
5) using the query cache information plugin, I can see the SELECT MAX() saved after the INSERT query (in other words, the engine either didn't invalidate the table, or invalidated it and then stored the 'old' query result after the invalidation instead of blocking it from being cached)

No, I don't want LOCK TABLES as a workaround; I want the engine/query cache bug fixed. innodb, myisam and tokudb run ok.
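The query cache information plugin mentioned in point 5 exposes cached statements through an information_schema table; a minimal way to watch what gets cached, assuming the plugin is installed on the MariaDB server (verify the column names against your server version):

```sql
-- one-time setup (the plugin ships with MariaDB)
INSTALL SONAME 'query_cache_info';
-- list statements currently held in the query cache
SELECT statement_schema, statement_text
FROM information_schema.query_cache_info;
```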

            jb-boin Jean Weisbuch added a comment - - edited

I have slightly modified your script so it is easier to understand and to verify the issue (only 2 concurrent processes, the table is recreated at each script execution with only one column, and both the value from the cache and the value from the table are checked and, if different, printed).
The same issue happens with the TokuDB engine when using my script.
Using LOCK TABLES (READ/WRITE) before the SELECT seems to invalidate the cache, so the problem cannot be reproduced while manually locking the table.

            Steps to reproduce

            Run this script on a test database.
The script does:

• (re)Creates the test table
• Uses two distinct processes/connections to the database to SELECT, via the query cache, the MAX() value of the primary key, then INSERT a new row with the previously fetched id+1
• If the INSERT is successful, it outputs the inserted id in the form <$id>
• If the INSERT fails (usually on a DUPLICATE KEY), a counter is incremented. If at least 2 errors happen in a row (the error counter is reset at each successful INSERT), the MAX() value of the primary key is also retrieved directly from the table, and if the MAX() values from the Qcache and the table differ, a message is printed showing both values

            <?php
            // forking a second process that will run at the same time and do the same thing
            $pid = pcntl_fork();
             
            // connect to database
            $conn = mysql_connect("localhost", "username", "password");
            mysql_select_db("testdatabase");
             
            mysql_query("SET SESSION query_cache_type=ON;");
             
            if($pid == 0) {
            	// we are on the main process, no need for the fork to execute this
            	mysql_query("DROP TABLE errors6817;");
            	mysql_query("CREATE TABLE errors6817 (
            			id SMALLINT(5) NOT NULL DEFAULT '0',
            			PRIMARY KEY (id)
            		)
            		ENGINE=Aria ROW_FORMAT=PAGE;");
            	mysql_query("INSERT INTO errors6817 (id) VALUES (1);");
            } else {
            	// the fork will wait for 0.5sec before starting the loops to be sure the main process had time to recreate the table properly
            	usleep(500000);
            }
             
            $duplicateInsertCount = 0;
             
            while($duplicateInsertCount < 100) {
            	$tmp_tbl = mysql_query("SELECT SQL_CACHE MAX(id) FROM errors6817;");
            	if(!$tmp_tbl) echo(mysql_error()."\n");
            	$id = mysql_result($tmp_tbl, 0, 0);
            	mysql_free_result($tmp_tbl);
            	if($duplicateInsertCount > 1) {
            		// after 2 or more consecutive duplicate-key INSERT errors, compare the MAX(id) from the query cache with the "real" one
            		$tmp_tbl = mysql_query("SELECT SQL_NO_CACHE MAX(id) FROM errors6817;");
            		if(!$tmp_tbl) echo(mysql_error()."\n");
            		$id_no_cache = mysql_result($tmp_tbl, 0, 0);
            		mysql_free_result($tmp_tbl);
            		if($id != $id_no_cache) echo "[duplicate $duplicateInsertCount on ".($pid==0 ? "main " : "child")."]\tqcache:\t${id}\ttable:\t${id_no_cache}\n";
            	}
            	$id++;
            	// we try to INSERT a new row with the id from the query cache + 1, which should always work
            	if(mysql_query("INSERT INTO errors6817 (id) VALUES (".$id.");")) {
            		// the INSERT didn't fail: the error counter is reset
            		$duplicateInsertCount = 0;
            		echo "<".$id.">";
            	} else {
            		// the INSERT query returned a duplicate PK error ==> the counter is incremented
            		$duplicateInsertCount++;
            	}
            	// random wait at the end of the loop to decrease the risk of a "collision" between the time one process reads MAX(id) and the other INSERTs the same value+1
            	usleep(mt_rand(50000, 100000));
            }

            Expected result

            As I said before, there could be a collision if one process INSERTs just after the other reads MAX(id), but in that case it shouldn't happen at every loop, as there is a significant and random sleep between loops; the output should be similar to what you get if you add "$id = $id_no_cache" at the end of the "if($duplicateInsertCount > 1)" block:

            $ php testmdev6817.php
            <2><3><4><5><6><7><8><9><10><11><12><13><14><15><16><17><18><19><20><21><22><23><24><25><26><27><28><29><30><31><32><33><34><35><36><37><38><39><40><41><42><43><44><45><46><47><48><49><50><51><52><53><54><55><56><57><58><59><60>
            [duplicate 2 on main ]	qcache:	59	table:	60<61>
            [duplicate 2 on child]	qcache:	60	table:	61<62><63><64><65><66><67><68><69><70><71><72><73><74><75><76><77><78><79><80><81><82><83><84><85><86><87><88><89><90>
            [duplicate 2 on child]	qcache:	89	table:	90<91><92><93><94><95><96><97><98><99><100><101><102><103><104><105><106><107>
            [duplicate 2 on child]	qcache:	106	table:	107<108><109><110><111><112><113><114><115><116><117><118><119><120>
            [duplicate 2 on main ]	qcache:	119	table:	120<121><122><123><124><125><126><127>
            [duplicate 2 on main ]	qcache:	126	table:	127<128><129><130>
            [duplicate 2 on main ]	qcache:	129	table:	130<131>
            [...]

            (the duplicate counter doesn't keep incrementing, as successful INSERT(s) happened between two collisions and MAX(id) keeps incrementing)

            Actual result

            $ php testmdev6817.php
            <2><3><4><5><6><7><8><9><10>
            [duplicate 2 on main ]	qcache:	9	table:	10
            [duplicate 2 on child]	qcache:	9	table:	10
            [duplicate 3 on main ]	qcache:	9	table:	10
            [duplicate 3 on child]	qcache:	9	table:	10
            [duplicate 4 on main ]	qcache:	9	table:	10
            [duplicate 4 on child]	qcache:	9	table:	10
            [duplicate 5 on main ]	qcache:	9	table:	10
            [duplicate 5 on child]	qcache:	9	table:	10
            [...]

            (the duplicate INSERT counter keeps incrementing, meaning that all INSERTs are failing and the MAX(id) value returned from the Qcache stays the same until the script is killed)
            The MAX(id) value found in the Qcache stays the same on both the main process and the child, while the one returned without using the Qcache is correct.

            –
            ps: while trying to reproduce this bug, I hit another one (MDEV-7314) by using "INSERT INTO errors6817 (id) SELECT MAX(id)+1 FROM errors6817;" as the INSERT query, but it doesn't seem to be related.


            rspadim roberto spadim added a comment -

            nice, that's the problem: two concurrent inserts, one should fail and the other should insert without problems; on the second try the same should happen, but both fail. your script runs nicely and reports the same problem here, thanks
            rspadim roberto spadim added a comment - - edited

            i think they are not related, but one problem may be the cause of the other

            this MDEV-6817 is a problem of Aria + query cache
            your MDEV-7314 is a problem of Aria storage (i don't know yet if it's a problem of concurrent insert or a problem of the aria log; must test)
            i will mark it as blocked-by, because maybe solving MDEV-7314 solves the query cache problem

            rspadim roberto spadim made changes -
            rspadim roberto spadim made changes -
            Summary query cache bug with aria table, row format = page query cache bug (resturning inconsistent/old result set) with aria table parallel inserts, row format = page
            monty Michael Widenius made changes -
            Status Open [ 1 ] In Progress [ 3 ]

            I took a look at this and found that there is probably a small time slot between when the insert is executed and when the commit is done; if a select query is executed in that window, its result can be cached and not properly invalidated.

            Suggested fixes:

            • Move the query cache invalidation to after 'close_thread_tables' in sql_parse.cc.
              This would properly invalidate MyISAM, Aria and other tables in the query cache.
              (MyISAM tables are flushed as part of unlock tables in close_thread_tables, while other storage engines should be invalidated as part of commit.)
            • Add a separate query cache invalidation in the above place.

            It's better that Sanja takes a look at this as he is the one that wrote the query cache code.
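            The proposed ordering can be sketched as a toy simulation (illustrative Python, not server code; the dict standing in for the query cache, the integer standing in for the committed table state, and all names are assumptions for this sketch): invalidating only after the commit means any entry cached from the pre-commit state cannot survive the write.

```python
import threading

# Toy model of the suggested fix: invalidate the query cache entry only
# AFTER the commit (i.e. after close_thread_tables), so a result cached
# from the pre-commit table state is wiped by the write.

query_cache = {}       # stands in for the server query cache
table_max = 1          # committed MAX(id) visible to readers
commit_lock = threading.Lock()

def cached_select():
    if "SELECT MAX(id)" in query_cache:
        return query_cache["SELECT MAX(id)"]
    with commit_lock:
        value = table_max
    query_cache["SELECT MAX(id)"] = value
    return value

def insert_with_late_invalidation(new_id):
    global table_max
    with commit_lock:
        table_max = new_id                    # commit first
    query_cache.pop("SELECT MAX(id)", None)   # then invalidate: any entry
                                              # cached before this point is wiped

before = cached_select()              # caches MAX(id) == 1
insert_with_late_invalidation(2)      # commit, then invalidate
after = cached_select()               # cache miss, re-caches the fresh value
print(before, after)                  # 1 2
```

            Whatever a concurrent SELECT managed to cache before the invalidation is discarded, so the next SELECT always re-reads the committed state.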

            monty Michael Widenius made changes -
            Assignee Michael Widenius [ monty ] Oleksandr Byelkin [ sanja ]
            rspadim roberto spadim added a comment - - edited

            i prefer the sql_parse.cc approach, since it works with any engine, for example OQGRAPH and other engines that internally use other tables. For example:

            "select ... from one_engine_that_select_table_2 ..."
              this will cache the query and record 2 tables in the query cache: "one_engine_that_select_table_2" and the underlying "table_2"
            "update table_2"
              this should invalidate all cached queries that use "table_2" (including the last query)

            but i don't know if we get bigger lock contention (or maybe not? must check)


            rspadim roberto spadim added a comment -

            =] hi guys, any news here?

            sanja Oleksandr Byelkin added a comment -

            You will see the progress here as soon as it happens.
            sanja Oleksandr Byelkin made changes -
            Status In Progress [ 3 ] Stalled [ 10000 ]

            sanja Oleksandr Byelkin added a comment -

            I hope my guess is right that mysql_free is mysql_free_result, but the script looks like it just hangs...

            I'll try the other version in the comments...

            sanja Oleksandr Byelkin added a comment -

            Ah, no, something else: it issued a lot of errors... in any case the test suite is not usable.

            sanja Oleksandr Byelkin added a comment -

            The second test works, but produces the expected results...
            sanja Oleksandr Byelkin made changes -
            Assignee Oleksandr Byelkin [ sanja ] Elena Stepanova [ elenst ]

            elenst Elena Stepanova added a comment -

            Modified and cleaned up test case

            <?php
             
            $pid=pcntl_fork();
             
            // connect to database
            $conn=mysql_connect('127.0.0.1:3306','root');
             
            // create an Aria table
            mysql_query("DROP DATABASE IF EXISTS mdev6817");
            mysql_query("CREATE DATABASE mdev6817");
            mysql_select_db('mdev6817');
            mysql_query("CREATE TABLE IF NOT EXISTS t1 (id INT NOT NULL DEFAULT '0', PRIMARY KEY (id)) ENGINE=Aria");
             
            // enable query cache
            mysql_query("SET GLOBAL query_cache_type = 1");
            mysql_query("SET GLOBAL query_cache_size = 1024*1024*256");
             
            // this is just for further logging
            $con_id=mysql_result(mysql_query("SELECT CONNECTION_ID()"),0,0);
             
            // $counter will count sequential SELECT MAX() queries executed after the last successful INSERT 
            $counter=0;
             
            // and start an infinite loop
            while(1){
            		$SQL=	"SELECT MAX(id) FROM t1";
            		$tmp_tbl=mysql_query($SQL);
            		$id=mysql_result($tmp_tbl,0,0);
            		mysql_free_result($tmp_tbl);
            		// 1000 is for throttling
            		if($counter%1000==0 && $counter>1000) {
            			echo "Connection $con_id: Executed $counter 'SELECT MAX(id)...' queries, result is $id\n";
            		}
            		$id++; 
            		// probably to avoid overflow? It was like that in the initial test, keeping as is
            		if($id<=0) $id=1;
             
            		$SQL=	"INSERT INTO t1 VALUES ($id)";
            		$ok=mysql_query($SQL); /* autocommit = 1 */
            		
            		if($ok){
            			echo "Connection $con_id: Successfully inserted id=$id\n";
            			$counter=0;
            			continue;
            		}
            		$counter++;
            		usleep(500); // wait a bit...
            }

            Expected result if there is no bug

            Connection 7: Successfully inserted id=1
            Connection 8: Successfully inserted id=2
            Connection 8: Successfully inserted id=3
            ...
            Connection 7: Successfully inserted id=124
            Connection 7: Successfully inserted id=125
            ...

            Connection numbers can vary.

            Example of the actual result, observed on all MariaDB versions

            Connection 12: Successfully inserted id=1
            Connection 11: Successfully inserted id=2
            Connection 11: Successfully inserted id=3
            Connection 12: Executed 2000 'SELECT MAX(id)...' queries, result is 2
            Connection 11: Executed 2000 'SELECT MAX(id)...' queries, result is 2
            Connection 12: Executed 3000 'SELECT MAX(id)...' queries, result is 2
            Connection 11: Executed 3000 'SELECT MAX(id)...' queries, result is 2
            Connection 12: Executed 4000 'SELECT MAX(id)...' queries, result is 2
            Connection 11: Executed 4000 'SELECT MAX(id)...' queries, result is 2
            Connection 11: Executed 5000 'SELECT MAX(id)...' queries, result is 2
            Connection 12: Executed 5000 'SELECT MAX(id)...' queries, result is 2
            ...

            Connection numbers and the result value can vary; the important part is that the result remains the same.
            The point is that at some moment SELECT starts returning a smaller value than what currently exists in the table, no matter how many times you re-run the query.
            Only observed with the query cache enabled.

            elenst Elena Stepanova made changes -
            Fix Version/s 10.1 [ 16100 ]
            Affects Version/s 5.3.12 [ 12000 ]
            Affects Version/s 10.1 [ 16100 ]
            Affects Version/s 10.0 [ 16000 ]
            Affects Version/s 5.5 [ 15800 ]
            Affects Version/s 10.0.12 [ 15201 ]
            Affects Version/s 10.1.1 [ 16801 ]
            Assignee Elena Stepanova [ elenst ] Oleksandr Byelkin [ sanja ]
            Labels Aria querycache querycache_result Aria querycache querycache_result verified
            ratzpo Rasmus Johansson (Inactive) made changes -
            Workflow MariaDB v2 [ 55060 ] MariaDB v3 [ 62683 ]
            serg Sergei Golubchik made changes -
            Sprint 10.0.20 [ 5 ]
            sanja Oleksandr Byelkin made changes -
            Status Stalled [ 10000 ] In Progress [ 3 ]

            sanja Oleksandr Byelkin added a comment -

            Repeatable with --debug.

            According to the log, the INSERT of the highest id invalidated the value in the QC before the internal transaction inside the Aria engine ended.
            At the same time, another thread was getting the MAX() value; since the above transaction had not ended yet, it got the old MAX() value, which was then written to the QC, where it stayed...

            I also checked that the non-cached query returns the correct value (obtained before the cached one).
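            The sequence above can be sketched as a toy simulation (illustrative Python, not MariaDB code; the dict standing in for the query cache, the integer standing in for the committed table state, the sleep-based timing and all names are assumptions for this sketch):

```python
import threading, time

# Toy model of the reported race: the writer invalidates the query cache
# entry BEFORE its engine-internal "commit" makes the new row visible, so
# a concurrent SELECT that lands in that window re-caches the stale
# MAX(id) after the invalidation, and the stale entry then sticks.

query_cache = {}       # stands in for the server query cache
table_max = 1          # committed MAX(id) visible to readers
commit_lock = threading.Lock()

def insert_with_early_invalidation(new_id):
    global table_max
    query_cache.pop("SELECT MAX(id)", None)   # invalidated too early
    time.sleep(0.05)                          # window before the commit
    with commit_lock:
        table_max = new_id                    # commit: row becomes visible

def cached_select():
    if "SELECT MAX(id)" in query_cache:
        return query_cache["SELECT MAX(id)"]
    with commit_lock:
        value = table_max                     # may read the pre-commit value
    query_cache["SELECT MAX(id)"] = value     # ...and cache it
    return value

writer = threading.Thread(target=insert_with_early_invalidation, args=(2,))
writer.start()
time.sleep(0.01)            # land inside the invalidate -> commit window
stale = cached_select()     # caches MAX(id) == 1
writer.join()
print(stale, cached_select(), table_max)   # 1 1 2: the cache is stuck stale
```

            Every later cached_select() keeps returning 1 even though the committed value is 2, matching the repeating "result is 2" lines in the actual output above.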


            A proof-of-concept patch that, for INSERT, invalidates the tables locked for write when closing thread tables works (it is a problem of INSERT only).

            Now the question is how to do it in a 'nice' way without a big performance impact.

            serg Sergei Golubchik made changes -
            Sprint 10.0.20 [ 5 ] 10.0.20, 10.1.6-1 [ 5, 6 ]
            serg Sergei Golubchik made changes -
            Rank Ranked higher
            serg Sergei Golubchik made changes -
            Sprint 10.0.20, 10.1.6-1 [ 5, 6 ] 10.0.20 [ 5 ]
            sanja Oleksandr Byelkin made changes -
            Status In Progress [ 3 ] Stalled [ 10000 ]
            sanja Oleksandr Byelkin made changes -
            Status Stalled [ 10000 ] In Progress [ 3 ]
            sanja Oleksandr Byelkin made changes -
            Status In Progress [ 3 ] Stalled [ 10000 ]
            elenst Elena Stepanova made changes -
            Priority Critical [ 2 ] Major [ 3 ]
            sanja Oleksandr Byelkin made changes -
            Status Stalled [ 10000 ] In Progress [ 3 ]
            sanja Oleksandr Byelkin made changes -
            Status In Progress [ 3 ] Stalled [ 10000 ]
            sanja Oleksandr Byelkin made changes -
            Status Stalled [ 10000 ] In Progress [ 3 ]

            sanja Oleksandr Byelkin added a comment -

            Is it possible to make this test suite PHP 7 compatible, or to rewrite it as a mysqltest? (The mysql_* interface used in the test suite does not exist in PHP 7; sorry, I am not a specialist in PHP at all.)
            sanja Oleksandr Byelkin made changes -
            Status In Progress [ 3 ] Stalled [ 10000 ]
            rspadim roberto spadim added a comment - - edited

            using mysqli extension

            <?php
            $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            if($pid==0) $pid=pcntl_fork();
            // three different process
             
             
            // connect to database
            $conn=mysqli_connect(/* connection parameters */);
            mysqli_select_db($conn, /*select database*/);
             
            // and start an infinite loop
            $contador=0;
            while(1){
            		$SQL=	"SELECT MAX(err_id) AS e FROM errors WHERE id1=1 AND id2=2";
            if($contador%1000==0 && $contador>1000) echo "\n$SQL\n";
            		$tmp_tbl=mysqli_query($conn,$SQL);
            		$err_id=mysqli_fetch_assoc($tmp_tbl);
            		$err_id=$err_id['e'];
            		mysqli_free_result($tmp_tbl);
            if($contador%1000==0 && $contador>1000) echo "\n$err_id\n";
            		$err_id++;if($err_id<=0) $err_id=1;
            		$SQL=	"INSERT INTO errors (".
            				"id1,id2,err_id,datahora,erro".
            			") VALUES (".
            				"1,2,$err_id,'".microtime(1)."','test '".
            			")";
            		$ok=mysqli_query($conn,$SQL); /* autocommit = 1 */
            		
            		if($ok){
            			$contador=0;
            			continue;
            		}
            		$contador++;
            		usleep(500); // wait a bit...
            }
            


            sanja Oleksandr Byelkin added a comment -

            Thanks!
            sanja Oleksandr Byelkin made changes -
            Status Stalled [ 10000 ] In Progress [ 3 ]
            elenst Elena Stepanova made changes -
            Fix Version/s 10.0 [ 16000 ]

            elenst Elena Stepanova added a comment -

            To reproduce on higher versions, SESSION query_cache_type also needs to be set (on the first execution of the test; subsequent executions will pick up the global value that it sets).

            <?php
             
            $pid=pcntl_fork();
             
            // connect to database
            $conn=mysql_connect('127.0.0.1:3306','root');
             
            // create an Aria table
            mysql_query("DROP DATABASE IF EXISTS mdev6817");
            mysql_query("CREATE DATABASE mdev6817");
            mysql_select_db('mdev6817');
            mysql_query("CREATE TABLE IF NOT EXISTS t1 (id INT NOT NULL DEFAULT '0', PRIMARY KEY (id)) ENGINE=Aria");
             
            // enable query cache
            mysql_query("SET GLOBAL query_cache_type = 1");
            mysql_query("SET SESSION query_cache_type = 1");
            mysql_query("SET GLOBAL query_cache_size = 1024*1024*256");
             
            // this is just for further logging
            $con_id=mysql_result(mysql_query("SELECT CONNECTION_ID()"),0,0);
             
            // $counter will count sequential SELECT MAX() queries executed after the last successful INSERT 
            $counter=0;
             
            // and start an infinite loop
            while(1){
            		$SQL=	"SELECT MAX(id) FROM t1";
            		$tmp_tbl=mysql_query($SQL);
            		$id=mysql_result($tmp_tbl,0,0);
            		mysql_free_result($tmp_tbl);
            		// 1000 is for throttling
            		if($counter%1000==0 && $counter>1000) {
            			echo "Connection $con_id: Executed $counter 'SELECT MAX(id)...' queries, result is $id\n";
            		}
            		$id++; 
            		// probably to avoid overflow? It was like that in the initial test, keeping as is
            		if($id<=0) $id=1;
             
            		$SQL=	"INSERT INTO t1 VALUES ($id)";
            		$ok=mysql_query($SQL); /* autocommit = 1 */
            		
            		if($ok){
            			echo "Connection $con_id: Successfully inserted id=$id\n";
            			$counter=0;
            			continue;
            		}
            		$counter++;
            		usleep(500); // wait a bit...
            }
            

            elenst Elena Stepanova added a comment - To reproduce on higher versions, SESSION query_cache_type also needs to be set (upon the first execution of the test; subsequent executions will pick up the global value which it sets).
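            As a sketch of the setup Elena describes (statement forms are standard MariaDB SET syntax; the point is that sessions opened before the GLOBAL change keep their old query_cache_type, so it must also be set at session scope):

            ```sql
            SET GLOBAL query_cache_type = ON;
            SET GLOBAL query_cache_size = 256*1024*1024;
            -- sessions opened before the GLOBAL change keep their old value:
            SET SESSION query_cache_type = ON;
            ```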
            elenst Elena Stepanova made changes -
            Fix Version/s 10.2 [ 14601 ]
            Fix Version/s 10.3 [ 22126 ]
            Fix Version/s 10.4 [ 22408 ]
            Affects Version/s 10.2 [ 14601 ]
            Affects Version/s 10.3 [ 22126 ]
            Affects Version/s 10.4 [ 22408 ]
            elenst Elena Stepanova made changes -
            Labels Aria querycache querycache_result verified
            svoj Sergey Vojtovich made changes -
            Assignee Oleksandr Byelkin [ sanja ] Michael Widenius [ monty ]
            monty Michael Widenius added a comment - - edited

            Updated to php7:

            <?php

            $pid=pcntl_fork();

            // connect to database (unlike mysql_connect, mysqli takes the port as a separate argument)
            $conn=mysqli_connect('127.0.0.1','root','','',3307);

            // create an Aria table
            mysqli_query($conn,"DROP DATABASE IF EXISTS mdev6817");
            mysqli_query($conn,"CREATE DATABASE mdev6817");
            mysqli_select_db($conn,'mdev6817');
            mysqli_query($conn,"CREATE TABLE IF NOT EXISTS t1 (id INT NOT NULL DEFAULT '0', PRIMARY KEY (id)) ENGINE=Aria transactional=1");

            // enable query cache
            mysqli_query($conn,"SET GLOBAL query_cache_type = 1");
            mysqli_query($conn,"SET SESSION query_cache_type = 1");
            mysqli_query($conn,"SET GLOBAL query_cache_size = 1024*1024*256");

            // mysqli has no mysqli_result(); emulate the old mysql_result()
            function mysqli_result($search, $row, $field) {
            	$result="";
            	$i=0;
            	while($results=mysqli_fetch_array($search)){
            		if($i==$row){ $result=$results[$field]; }
            		$i++;
            	}
            	return $result;
            }

            // this is just for further logging
            $con_id=mysqli_result(mysqli_query($conn,"SELECT CONNECTION_ID()"),0,0);

            // $counter will count sequential SELECT MAX() queries executed after the last successful INSERT
            $counter=0;

            // and start an infinite loop
            while(1){
            	$SQL= "SELECT MAX(id) FROM t1";
            	$tmp_tbl=mysqli_query($conn,$SQL);
            	$id=mysqli_result($tmp_tbl,0,0);
            	mysqli_free_result($tmp_tbl);
            	// log only every 1000th iteration, for throttling
            	if($counter%1000==0 && $counter>1000){
            		echo "Connection $con_id: Executed $counter 'SELECT MAX(id)...' queries, result is $id\n";
            	}
            	$id++;
            	// probably to avoid overflow? It was like that in the initial test, keeping as is
            	if($id<=0) $id=1;

            	$SQL= "INSERT INTO t1 VALUES ($id)";
            	$ok=mysqli_query($conn,$SQL); /* autocommit = 1 */

            	if($ok){
            		echo "Connection $con_id: Successfully inserted id=$id\n";
            		$counter=0;
            		continue;
            	}
            	$counter++;
            	usleep(500); // wait a bit...
            }


            The problem is that for transactional Aria tables (ROW_FORMAT=PAGE and TRANSACTIONAL=1), maria_lock_database() flushed neither the state nor the query cache.
            Not flushing the state is correct for transactional tables, as that is done by checkpoint, but not flushing the query cache was wrong and could leave stale
            SELECT results in the cache.

            Fixed by flushing the query cache as part of commit, if the table has changed.

            The test cases above and roberto's analysis of the problem helped a lot in finding and fixing this case.

            monty Michael Widenius added a comment -
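            The commit-time invalidation Monty describes can be sketched with a toy model (an illustration only, not MariaDB code; all class and method names here are made up). With invalidation on commit disabled, mimicking transactional Aria before the fix, a cached SELECT MAX(id) keeps returning the old value after an INSERT:

            ```python
            class ToyQueryCache:
                """Toy query cache keyed by SQL text (the real cache is per table)."""
                def __init__(self):
                    self.cache = {}

                def get(self, sql):
                    return self.cache.get(sql)

                def put(self, sql, result):
                    self.cache[sql] = result

                def invalidate(self):
                    self.cache.clear()


            class ToyAriaTable:
                def __init__(self, qcache, invalidate_on_commit):
                    self.rows = []
                    self.qcache = qcache
                    # False models the bug; True models the fix (flush on commit)
                    self.invalidate_on_commit = invalidate_on_commit

                def select_max(self):
                    sql = "SELECT MAX(id) FROM t1"
                    hit = self.qcache.get(sql)
                    if hit is not None:
                        return hit            # served from cache, table not read
                    result = max(self.rows, default=0)
                    self.qcache.put(sql, result)
                    return result

                def insert(self, id_):
                    self.rows.append(id_)     # autocommit: each INSERT commits
                    if self.invalidate_on_commit:
                        self.qcache.invalidate()


            buggy = ToyAriaTable(ToyQueryCache(), invalidate_on_commit=False)
            buggy.insert(1)
            buggy.select_max()                # caches result 1
            buggy.insert(2)
            print(buggy.select_max())         # still 1: stale cache entry

            fixed = ToyAriaTable(ToyQueryCache(), invalidate_on_commit=True)
            fixed.insert(1)
            fixed.select_max()
            fixed.insert(2)
            print(fixed.select_max())         # 2: cache flushed on commit
            ```

            The stale value 1 from the buggy variant is exactly the symptom the PHP scripts above detect: SELECT MAX(id) returning an id that was already inserted, so the next INSERT fails on the primary key.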
            monty Michael Widenius made changes -
            Fix Version/s 10.1 [ 16100 ]

            hi folks, very nice update, i'm glad to help MariaDB again =)

            rspadim roberto spadim added a comment -

            monty Michael Widenius added a comment - Pushed into 10.2
            monty Michael Widenius made changes -
            issue.field.resolutiondate 2019-09-04 07:11:21.0 2019-09-04 07:11:21.045
            monty Michael Widenius made changes -
            Fix Version/s 10.2.27 [ 23717 ]
            Fix Version/s 10.3.18 [ 23719 ]
            Fix Version/s 10.4.8 [ 23721 ]
            Fix Version/s 10.2 [ 14601 ]
            Fix Version/s 10.3 [ 22126 ]
            Fix Version/s 10.4 [ 22408 ]
            Resolution Fixed [ 1 ]
            Status In Progress [ 3 ] Closed [ 6 ]
            serg Sergei Golubchik made changes -
            Workflow MariaDB v3 [ 62683 ] MariaDB v4 [ 148277 ]

            People

              monty Michael Widenius
              rspadim roberto spadim
              Votes:
              1 Vote for this issue
              Watchers:
              7
