[MDEV-7314] Concurrent "INSERT INTO table SELECT MAX(id)+1 FROM table" are hitting deadlocks on Aria tables using ROW_FORMAT=PAGE
Created: 2014-12-13  Updated: 2015-01-20  Resolved: 2015-01-20
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Storage Engine - Aria |
| Affects Version/s: | 10.0.15 |
| Fix Version/s: | 10.0.16 |
| Type: | Bug |
| Priority: | Major |
| Reporter: | Jean Weisbuch |
| Assignee: | Michael Widenius |
| Resolution: | Fixed |
| Votes: | 1 |
| Labels: | None |
| Environment: | Debian Wheezy amd64 |
| Issue Links: | |
Description

Comments
Comment by roberto spadim [2014-12-13]

Here, with MariaDB 10.0.14, I executed it and got this error:
But why should this fail? Isn't the INSERT "atomic" in this case? With MyISAM, InnoDB, and TokuDB I don't get this error.
In my tests I see no deadlocks, just errors reported about duplicate keys.
Comment by roberto spadim [2014-12-13]

I did this:
With Aria, the MySQL error appears. After that, I execute:
and run the same PHP script. Now, I run:
the script runs forever, no error, only "........"
the script runs forever, no error, only "........"
and the script reports the error.
Comment by Jean Weisbuch [2014-12-13]

I added the METADATA_LOCK_INFO table content to the description.
Comment by roberto spadim [2014-12-13]

Please report these variables too (https://mariadb.com/kb/en/mariadb/documentation/storage-engines/aria/aria-two-step-deadlock-detection/): deadlock_search_depth_long. I don't know if any other variable is relevant (maybe the aria_* variables, sql_mode, and thread_handling?).
Comment by Elena Stepanova [2014-12-14]

Do you happen to have binary logging enabled and set to statement or mixed?
Comment by Jean Weisbuch [2014-12-14]

Indeed, I have the binlog enabled in statement format.
Comment by Elena Stepanova [2014-12-14]

For the deadlock with SBR, we have the old bug.
For the duplicate keys, I don't see an issue here: since Aria allows concurrent inserts, at some moment (quite often, actually) concurrent threads are bound to select the exact same MAX(id) and attempt to insert the same new value. However, I'm no expert on Aria, so let's see how the thread on maria-discuss goes. Anyway, if it is a bug, it's a different one.
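The race Elena describes (two sessions reading the same MAX(id) before either insert lands) can be sketched outside SQL. A minimal Python model, with illustrative names and a barrier to make the overlap deterministic for the demo:

```python
import threading

# Shared "table": a set of ids standing in for the primary key column.
ids = {1, 2, 3}
duplicates = []
guard = threading.Lock()
barrier = threading.Barrier(2)   # two concurrent "sessions"

def insert_max_plus_one():
    new_id = max(ids) + 1        # SELECT MAX(id)+1 -- both threads read max = 3
    barrier.wait()               # force both reads to finish before any write
    with guard:                  # the insert alone is atomic; the read+insert pair is not
        if new_id in ids:
            duplicates.append(new_id)   # would be a duplicate-key error in the table
        else:
            ids.add(new_id)

threads = [threading.Thread(target=insert_max_plus_one) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(duplicates)  # [4]: both sessions computed MAX(id)+1 = 4
```

The barrier just makes visible what concurrent inserts allow to happen by timing: both reads complete before either write, so both sessions try to insert 4.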
Comment by Elena Stepanova [2014-12-14]

monty,
Comment by Jean Weisbuch [2014-12-14]

The duplicate key is a bug; INSERT ... SELECT should be atomic. The KB page on concurrent inserts on MyISAM states that:
Comment by roberto spadim [2014-12-14]

I don't know if this is relevant or if it's the real problem, but reading Jean's last comment... the Aria documentation has another interesting piece of information:
I don't know where the read lock occurs, but is this an expected result?
Comment by Elena Stepanova [2014-12-14]

I don't quite understand the argument about it being atomic. Obviously the transactional definition doesn't apply here, so maybe you can clarify what exactly you mean and, more importantly, how it applies to the concurrent context. Duplicate-key errors occur on an Aria table without binary logging, so the KB statement about MyISAM and the binlog does not apply here.
Comment by Jean Weisbuch [2014-12-14]

"Atomic" might not be the right word here, but such a query won't hit duplicate-key errors on other engines or on other Aria row formats. Having the query hit duplicates this easily means that an INSERT ... SELECT reading from the table itself has a non-negligible chance of inserting incorrect data into the table (e.g. if the query were inserting values into a non-PK column of a table with an auto-increment PK). It should at least be clearly documented as an engine limitation, or there should be a way to disable concurrent inserts. Anyway, it definitely doesn't have anything to do with the original bug.
Comment by Elena Stepanova [2014-12-14]

Back to the original problem: I forgot to mention that, technically, these deadlocked queries don't hang forever; they will finish with ER_LOCK_WAIT_TIMEOUT once lock_wait_timeout is exceeded. Of course, since lock_wait_timeout is 1 year by default, for all practical purposes it looks like forever. But as a workaround, you can reduce the timeout. Also, I got a deterministic test case (I will add it to
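As a sketch of the workaround described above, the timeout can be lowered per session or server-wide (the 10-second value is only illustrative):

```sql
-- lock_wait_timeout is measured in seconds; the default is 31536000 (1 year),
-- which is why a deadlocked statement appears to hang forever.
SET SESSION lock_wait_timeout = 10;
-- Or for all new connections:
SET GLOBAL lock_wait_timeout = 10;
```

Deadlocked statements then fail quickly with ER_LOCK_WAIT_TIMEOUT instead of waiting indefinitely; this does not remove the deadlock, it only bounds the wait.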
Comment by Michael Widenius [2015-01-18]

The issue was that when doing INSERT INTO t1 SELECT ... FROM t1 on the same table, we are requesting both an 'insert concurrent' lock and a 'read_no_insert' lock on the table. The reason for taking read_no_insert here is to ensure that INSERT ... SELECT is atomic when running with the binary log enabled.
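The conflict Monty describes can be modeled with a toy lock-compatibility table. This is a simplified assumption for illustration, not the server's actual thr_lock matrix or code:

```python
# Hypothetical compatibility matrix: True = the two lock types can coexist
# on the same table. read_no_insert (the SELECT side, taken so the statement
# replays safely from the binary log) refuses concurrent inserts.
COMPATIBLE = {
    ("insert_concurrent", "insert_concurrent"): True,   # inserts run together
    ("insert_concurrent", "read_no_insert"): False,     # read_no_insert blocks inserts
    ("read_no_insert", "insert_concurrent"): False,
    ("read_no_insert", "read_no_insert"): True,         # plain readers coexist
}

def conflicts(held, wanted):
    """Return the already-held locks a pending request must wait behind."""
    return [h for h in held if not COMPATIBLE[(h, wanted)]]

# Two sessions each run INSERT INTO t1 SELECT ... FROM t1: each has been
# granted its insert_concurrent lock and now waits for read_no_insert.
a_held, a_wants = ["insert_concurrent"], "read_no_insert"
b_held, b_wants = ["insert_concurrent"], "read_no_insert"

a_blocked_by_b = conflicts(b_held, a_wants)
b_blocked_by_a = conflicts(a_held, b_wants)
print(a_blocked_by_b, b_blocked_by_a)
# Both lists are non-empty: each session waits on the other, i.e. a deadlock.
```

Under this model, each session's pending read_no_insert request is incompatible with the insert_concurrent lock the other session already holds, producing the mutual wait reported in the bug.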
Comment by Michael Widenius [2015-01-20]

Fix pushed to the 10.0 tree.