[MDEV-25036] use bulk insert optimization for multiple insert statements Created: 2021-03-02 Updated: 2023-12-20 |
|
| Status: | Open |
| Project: | MariaDB Server |
| Component/s: | Data Manipulation - Insert, Storage Engine - InnoDB |
| Fix Version/s: | 11.5 |
| Type: | Task | Priority: | Major |
| Reporter: | Sergei Golubchik | Assignee: | Unassigned |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | performance | ||
| Issue Links: |
|
| Description |
|
The problem here is that in bulk insert mode InnoDB cannot roll back individual statements; all the work done in bulk insert mode is atomic as a whole. There should either be some way to express this in SQL, or we must make sure there will be no rollbacks. Another complication is that in bulk insert mode InnoDB locks the whole table, not individual rows. A typical sql dump looks like
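The dump shape being referred to is presumably standard mysqldump/mariadb-dump output; a representative sketch (table name `t` and the values are illustrative):

```sql
-- Header emitted by mysqldump/mariadb-dump: disable checks for the whole load
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;

LOCK TABLES `t` WRITE;
-- Many multi-row INSERT statements into the same table
INSERT INTO `t` VALUES (1,'a'),(2,'b'),(3,'c');
INSERT INTO `t` VALUES (4,'d'),(5,'e'),(6,'f');
UNLOCK TABLES;

-- Footer: restore the previous session settings
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
```

The whole table's data arrives as a run of consecutive INSERTs under LOCK TABLES, which is exactly the pattern the bulk insert optimization could cover if multi-statement batches were allowed.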
It seems that with UNIQUE_CHECKS=0 and FOREIGN_KEY_CHECKS=0 there should be no rollbacks, though with CHECK constraints there still could be. And as the inserts are done under LOCK TABLES, InnoDB's fine-grained row-level locking is wasted here anyway. So it seems possible to apply this optimization to a typical dump. To make it work in the general case we'll need new syntax that means "this block of work can only be rolled back as a whole". A standard approach would be something like
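The standard construct alluded to here is, presumably, the SQL/PSM atomic compound statement (my reading; the issue text itself does not name it). A sketch:

```sql
-- SQL/PSM atomic compound statement: the block succeeds or is
-- rolled back as a single unit, never statement by statement.
BEGIN ATOMIC
  INSERT INTO t VALUES (1,'a'),(2,'b');
  INSERT INTO t VALUES (3,'c'),(4,'d');
END
```

All-or-nothing semantics for the block are exactly the guarantee bulk insert mode needs, since InnoDB cannot roll back one statement out of the batch.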
which means pretty much exactly what we want to do. But the syntax is, perhaps, too awkward? Non-standard suggestions, copied from
|
| Comments |
| Comment by Marko Mäkelä [ 2021-03-03 ] |
|
The setting unique_checks=0 only has an effect on the InnoDB change buffer. If InnoDB decides to buffer an insert into a unique secondary index (with the default unique_checks=1 this is never even attempted), then a unique key violation would go unnoticed until some later point in time, when the buffered change is actually merged and the corruption of the index might be noticed, possibly leading to a crash of the server or other funny symptoms.

My guess is that the motivation for the setting unique_checks=0 was to allow change buffering to speed up large data loads, in case the user really knows that no duplicate keys can occur. A much better speedup would be provided by the bulk insert optimization.

That said, InnoDB could theoretically interpret the combination of unique_checks=0 and foreign_key_checks=0 as enabling the bulk insert optimization. After all, the user swore that there cannot be any errors, hence we should be free to do what we like if those errors are present. |
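From the client side, the theoretical trigger described above would look roughly like this (the behaviour is hypothetical, not implemented):

```sql
-- Hypothetical: with both checks disabled, InnoDB would be permitted
-- to switch to bulk (undo-less, table-locking) insert for this load.
SET unique_checks = 0, foreign_key_checks = 0;
BEGIN;
INSERT INTO t VALUES (1,'a'),(2,'b');
INSERT INTO t VALUES (3,'c'),(4,'d');
-- Any error before COMMIT would have to roll back the whole load,
-- which the user has implicitly accepted by disabling the checks.
COMMIT;
SET unique_checks = 1, foreign_key_checks = 1;
```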
| Comment by Marko Mäkelä [ 2021-08-17 ] |
|
thiru pointed out that if |
| Comment by Roel Van de Paar [ 2021-11-04 ] |
|
When this is being implemented, please also improve the cryptic error message currently seen when executing:
Leading to:
History: see |
| Comment by Michael Widenius [ 2023-12-20 ] |
|
We want something that is trivial to use and highly compatible with old applications.
I suggest the following 'easy solution' that would allow new users and old applications to benefit
I would also like to suggest that the first auto-commit insert into a table should also use this optimization. I would expect that most users would enable innodb-optimize-insert-into-empty-table=1 because of its great benefits, in spite of |
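The option name comes from the comment above; it is a proposal, not an implemented variable, and the semantics sketched here are one possible reading:

```sql
-- Proposed (not implemented) setting from this comment: allow InnoDB
-- to use bulk insert mode for inserts into an empty table.
SET GLOBAL innodb_optimize_insert_into_empty_table = 1;

CREATE TABLE t (a INT PRIMARY KEY, b CHAR(1)) ENGINE=InnoDB;
-- t is empty, so this first auto-commit insert could run in bulk mode:
INSERT INTO t VALUES (1,'a'),(2,'b'),(3,'c');
```

The appeal of this shape is that unmodified applications and dump files would pick up the optimization with only a configuration change.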