[MDEV-30884] MariaDB 10.6.12 / 10.11.2 hangs on large parallel insert Created: 2023-03-20 Updated: 2023-06-01 Resolved: 2023-05-31
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Storage Engine - InnoDB |
| Affects Version/s: | 10.6.12 |
| Fix Version/s: | 10.6.13, 10.8.8, 10.9.6, 10.10.4, 10.11.3, 11.0.2, 11.1.1 |
| Type: | Bug | Priority: | Major |
| Reporter: | Sebastian Wittgens | Assignee: | Marko Mäkelä |
| Resolution: | Duplicate | Votes: | 1 |
| Labels: | None |
| Environment: | Debian 10, MariaDB Repository |
| Description |
Edit: Just tested 10.11.2; it happens there as well.

When importing large amounts of data in parallel into the same table, the imports suddenly hang. Sometimes this happens after a few seconds, sometimes after a few minutes; at that point sometimes 50 MB of data have been imported, sometimes 1-2 GB. In the processlist it looks like this: the Time column keeps increasing, but nothing happens anymore, even after 50+ hours.
I also can't kill the queries. If I stop the original client process that ran the imports, nothing happens. If I kill the query on the console, its status switches from "Update" to "Killed", but that's it; I have to kill -9 the whole database server. This does not happen with 10.6.11. Attached is the output of SHOW GLOBAL VARIABLES. To have a test case outside of our application, I created the following table in an empty database:
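The actual table definition did not survive this export. As a stand-in, a dummy table of the kind filldb.info generates might look something like the following sketch; every column name here is an assumption, not the reporter's real schema:

```sql
-- Hypothetical stand-in for the reporter's table (the real DDL is not
-- included in this export); filldb.info-style dummy data fits a shape
-- roughly like this.
CREATE TABLE dummy_data (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  email VARCHAR(255) NOT NULL,
  created_at DATETIME NOT NULL,
  KEY idx_email (email)
) ENGINE=InnoDB;
```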
Then I generated four SQL files with random test data using https://filldb.info/dummy and ran a bash script like this:
I then loop over this:
And wait... with 10.6.12 it usually happens after a few minutes. I'm not sure if (and if so, how and where) I should upload the complete test data set; it's multiple GB in size and basically just useless data, so please let me know how I can help.
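The bash script and the loop around it did not survive this export either. The following is a minimal sketch of the kind of parallel-import run described above; all names (testdb, dump1.sql through dump4.sql) are assumptions, not the reporter's actual files. By default (DRY_RUN=1) it only prints the commands it would run; set DRY_RUN=0 to execute them against a throwaway server:

```shell
#!/bin/sh
# Sketch of the reported reproduction: four mysql client processes import
# dump files into the same database concurrently. All file and database
# names below are hypothetical.
DB=testdb
DRY_RUN=${DRY_RUN:-1}

CMDS=""
for f in dump1.sql dump2.sql dump3.sql dump4.sql; do
  CMD="mysql $DB < $f"
  CMDS="${CMDS}${CMD}
"
  if [ "$DRY_RUN" != 1 ]; then
    # each import runs in its own client connection, in the background,
    # so all four hit the same table at the same time
    sh -c "$CMD" &
  fi
done
if [ "$DRY_RUN" != 1 ]; then
  wait  # block until all four parallel imports have finished
fi
printf '%s' "$CMDS"
```

The report's outer loop would simply re-run this script until the hang appears, usually within a few minutes on 10.6.12.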
| Comments |
| Comment by Marko Mäkelä [ 2023-05-22 ] |
Catscrash, is this a duplicate of
| Comment by Sebastian Wittgens [ 2023-05-31 ] |
Thank you for your comment. It seems I cannot reproduce this with 10.6.13 anymore, thank you so much! It might very well be a duplicate of this bug, yes.
| Comment by Marko Mäkelä [ 2023-05-31 ] |
Thank you, Catscrash. Let us assume that this is a duplicate of
Note that due to