[MDEV-18613] Optimization for dropping table Created: 2019-02-18 Updated: 2023-11-13 Resolved: 2021-10-26 |
|
| Status: | Closed |
| Project: | MariaDB Server |
| Component/s: | Data Definition - Alter Table, Storage Engine - InnoDB |
| Affects Version/s: | None |
| Fix Version/s: | 10.5.4 |
| Type: | Bug | Priority: | Major |
| Reporter: | musazhang | Assignee: | Marko Mäkelä |
| Resolution: | Duplicate | Votes: | 2 |
| Labels: | need_feedback |
| Issue Links: |
|
| Description |
|
When innodb_file_per_table=ON, each table is stored in its own .ibd file, and the user thread has to unlink the corresponding .ibd file when DROP TABLE is executed. As a result, the operation can take a long time when the .ibd file is large, and it can stall the whole system. For details, please refer to: https://github.com/MariaDB/server/pull/1021 |
| Comments |
| Comment by Marko Mäkelä [ 2020-05-20 ] |
|
As far as I can tell, this basically is a work-around for an operating system deficiency that blocks any concurrent usage of the file system while a large file is being deleted. To my knowledge, it is most needed on Linux, and not at all needed on Microsoft Windows.
Technically, if we implement a background task that piecewise shrinks a large file in order to work around the file system starvation bug, it would be preferable to do that on 10.5 or later, using the
| Comment by Marko Mäkelä [ 2020-06-11 ] |
|
Now that
| Comment by Manjot Singh (Inactive) [ 2020-06-12 ] |
|
How is this issue different from
| Comment by Manjot Singh (Inactive) [ 2020-07-21 ] |
|
marko, in your comment in

Was this fixed in 8069?
| Comment by Marko Mäkelä [ 2020-07-27 ] |
|
manjot, in

I do not know whether any currently popular Linux file systems suffer from the problem that deleting a file (which in our case would occur at the time of the close() invocation) would prevent any concurrent operation on the file system. There are some hints that this was a problem with the ext3 file system, but not with ext4. I think that we will find it out when someone complains. I would expect the worst case to involve the deletion of large fragmented files. It might ‘help’ to fragment the files by enabling page_compressed when creating the tables. If some file system turns out to suffer from that problem, we could try to work around it by repeatedly invoking ftruncate() to shrink the file before closing the file handle.
| Comment by Marko Mäkelä [ 2021-07-01 ] |
|
In

Should some file system really require a work-around to make the delete-on-close perform faster (without stalling other threads or processes that are competing for kernel resources), we could implement something that performs a piecewise ftruncate() of the file before finally closing the handle. The following just illustrates the idea; there are multiple occurrences of such code in the 10.6 server:
| Comment by Marko Mäkelä [ 2021-10-26 ] |
|
I believe that this has been fixed by