Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version: 10.11.8
- Environment: Debian Bookworm
Description
Our MariaDB servers with many thousands of clients sometimes encounter slow ALTER TABLE queries (up to 25 seconds), with adjacent read-only queries being just as slow; all read-only queries apparently wait for the one ALTER TABLE to finish its commit.
Using two BPF scripts (https://github.com/iovisor/bcc/pull/5108 and https://github.com/iovisor/bcc/pull/5112), I pinpointed this to contention on dict_sys.latch; a common backtrace looks like this:
syscall
|
unlock_and_close_files(std::vector<pfs_os_file_t, std::allocator<pfs_os_file_t> > const&, trx_t*)
|
ha_innobase::commit_inplace_alter_table(TABLE*, Alter_inplace_info*, bool)
|
mysql_inplace_alter_table(THD*, TABLE_LIST*, TABLE*, TABLE*, Alter_inplace_info*, MDL_request*, st_ddl_log_state*, TRIGGER_RENAME_PARAM*, Alter_table_ctx*, bool&, unsigned long long&, bool) [clone .constprop.0]
|
mysql_alter_table(THD*, st_mysql_const_lex_string const*, st_mysql_const_lex_string const*, Table_specification_st*, TABLE_LIST*, Recreate_info*, Alter_info*, unsigned int, st_order*, bool, bool)
|
Sql_cmd_alter_table::execute(THD*)
|
mysql_execute_command(THD*, bool)
|
mysql_parse(THD*, char*, unsigned int, Parser_state*)
|
dispatch_command(enum_server_command, THD*, char*, unsigned int, bool)
|
do_command(THD*, bool)
|
The method ha_innobase::commit_inplace_alter_table indeed does most of its I/O while holding dict_sys.latch exclusively. I believe one should not perform any I/O while holding such a global lock, which is acquired by virtually every query, including read-only ones.
This is a major scalability issue for us: it is easy for unprivileged MariaDB users to lock up the whole daemon (including all databases and all catalogs), so this could be used to DoS the MariaDB server.
Issue Links
- blocks
  - MDEV-34986 Use RAII classes to manage locks (In Review)
- is blocked by
  - MDEV-35154 dict_sys_t::load_table() is holding exclusive dict_sys.latch for unnecessarily long time (Confirmed)
- relates to
  - MDEV-8069 DROP or rebuild of a large table may lock up InnoDB (Closed)
  - MDEV-15641 InnoDB crash while committing table-rebuilding ALTER TABLE (Closed)
  - MDEV-34999 ha_innobase::open() should not acquire dict_sys.latch twice (Open)
  - MDEV-35436 dict_stats_fetch_from_ps() unnecessarily holds exclusive dict_sys.latch (Closed)
It is correct that the last phase of ha_innobase::commit_inplace_alter_table(commit=true) is covered by an exclusive dict_sys.latch.

The function unlock_and_close_files() is part of the MDEV-8069 logic. DDL operations may need to delete some files. Invoking the unlink() system call while holding an exclusive latch would be a bad idea if it led to deleting a huge file. Therefore, we retain an open handle to each such file, and we close these file handles right after we release the dict_sys.latch.

Since you mentioned OPTIMIZE TABLE in relation to this, I have to mention MDEV-15641, from the time when dict_sys.latch was called dict_operation_lock (and was accompanied by dict_sys.mutex). In a table-rebuilding operation such as OPTIMIZE TABLE or ALTER TABLE…FORCE, or anything that is listed red or blue in MDEV-11424, we may have a design problem: log from concurrent DML operations may accumulate while the DDL operation is waiting for those DML operations to finish. Due to this, the final call to row_log_table_apply(), which we invoke while holding exclusive dict_sys.latch, could take an excessive amount of time. It is possible that this was fixed in MDEV-15250 by making each DML operation apply its own changes to the being-rebuilt table during transaction commit.