Details
Type: Task
Status: Closed
Priority: Major
Resolution: Fixed
Description
Running "SET GLOBAL read_only=1" on a production server can be dangerous because it can block in close_cached_tables. More details about the pain this caused previously are at:
http://mysqlha.blogspot.com/2008/07/what-exactly-does-flush-tables-with.html
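The blocking scenario can be sketched with two client sessions (a hypothetical illustration; the table names are made up):

```sql
-- Session 1: a long-running query keeps its tables open.
SELECT COUNT(*) FROM big_table;    -- runs for minutes

-- Session 2: an admin flips the server to read-only.
SET GLOBAL read_only = 1;          -- blocks in close_cached_tables()
                                   -- until session 1 finishes

-- Session 3: meanwhile, new statements that need to open a table
-- queue up behind the pending flush, so the server appears stalled.
SELECT * FROM small_table LIMIT 1;
```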
Per the code in set_var.cc:
/*
  Perform a 'FLUSH TABLES WITH READ LOCK'.

  This is a 3 step process:
  - [1] lock_global_read_lock()
  - [2] close_cached_tables()
  - [3] make_global_read_lock_block_commit()

  [1] prevents new connections from obtaining tables locked for write.
  [2] waits until all existing connections close their tables.
  [3] prevents transactions from being committed.
*/
Can there be a variant that doesn't do step [2]? My workload doesn't use MyISAM, and I don't know whether [2] is done because of MyISAM. Calling close_cached_tables seems like a heavy way to force LOCK TABLES locks to be released, and any long-running query will cause [2] to block.
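The "heavy way to force LOCK TABLES to be unlocked" point can be illustrated with a sketch (hypothetical table name):

```sql
-- Session 1: holds an explicit write lock.
LOCK TABLES t1 WRITE;

-- Session 2: step [2] of the read_only / FLUSH TABLES WITH READ LOCK
-- sequence waits here until session 1 runs UNLOCK TABLES,
-- even though no flush of t1 was needed for session 2's purposes.
FLUSH TABLES WITH READ LOCK;
```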
Issue Links
- relates to MDEV-309 reimplement FLUSH ... CHECKPOINT (status: Open)
By Monty:
The reason for [2] is to ensure that all table info is written to disk, so that if you take a snapshot or copy the tables, you get them in a consistent state. This is mostly for MyISAM and other non-transactional tables, but it also speeds things up for InnoDB tables and allows you to copy XtraDB tables from one server to another (if you are using table spaces) without having to take down the server.
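The consistent-copy workflow Monty describes typically looks like this (a sketch; the copy command and paths are illustrative, not prescribed by this issue):

```sql
-- Quiesce writes and flush table files to disk: steps [1]-[3] above.
FLUSH TABLES WITH READ LOCK;

-- While the lock is held, copy the data files from a shell, e.g.:
--   rsync -a /var/lib/mysql/ /backup/mysql/
-- or take a filesystem/LVM snapshot.

-- Resume normal operation.
UNLOCK TABLES;
```

It is step [2] (close_cached_tables) that makes the on-disk files safe to copy for non-transactional engines; without it, MyISAM files could be copied mid-write.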