Details
Type: Task
Status: Closed
Priority: Critical
Resolution: Fixed
Description
The purpose of this task is to ensure that DROP DATABASE is crash safe.

This means that if the server is killed during DROP DATABASE, the database and the binary log will be consistent once the server has completed engine and DDL recovery. All tables that were already dropped, or whose drop had started, will be fully dropped and binary logged. Before this task, a crash could leave stray .frm files behind, and the dropped tables were not binary logged.

For this particular project, we ensure that every executed DROP TABLE is atomic and is logged to the binary log. This is the same behaviour that we have with multi-table DROP TABLE. Making DROP DATABASE atomic (either everything is dropped or nothing is) will be addressed in a future task.
Description of how the current task should work:
- Collect the list of tables
- DDL log each table as it is dropped
- DDL log the drop database
- Delete db.opt
- Delete the data directory
- Deactivate the DDL log entry

This is in line with how things work now (minus DDL logging), except that we delete the db.opt file last so as not to lose it if DROP DATABASE fails.
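The step order above can be sketched as follows. This is a hypothetical simulation of the described procedure, not MariaDB source code; the `ddl_log` list and `fs` set stand in for the on-disk DDL log and file system.

```python
# Hypothetical sketch (not MariaDB code) of the DROP DATABASE step order:
# log each table drop before performing it, log the database drop, remove
# db.opt and the directory, then deactivate the drop-database log entry.

def drop_database(db, tables, ddl_log, fs):
    """Drop `tables`, then the database, logging each step so that
    recovery could finish the work after a crash."""
    for table in tables:
        ddl_log.append(("drop_table", db, table))  # 1. DDL log the table drop
        fs.discard(f"{db}/{table}.frm")            #    ...then drop the table
    entry = ("drop_db", db)
    ddl_log.append(entry)                          # 2. DDL log the drop database
    fs.discard(f"{db}/db.opt")                     # 3. delete db.opt
    fs.discard(f"{db}/")                           # 4. delete the data directory
    ddl_log.remove(entry)                          # 5. deactivate the DDL log entry

ddl_log = []
fs = {"test/", "test/t1.frm", "test/t2.frm", "test/db.opt"}
drop_database("test", ["t1", "t2"], ddl_log, fs)
```

If the process dies between steps 2 and 5, the `("drop_db", db)` entry is still active in the DDL log, which is exactly the situation the recovery procedure below handles.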
On recovery we have to ensure that all dropped tables are logged in the binary log and that they are properly dropped (as with atomic DROP TABLE). No new tables will be dropped as part of recovery.
Recovery of an active drop database DDL log entry:
- Update the binary log with the dropped tables. If the table list is longer than max_allowed_packet, the query will be split into multiple DROP queries.
- If the drop database was DDL logged but not yet written to the binary log:
  - drop the db.opt file and the database directory
  - log the DROP DATABASE into the binary log
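The max_allowed_packet split mentioned above could look roughly like this. The function name and the exact statement text are illustrative assumptions, not MariaDB's actual implementation; the point is only that each generated statement stays within the packet limit.

```python
# Hypothetical sketch of splitting the recovery-time DROP TABLE statement
# when the list of dropped tables exceeds max_allowed_packet.

def build_drop_queries(db, tables, max_allowed_packet):
    """Group dropped tables into DROP TABLE statements whose text
    stays within max_allowed_packet bytes."""
    prefix = "DROP TABLE IF EXISTS "
    queries, batch = [], []
    length = len(prefix)
    for table in tables:
        name = f"`{db}`.`{table}`"
        extra = len(name) + (1 if batch else 0)  # +1 for the separating comma
        if batch and length + extra > max_allowed_packet:
            queries.append(prefix + ",".join(batch))  # flush the full batch
            batch, length = [], len(prefix)
            extra = len(name)
        batch.append(name)
        length += extra
    if batch:
        queries.append(prefix + ",".join(batch))
    return queries

# 100 dropped tables, with an artificially small packet limit for the demo:
qs = build_drop_queries("db1", [f"t{i}" for i in range(100)], 200)
```

Each resulting statement is independently replicatable, so a slave applying the binary log after crash recovery drops the same set of tables regardless of how the list was split.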
Issue Links
- is part of
  - MDEV-17567 Atomic DDL (Closed)
- relates to
  - MDEV-25506 Atomic DDL: .frm file is removed and orphan InnoDB tablespace is left behind upon crash recovery (Closed)
  - MDEV-25691 Simplify handlerton::drop_database for InnoDB (Closed)
  - MDEV-25920 Atomic DROP DATABASE (Open)
Activity
Field | Original Value | New Value
---|---|---
Link | This issue is part of |
Priority | Major [ 3 ] | Critical [ 2 ]
Status | Open [ 1 ] | In Progress [ 3 ]
Description | (earlier revision) | (revised)
Description | (revised) | (revised again)
Summary | Atomic DROP DATABASE | Crash-safe DROP DATABASE
Description | (revised) | (current description above)
Link | This issue relates to |
Link | This issue relates to |
Fix Version/s | 10.6.1 [ 24437 ] |
Fix Version/s | 10.6 [ 24028 ] |
Resolution | Fixed [ 1 ] |
Status | In Progress [ 3 ] | Closed [ 6 ]
Link | This issue relates to MDEV-25920 [ MDEV-25920 ] |
Workflow | MariaDB v3 [ 116878 ] | MariaDB v4 [ 134371 ]
I feel that handlerton::drop_database is largely redundant after MDEV-25506 and other changes. It would be invoked as a last clean-up step to remove any garbage that could have been left behind in InnoDB.

However, in a future implementation of atomic or transactional DROP DATABASE, the simplified InnoDB implementation of handlerton::drop_database could come in handy. The operation probably should be two-phase: first check with each storage engine that the operation would succeed, and then commit to performing it.
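The two-phase idea in the comment above could be sketched like this. The `Engine` class and its method names are purely illustrative assumptions, not MariaDB's handlerton API; the sketch only shows the check-then-commit shape.

```python
# Hypothetical sketch of a two-phase drop_database across storage engines:
# phase 1 asks every engine whether the drop can succeed (no side effects);
# phase 2 runs only if all engines agreed, so either every engine drops
# the database or none of them changes anything.

class Engine:
    def __init__(self, name, can_drop=True):
        self.name = name
        self.can_drop = can_drop
        self.dropped = []

    def check_drop_database(self, db):
        """Phase 1: report whether the drop would succeed; no side effects."""
        return self.can_drop

    def drop_database(self, db):
        """Phase 2: actually perform the drop."""
        self.dropped.append(db)

def two_phase_drop(db, engines):
    if not all(e.check_drop_database(db) for e in engines):
        return False              # some engine vetoed; nothing was changed
    for e in engines:
        e.drop_database(db)       # commit point: every engine drops
    return True

engines = [Engine("InnoDB"), Engine("Aria")]
ok = two_phase_drop("test", engines)
```

The useful property is that a veto in phase 1 leaves all engines untouched, which is what would make DROP DATABASE atomic rather than merely crash safe.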