[MDEV-27765] MariaDB stopped to work randomly - misery started at "Unable to find a record to delete-mark" Created: 2022-02-07  Updated: 2024-01-16  Resolved: 2024-01-16

Status: Closed
Project: MariaDB Server
Component/s: None
Affects Version/s: 10.5.13, 10.7.1
Fix Version/s: N/A

Type: Bug Priority: Major
Reporter: Tristan Kundrat Assignee: Marko Mäkelä
Resolution: Incomplete Votes: 0
Labels: crash
Environment:

Fedora 34, 64bit


Issue Links:
Duplicate
is duplicated by MDEV-27949 [crash] Unable to find a record to de... Closed
Relates
relates to MDEV-26917 InnoDB: Clustered record for sec rec ... Closed
relates to MDEV-26977 mariadb 10.5.12 reboot loop in AWS | ... Closed
relates to MDEV-27734 Set innodb_change_buffering=none by d... Closed

 Description   

I was using my Nextcloud instance as normal, until I suddenly got an HTTP 500 status in my browser. I went to investigate, as I had changed nothing about my setup.
When I finally got to the mariadb.log file, the first error looked like this:

2022-02-07 17:24:05 0 [ERROR] InnoDB: Unable to find a record to delete-mark
InnoDB: tuple DATA TUPLE: 2 fields;
 0: len 8; hex 80000000005d205c; asc      ] \;;
 1: len 8; hex 8000000000023902; asc       9 ;;
 
InnoDB: record PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 0: len 8; hex 80000000005d205c; asc      ] \;;
 1: len 8; hex 8000000000020100; asc         ;;
2022-02-07 17:24:06 0 [ERROR] InnoDB: page [page id: space=12, page number=492] (379 records, index id 913).
2022-02-07 17:24:06 0 [ERROR] InnoDB: Submit a detailed bug report to https://jira.mariadb.org/
2022-02-07 17:24:31 0 [ERROR] InnoDB: Corruption of an index tree: table `nextcloud`.`oc_filecache` index `fs_size`, father ptr page no 2211, child page no 492
PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 0: len 8; hex 800000000000615e; asc       a^;;
 1: len 8; hex 338000000000014e; asc 3      N;;
2022-02-07 17:24:31 0 [Note] InnoDB: n_owned: 0; heap_no: 380; next rec: 125
PHYSICAL RECORD: n_fields 3; compact format; info bits 0
 0: len 8; hex 8000000000004a97; asc       J ;;
 1: len 8; hex 800000000000fac4; asc         ;;
 2: len 4; hex 000008a3; asc     ;;
2022-02-07 17:24:31 0 [Note] InnoDB: n_owned: 0; heap_no: 66; next rec: 1750
2022-02-07 17:24:31 0 [ERROR] [FATAL] InnoDB: You should dump + drop + reimport the table to fix the corruption. If the crash happens at database startup. Please refer to https://mariadb.com/kb/en/library/innodb-recovery-modes/ for information about forcing recovery. Then dump + drop + reimport.
220207 17:24:31 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
 
Server version: 10.5.13-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=52
max_threads=153
thread_count=34
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467872 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x559eec6fc5c8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7fa606813b38 thread_stack 0x49000
/usr/libexec/mariadbd(my_print_stacktrace+0x3f)[0x559eea7ba7bf]
/usr/libexec/mariadbd(handle_fatal_signal+0x4d8)[0x559eea313638]
??:0(__restore_rt)[0x7fa62d2b0a20]
:0(__GI_raise)[0x7fa62cd9f2a2]
:0(__GI_abort)[0x7fa62cd888a4]
??:0(wsrep_write_dummy_event_low(THD*, char const*))[0x559ee9f29842]
??:0(wsrep_write_dummy_event_low(THD*, char const*))[0x559ee9f4da0d]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea6bd69a]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea6d4752]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea6e1019]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea67d032]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea682b96]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea683764]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea64907a]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x559eea69c092]
??:0(tpool::task_group::execute(tpool::task*))[0x559eea74fc61]
??:0(tpool::thread_pool_generic::worker_main(tpool::worker_data*))[0x559eea74fe8e]
??:0(std::error_code::default_error_condition() const)[0x7fa62d150c84]
??:0(start_thread)[0x7fa62d2a6299]
:0(__GI___clone)[0x7fa62ce62353]
 
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x0): (null)
Connection ID (thread ID): 0
Status: NOT_KILLED
 
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off
 
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
 
We think the query pointer is invalid, but we will try to print it anyway. 
Query: 
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             39677                39677                processes 
Max open files            32186                32186                files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       39677                39677                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h

There are many other errors after that; however, they are all the same (endless attempts to start MariaDB):

2022-02-07 17:26:31 0 [Note] InnoDB: Uses event mutexes
2022-02-07 17:26:31 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2022-02-07 17:26:31 0 [Note] InnoDB: Number of pools: 1
2022-02-07 17:26:31 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2022-02-07 17:26:31 0 [Note] InnoDB: Using Linux native AIO
2022-02-07 17:26:31 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
2022-02-07 17:26:31 0 [Note] InnoDB: Completed initialization of buffer pool
2022-02-07 17:26:40 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=28824091881,28824091881
2022-02-07 17:26:46 0 [Note] InnoDB: Read redo log up to LSN=28824222720
2022-02-07 17:27:02 0 [Note] InnoDB: Read redo log up to LSN=28829662208
2022-02-07 17:27:17 0 [Note] InnoDB: To recover: 620 pages from log
2022-02-07 17:27:18 0 [Note] InnoDB: 3 transaction(s) which must be rolled back or cleaned up in total 3 row operations to undo
2022-02-07 17:27:18 0 [Note] InnoDB: Trx id counter is 44702066
2022-02-07 17:27:18 0 [Note] InnoDB: Starting final batch to recover 618 pages from redo log.
220207 17:27:18 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
Server version: 10.5.13-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=0
max_threads=153
thread_count=0
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467872 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
??:0(my_print_stacktrace)[0x55d9172db7bf]
??:0(handle_fatal_signal)[0x55d916e34638]
??:0(__restore_rt)[0x7fc2acc22a20]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x55d917156cdb]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x55d917162842]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x55d91720563a]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x55d9171474c8]
??:0(wsrep_write_dummy_event_low(THD*, char const*))[0x55d916a50e28]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x55d91723f0cd]
??:0(Wsrep_server_service::log_state_change(wsrep::server_state::state, wsrep::server_state::state))[0x55d9171485b2]
??:0(tpool::task_group::execute(tpool::task*))[0x55d917270c61]
??:0(tpool::thread_pool_generic::worker_main(tpool::worker_data*))[0x55d917270e8e]
??:0(std::error_code::default_error_condition() const)[0x7fc2acac2c84]
??:0(start_thread)[0x7fc2acc18299]
:0(__GI___clone)[0x7fc2ac7d4353]
The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mysqld/ contains
information that should help you find out what is causing the crash.
Writing a core file...
Working directory at /var/lib/mysql
Resource Limits:
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        unlimited            unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             39677                39677                processes 
Max open files            32186                32186                files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       39677                39677                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us        
Core pattern: |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h

I already tried upgrading MariaDB, but version 10.7 doesn't work either, so I reverted to 10.5.

If any more or different files, logs, etc. are needed, just tell me.



 Comments   
Comment by Marko Mäkelä [ 2022-02-08 ]

Does this occur when you run the server with innodb_change_buffering=none (which we plan to set by default, in MDEV-27734)?

Note: You will have to rebuild the affected secondary indexes, by executing DROP INDEX and CREATE INDEX (or ALTER TABLE…ADD INDEX). Disabling the change buffering will only prevent further corruption from being introduced by it, but fix already caused corruption.

Comment by Tristan Kundrat [ 2022-02-08 ]

Thanks for your comment.
I tried adding --innodb-change-buffering=none to my systemd service file, as follows:

ExecStart=/usr/libexec/mariadbd --innodb-change-buffering=none --basedir=/usr $MYSQLD_OPTS $_WSREP_NEW_CLUSTER

But it still gives the same error in systemd:

mariadb.service: Main process exited, code=killed, status=11/SEGV
mariadb.service: Failed with result 'signal'.
Failed to start MariaDB 10.5 database server.

The mariadb.log error is still the same. It still tries to roll back 3 transactions as before; isn't that what you wanted me to disable?
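For reference, the same option can also be set in a server configuration file rather than in the unit file; a minimal sketch, assuming the stock Fedora layout (the drop-in file name here is made up):

```ini
# /etc/my.cnf.d/99-no-change-buffer.cnf  (file name is an assumption;
# any *.cnf file in /etc/my.cnf.d/ is read on Fedora)
[mariadb]
innodb_change_buffering = none
```

Unlike direct edits to the packaged systemd unit file, a config drop-in normally survives package updates.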

Comment by Marko Mäkelä [ 2022-02-08 ]

Sorry, I forgot a "not" in my previous comment. Secondary indexes that are already corrupted will not be automatically fixed by disabling the change buffering.

Comment by Tristan Kundrat [ 2022-02-08 ]

But how am I supposed to fix the db when I can't access it?

Comment by Tristan Kundrat [ 2022-02-08 ]

Fixed it!
As the first error message stated, it was a problem with nextcloud.oc_filecache. Since that table can easily be rebuilt, I chose to delete the files directly, as only recovery mode 6 would run (read-only). Using recovery mode 6 I could then mysqldump all contents and delete the whole mysql folder (after making a backup). I created a new database and imported the dump via mysql < dump.sql. After that, my Nextcloud and Gitea (also running on MariaDB) seem to be working again.

Comment by Marko Mäkelä [ 2022-11-10 ]

TriKun, good for you. I think that innodb_force_recovery=3 would have been a safer way to achieve the same result. innodb_force_recovery=6 (or deleting ib_logfile0, before MDEV-27199 disallowed it) is very dangerous, because it can make the data pages really inconsistent with each other.

A minimal work-around could have been

ALTER TABLE nextcloud.oc_filecache DROP INDEX …, DROP INDEX …;
ALTER TABLE nextcloud.oc_filecache ADD INDEX …;

and a slightly more "overkill" fix would have been

OPTIMIZE TABLE nextcloud.oc_filecache;

(rebuilding the entire table, not just the secondary indexes).

MDEV-27949 seems to be a duplicate of this.

Comment by Marko Mäkelä [ 2022-11-10 ]

I realize that the crash occurred on the rollback of recovered incomplete transactions, which was prompting a change buffer merge. innodb_force_recovery=3 prevents the rollback, but the locks held on the table should prevent DROP INDEX or ALTER TABLE from running. You could have taken an SQL dump of the table (with SELECT), shut down the server, deleted the file oc_filecache.ibd, restarted the server, run DROP TABLE nextcloud.oc_filecache;, and finally restored the SQL dump.
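That procedure could be sketched roughly as follows. The paths assume a stock Fedora install with the data directory at /var/lib/mysql, and innodb_force_recovery=3 already configured for the dump step; treat this as an outline under those assumptions, not a tested recipe:

```sh
# Outline of the dump / delete .ibd / DROP TABLE / restore procedure.
# Assumes innodb_force_recovery=3 is in effect and a full backup exists.
mysqldump --skip-lock-tables nextcloud oc_filecache > oc_filecache.sql
systemctl stop mariadb
rm /var/lib/mysql/nextcloud/oc_filecache.ibd   # remove only the tablespace file
systemctl start mariadb                        # after removing innodb_force_recovery
mysql -e 'DROP TABLE nextcloud.oc_filecache'   # drops the now-orphaned table
mysql nextcloud < oc_filecache.sql             # restore the dumped contents
```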

I believe that this error is the first step towards the crash of MDEV-26917.

Comment by Marko Mäkelä [ 2022-11-23 ]

While analyzing failures from a stress test of the fix MDEV-30009, I may have found a possible explanation of this. The scenario is as follows.

  1. Some changes were buffered to a secondary index leaf page that was not located in the buffer pool.
  2. The page was freed (possibly as part of DROP INDEX).
  3. During ibuf_read_merge_pages(), we reset the bitmap bits but do not remove the change buffer records.
  4. The same page is allocated and reused for something else.
  5. The page is evicted from the buffer pool.
  6. Something is added to the change buffer for the page.
  7. On a change buffer merge, we will apply both old (bogus) and new entries to the page.

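The scenario above can be illustrated with a toy model (plain Python, emphatically not InnoDB code; all names are made up). The key point it demonstrates is that if the change buffer is keyed only by page number, entries buffered for a freed page survive the free and get applied when that page number is reused:

```python
# Toy model of the stale change-buffer scenario. The change buffer is keyed
# only by page number, so entries buffered for a freed page can later be
# applied to an unrelated page that reuses the same number.

class ToyEngine:
    def __init__(self):
        self.pages = {}          # page_no -> set of records ("on disk")
        self.change_buffer = {}  # page_no -> list of buffered inserts
        self.free_list = []

    def buffer_insert(self, page_no, rec):
        # Steps 1/6: the page is not in the buffer pool, so buffer the change.
        self.change_buffer.setdefault(page_no, []).append(rec)

    def free_page(self, page_no):
        # Steps 2/3: page freed (e.g. DROP INDEX); the bug is that the
        # buffered records for it are NOT discarded here.
        self.pages.pop(page_no, None)
        self.free_list.append(page_no)

    def allocate_page(self):
        # Step 4: the same page number is reused for something else.
        page_no = self.free_list.pop()
        self.pages[page_no] = set()
        return page_no

    def merge(self, page_no):
        # Step 7: a change buffer merge applies both the stale (bogus)
        # and the new entries to the reused page.
        for rec in self.change_buffer.pop(page_no, []):
            self.pages[page_no].add(rec)

engine = ToyEngine()
engine.pages[492] = {"old-index-rec"}
engine.buffer_insert(492, "buffered-for-old-index")   # step 1
engine.free_page(492)                                  # steps 2-3
page = engine.allocate_page()                          # step 4 (same number)
engine.buffer_insert(page, "buffered-for-new-index")   # steps 5-6
engine.merge(page)                                     # step 7
# The reused page now contains a record that belongs to the dropped index:
assert "buffered-for-old-index" in engine.pages[page]
```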
With ROW_FORMAT=COMPRESSED the impact should be more severe, because the estimates of page fullness (a prerequisite for ensuring that all buffered inserts will fit into a page) are more pessimistic there, so that no compression overflow can occur. The bogus garbage entries being merged break this logic.

As far as I can tell, all MySQL and MariaDB versions are affected by this. The code changes that were applied in MDEV-20934 did not fix this, because that code would only be executed on shutdown with innodb_fast_shutdown=0.

The bottom line is: when the change buffer is enabled (which it was by default until MDEV-27734 changed that), InnoDB will start to work randomly. To stop it from working randomly, do not use the change buffer.

Comment by Marko Mäkelä [ 2023-12-14 ]

TriKun, can you still reproduce this with a newer version of MariaDB Server? Note that you should rebuild the affected tables (OPTIMIZE TABLE or similar); otherwise you may see the effects of old corruption.

For ROW_FORMAT=COMPRESSED tables it is known that ROLLBACK may cause corruption. See MDEV-32174.

Generated at Thu Feb 08 09:55:24 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.