[MDEV-26495] [ERROR] [FATAL] InnoDB: fdatasync() returned 13 Created: 2021-08-29  Updated: 2023-12-15  Resolved: 2023-12-15

Status: Closed
Project: MariaDB Server
Component/s: Storage Engine - InnoDB
Affects Version/s: 10.5.11
Fix Version/s: N/A

Type: Bug Priority: Major
Reporter: Colin GILLE Assignee: Daniel Black
Resolution: Cannot Reproduce Votes: 0
Labels: None
Environment:

Kubernetes / mariadb official docker image 10.5.11
Data directory mount via NFS



 Description   

I have had multiple MariaDB instances hang across multiple containers. The MariaDB data directory is stored on an NFS share.
When this error arises, the MariaDB server still accepts connections, but any write hangs its connection, quickly rendering the server useless (Too many connections).

I don't know why this error arises; it doesn't seem to be related to MariaDB itself, since the failure is in a system call, but the error could be handled better, as the MariaDB server currently goes unresponsive.
Would it be possible to crash the process instead of hanging?

That would allow Kubernetes (or any other container / process manager) to restart the process.
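
As a workaround on our side, a liveness probe that exercises the write path could turn the hang into a restart. A minimal sketch, assuming a dedicated healthcheck.ping table, client credentials available to the probe (e.g. via ~/.my.cnf), and the timeout(1) utility in the image — all illustrative, not part of the official image:

#!/bin/sh
# Liveness check: attempt a tiny write with a hard deadline. After the
# fdatasync() failure, writes hang, so timeout(1) kills the client, the
# probe fails, and Kubernetes restarts the container.
exec timeout 5 mysql -e "REPLACE INTO healthcheck.ping (id) VALUES (1);"

Wired up as an exec livenessProbe, with failureThreshold set high enough to tolerate normal NFS latency spikes.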

Here are the errors:

2021-08-27 21:34:38 0 [ERROR] [FATAL] InnoDB: fdatasync() returned 13
210827 21:34:38 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
 
Server version: 10.5.11-MariaDB-1:10.5.11+maria~focal
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=42
max_threads=152
thread_count=29
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 465672 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
mysqld(my_print_stacktrace+0x32)[0x563413f15e22]
mysqld(handle_fatal_signal+0x485)[0x563413968a35]

2021-08-27 21:34:38 135638 [ERROR] [FATAL] InnoDB: fdatasync() returned 13
210827 21:34:38 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
 
Server version: 10.5.11-MariaDB-1:10.5.11+maria~focal
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=119
max_threads=152
thread_count=79
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 465672 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x7f10680008e8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f1110dabd58 thread_stack 0x49000
mysqld(my_print_stacktrace+0x32)[0x5625498fde22]
Printing to addr2line failed
mysqld(handle_fatal_signal+0x485)[0x562549350a35]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7f1332b353c0]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7f133263b18b]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7f133261a859]

2021-08-27 21:34:38 0 [ERROR] [FATAL] InnoDB: fdatasync() returned 13
210827 21:34:38 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
 
Server version: 10.5.11-MariaDB-1:10.5.11+maria~focal
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=42
max_threads=152
thread_count=29
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 465672 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x49000
mysqld(my_print_stacktrace+0x32)[0x563413f15e22]
mysqld(handle_fatal_signal+0x485)[0x563413968a35]



 Comments   
Comment by Daniel Black [ 2021-08-30 ]

What is error 13?

$ perror 13
OS error code  13:  Permission denied

It's not listed in the return codes of https://man7.org/linux/man-pages/man2/fdatasync.2.html

Normally permission errors are reported when a file is opened, so something odd is going on here. I looked through the NFS kernel code, and whatever is causing the EACCES is either rather deep or simply relayed from the server side.
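
To confirm it really is fdatasync() itself returning the error, and on which file descriptor, you could attach strace to the running server. A sketch, assuming the process is named mariadbd in this image:

$ strace -f -e trace=fdatasync -p "$(pidof mariadbd)"

A failing call shows up as fdatasync(<fd>) = -1 EACCES (Permission denied), and mapping the fd through /proc/<pid>/fd tells you which file is affected.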

You may be able to tune your way out of this. Pay attention to your NFS mount options, server-side options, and NFS statistics.
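
For example, on the client side (illustrative values only; the right numbers depend on your server and network):

$ mount -t nfs -o hard,vers=4.1,timeo=600,retrans=2,noatime nfs-server:/export/mysql /var/lib/mysql

Here "hard" retries indefinitely instead of surfacing I/O errors to the application, and timeo/retrans control the RPC timeout (in deciseconds) and retry count. nfsstat -c and nfsiostat on the client will show retransmissions and per-operation latency.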

Using something like nfsdist from bcc-tools (available as a package; source: https://github.com/iovisor/bcc/blob/master/tools/nfsdist.py) will help you get a better idea of the latency.

It's missing a trace on the commit operation, but you can add one by editing the script:

patch to nfsdist

$ diff -u nfsdist.o nfsdist
--- nfsdist.o	2021-08-30 10:49:23.271133964 +1000
+++ nfsdist	2021-08-30 10:55:48.408762256 +1000
@@ -120,6 +120,12 @@
     char *op = "getattr";
     return trace_return(ctx, op);
 }
+
+int trace_commit_return(struct pt_regs *ctx)
+{
+    char *op = "commit";
+    return trace_return(ctx, op);
+}
 """
 bpf_text = bpf_text.replace('FACTOR', str(factor))
 if args.pid:
@@ -139,11 +145,13 @@
 b.attach_kprobe(event="nfs_file_write", fn_name="trace_entry")
 b.attach_kprobe(event="nfs_file_open", fn_name="trace_entry")
 b.attach_kprobe(event="nfs_getattr", fn_name="trace_entry")
+b.attach_kprobe(event="nfs_commit_inode", fn_name="trace_entry")
 
 b.attach_kretprobe(event="nfs_file_read", fn_name="trace_read_return")
 b.attach_kretprobe(event="nfs_file_write", fn_name="trace_write_return")
 b.attach_kretprobe(event="nfs_file_open", fn_name="trace_open_return")
 b.attach_kretprobe(event="nfs_getattr", fn_name="trace_getattr_return")
+b.attach_kretprobe(event="nfs_commit_inode", fn_name="trace_commit_return")
 
 if BPF.get_kprobe_functions(b'nfs4_file_open'):
     b.attach_kprobe(event="nfs4_file_open", fn_name="trace_entry")
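
With the patch applied, something like this prints per-operation latency histograms, now including a commit bucket (-m for milliseconds, -p to filter by PID, per the script's usage; assuming the server process is named mariadbd):

$ ./nfsdist -m -p "$(pidof mariadbd)" 5

Unusually high or spiky commit latency here would point toward the server side of the NFS mount.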

You can also use bpftrace (https://github.com/iovisor/bpftrace/blob/master/docs/reference_guide.md#2-kprobekretprobe-dynamic-tracing-kernel-level-arguments) to find out which function is causing it.

$ git grep 'EXPORT_SYMBOL_GPL(nfs_.*commi'
write.c:EXPORT_SYMBOL_GPL(nfs_commitdata_alloc);
write.c:EXPORT_SYMBOL_GPL(nfs_commit_free);
write.c:EXPORT_SYMBOL_GPL(nfs_request_add_commit_list_locked);
write.c:EXPORT_SYMBOL_GPL(nfs_request_add_commit_list);
write.c:EXPORT_SYMBOL_GPL(nfs_request_remove_commit_list);
write.c:EXPORT_SYMBOL_GPL(nfs_scan_commit_list);
write.c:EXPORT_SYMBOL_GPL(nfs_commitdata_release);
write.c:EXPORT_SYMBOL_GPL(nfs_initiate_commit);
write.c:EXPORT_SYMBOL_GPL(nfs_init_commit);
write.c:EXPORT_SYMBOL_GPL(nfs_retry_commit);
write.c:EXPORT_SYMBOL_GPL(nfs_commit_inode);
 
Trace any/all of the above names with (note that in-kernel functions return negative errno values, hence -13):
# bpftrace -e 'kretprobe:nfs_commit_inode { if (retval == -13) { @[kstack] = count(); } }'
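
A slightly broader sketch along the same lines, assuming those symbols are probeable on your kernel (bpftrace accepts comma-separated probe lists and /.../ filter predicates):

# bpftrace -e 'kretprobe:nfs_commit_inode,kretprobe:nfs_initiate_commit /retval == -13/ { @[probe, kstack] = count(); }'

The probe builtin in the map key records which of the two functions returned the error.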

I think the above all trace the NFS client; the error could also be coming directly from the server.

The log above looks like the server crashing on an assertion once it does get an error response. For an auto-kill option, I suspect something outside of MariaDB might be easier, especially for syscalls like fdatasync that don't have a timeout value. And if you restart MariaDB on storage that times out, I'm not sure you've solved the problem either.
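
A crude sketch of that external idea (a hypothetical sidecar script, not something MariaDB ships; the deadlines and the healthcheck.ping table are illustrative):

#!/bin/sh
# Periodically attempt a small write with a hard deadline; if it hangs,
# kill the server so the container/process manager restarts it.
while true; do
    timeout 10 mysql -e "REPLACE INTO healthcheck.ping (id) VALUES (1);" \
        || kill -9 "$(pidof mariadbd)"
    sleep 30
done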

I hope you can resolve this by tuning NFS in some way. I'd like to see what you discover and how, so it can help others too.

Comment by Marko Mäkelä [ 2023-12-14 ]

Congelli501, do you have any update on this?

Comment by Colin GILLE [ 2023-12-15 ]

I haven't seen this problem in the last two years with updated versions of MariaDB, so I think we can close the issue.

Comment by Daniel Black [ 2023-12-15 ]

Thanks Congelli501. Glad it hasn't reoccurred, whatever the cause was.
