Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Incomplete
Affects Version: 10.2.11
Environment: macOS High Sierra 10.13.4, MacBook Pro (13-inch, 2017), 16GB RAM, JDK8
Description
Hi,
I was instructed to file an issue here by the owner/maintainer of MariaDB4j. I integrated the latest version (2.3.0) of that library into a unit test (a minimal sketch of the setup appears after the log output below). The library spawns a mysqld process which sporadically "hangs", causing my unit test to hang as well. The only logs I could find are as follows:
11:27:48.380 [Exec Stream Pumper] ERROR ch.vorburger.exec.ManagedProcess - mysqld: *** set a breakpoint in malloc_error_break to debug
11:27:48.380 [Exec Stream Pumper] ERROR ch.vorburger.exec.ManagedProcess - mysqld: 180726 11:27:48 [ERROR] mysqld got signal 6 ;
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: This could be because you hit a bug. It is also possible that this binary
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: or one of the libraries it was linked against is corrupt, improperly built,
11:27:48.380 [Exec Stream Pumper] ERROR ch.vorburger.exec.ManagedProcess - mysqld: or misconfigured. This error can also be caused by malfunctioning hardware.
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld:
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: To report this bug, see https://mariadb.com/kb/en/reporting-bugs
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld:
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: We will try our best to scrape up some info that will hopefully help
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: diagnose the problem, but since we have already crashed,
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: something is definitely wrong and this may fail.
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld:
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: Server version: 10.2.11-MariaDB
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: key_buffer_size=134217728
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: read_buffer_size=131072
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: max_used_connections=3
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: max_threads=153
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: thread_count=9
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: It is possible that mysqld could use up to
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467247 K bytes of memory
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: Hope that's ok; if not, decrease some variables in the equation.
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld:
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: Thread pointer: 0x7fee7e02f808
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: Attempting backtrace. You can use the following information to find out
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: where mysqld died. If you see no messages after this, something went
11:27:48.380 [Exec Stream Pumper] INFO ch.vorburger.exec.ManagedProcess - mysqld: terribly wrong...
There is no other output available and I could not locate any log files which might provide further insight. Here is the original issue I filed against MariaDB4j.
If there are any other tips to help me gather more debugging information, please let me know.
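For reference, the unit test starts the embedded server roughly as below. This is a minimal sketch based on MariaDB4j's documented API (DBConfigurationBuilder, DB.newEmbeddedDB); the exact configuration in my project differs slightly.

import ch.vorburger.mariadb4j.DB;
import ch.vorburger.mariadb4j.DBConfigurationBuilder;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class EmbeddedMariaDbTest {

    private DB db;

    @Before
    public void startDb() throws Exception {
        // Port 0 lets MariaDB4j pick a free port for the spawned mysqld.
        DBConfigurationBuilder config = DBConfigurationBuilder.newBuilder();
        config.setPort(0);

        // start() launches the bundled mysqld binary as a child process;
        // its output is what appears in the ManagedProcess log lines above.
        db = DB.newEmbeddedDB(config.build());
        db.start();
        db.createDB("testdb"); // schema used by the tests
    }

    @Test
    public void someTest() {
        // JDBC work against the embedded server goes here.
    }

    @After
    public void stopDb() throws Exception {
        if (db != null) {
            db.stop(); // shuts the mysqld child process down
        }
    }
}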
Is it using a lot of CPU (endless-looping) or not (just sleeping waiting for something)?
Please enable core dumps, and the next time it receives SIGABRT, keep the core dump and collect all threads' stack traces by running gdb --batch --eval-command="thread apply all bt full" <path to the binary> <path to the coredump>.
If you can catch the moment when it is already hanging but has not yet aborted, run the same command on the hanging server (only instead of <path to the coredump> you pass the server's PID). Run it several times at short intervals to see whether it is making any progress or is completely stalled.
Better still, if you can build a debug version of the server, do the above on it.