[MDEV-7986] [ERROR] mysqld got signal 11 ; Created: 2015-02-20  Updated: 2015-05-18  Resolved: 2015-05-18

Status: Closed
Project: MariaDB Server
Component/s: OTHER
Affects Version/s: 5.5.41
Fix Version/s: N/A

Type: Bug Priority: Critical
Reporter: Cristian Nicoara Assignee: Unassigned
Resolution: Incomplete Votes: 0
Labels: None
Environment:

debian 7.8
MariaDB version: 5.5.41-MariaDB-1



 Description   

Hello,

I do not know whether this is a bug or not; I am basically asking for your feedback on the following behavior:

 mysqld: 150219 12:42:01 [ERROR] mysqld got signal 11 ;
 mysqld: This could be because you hit a bug. It is also possible that this binary
 mysqld: or one of the libraries it was linked against is corrupt, improperly built,
 mysqld: or misconfigured. This error can also be caused by malfunctioning hardware.
 mysqld:
 mysqld: To report this bug, see http://kb.askmonty.org/en/reporting-bugs
 mysqld:
 mysqld: We will try our best to scrape up some info that will hopefully help
 mysqld: diagnose the problem, but since we have already crashed,
 mysqld: something is definitely wrong and this may fail.
 mysqld:
 mysqld: Server version: 5.5.41-MariaDB-1~wheezy-log
 mysqld: key_buffer_size=16777216
 mysqld: read_buffer_size=131072
 mysqld: max_used_connections=168
 mysqld: max_threads=1002
 mysqld: thread_count=157
 mysqld: It is possible that mysqld could use up to
 mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2214597 K  bytes of memory
 mysqld: Hope that's ok; if not, decrease some variables in the equation.
 mysqld:
 mysqld: Thread pointer: 0x0x7f8d59817000
 mysqld: Attempting backtrace. You can use the following information to find out
 mysqld: where mysqld died. If you see no messages after this, something went
 mysqld: terribly wrong...
 mysqld: stack_bottom = 0x7f8d516fae50 thread_stack 0x40000
 kernel: [772494.249397] mysqld[29838]: segfault at 7f8f39024c28 ip 00007f8f3651e2c1 sp 00007f8d516f7fb0 error 7 in libc-2.13.so[7f8f364a8000+182000]
 mysqld_safe: Number of processes running now: 0

-----------------------------------------

This has happened a few times so far. Some additional details about our setup:
– TokuDB is enabled
– the audit plugin is installed, but was not active when the crash happened
– master-slave configuration; the master was the affected server
– we had some synchronization problems between the master and the slave

Have you seen any similar occurrences?

Thank you



 Comments   
Comment by Elena Stepanova [ 2015-04-15 ]

Hi,

It is most certainly a bug; unfortunately, the absence of the stack trace makes it impossible to match it to other known or recently fixed issues.

Could you please check whether a core dump is created upon the crash (it should probably be in your datadir)? If not, add core-file to the server options and make sure that ulimit -c is set to unlimited. Most likely, the core dump won't be very helpful on a release installation, but it might still be better than nothing.
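A minimal sketch of the setup suggested above (the /etc/mysql/my.cnf path and the init-script location are stock Debian defaults and may differ on your system):

```shell
# 1. Ask mysqld to write a core file on crash: add the option to the
#    [mysqld] section of /etc/mysql/my.cnf (assumed Debian default path):
#
#      [mysqld]
#      core-file
#
# 2. Remove the core-size limit in the environment that launches mysqld
#    (e.g. near the top of the init script, before mysqld_safe starts):
ulimit -c unlimited

# 3. Verify the new soft limit:
ulimit -c    # should print "unlimited"
```

After the next crash, the core file should then appear in the datadir alongside the error log.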

Also, how heavy is the load on your server? Would it be possible for you to enable the general log until the next crash? (It comes with some performance cost, and I see you have quite a lot of simultaneous connections, so if you can't do it, that's understandable.)
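For reference, the general log can be toggled at runtime without a server restart. A sketch assuming root access and a hypothetical log path (adjust '/var/log/mysql/general.log' for your installation):

```shell
# Enable the general query log on a running MariaDB 5.5 server:
mysql -u root -p -e "
  SET GLOBAL general_log_file = '/var/log/mysql/general.log';
  SET GLOBAL general_log      = 'ON';
"

# After the next crash has been captured, turn it off again to avoid
# the ongoing performance cost:
# mysql -u root -p -e "SET GLOBAL general_log = 'OFF';"
```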

Comment by Elena Stepanova [ 2015-05-18 ]

Closing as incomplete for now. If you have more information, please comment to re-open.

Generated at Thu Feb 08 07:23:47 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.