[MDEV-4892] MariaDB Crash after setting innodb_dict_size Created: 2013-08-13  Updated: 2014-11-10  Resolved: 2014-11-10

Status: Closed
Project: MariaDB Server
Component/s: Storage Engine - XtraDB
Affects Version/s: 5.5.32
Fix Version/s: 5.5.36

Type: Bug Priority: Minor
Reporter: Daniel Guzman Burgos Assignee: Unassigned
Resolution: Fixed Votes: 0
Labels: xtradb
Environment:

Ubuntu 12.04.2 LTS, running on an m1.medium EC2 instance (4 GB RAM)



 Description   

Hi!

In order to limit the amount of memory used by XtraDB's data dictionary, I've set innodb_dict_size_limit to 128M (134217728 bytes). When that limit was reached, the server crashed.

The dictionary memory status values just before the crash were:

Innodb_dict_tables	13571
Innodb_mem_dictionary	134211714
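(For reference, these counters can be read at runtime with a query like the one below; this is a sketch using the XtraDB status variable names as they appear in this report.)

```sql
-- XtraDB data dictionary counters, as reported by SHOW GLOBAL STATUS
SHOW GLOBAL STATUS
WHERE Variable_name IN ('Innodb_dict_tables', 'Innodb_mem_dictionary');
```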

my.cnf related config:

...
innodb_log_buffer_size                = 4M
innodb_flush_log_at_trx_commit        = 2
innodb_flush_method                   = O_DIRECT
innodb_buffer_pool_size               = 1G
innodb_buffer_pool_populate           = 1
innodb_adaptive_hash_index_partitions = 64
innodb_dict_size_limit                = 128M
....

The related error log:

130813 16:35:24  InnoDB: Assertion failure in thread 140137670870784 in file lock0lock.c line 3865
InnoDB: Failing assertion: (table->locks).count > 0
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
130813 16:35:24 [ERROR] mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
 
To report this bug, see http://kb.askmonty.org/en/reporting-bugs
 
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed, 
something is definitely wrong and this may fail.
 
Server version: 5.5.32-MariaDB-1~precise
key_buffer_size=536870912
read_buffer_size=2097152
max_used_connections=3
max_threads=102
thread_count=1
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 943883 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
 
Thread pointer: 0x0x7f74626f0ad0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x7f7458177800 thread_stack 0x48000
??:0(??)[0x7f745bdd522b]
??:0(??)[0x7f745b9fd561]
??:0(??)[0x7f745ace8cb0]
??:0(??)[0x7f74598f3425]
??:0(??)[0x7f74598f6b8b]
??:0(??)[0x7f745bd55fd3]
??:0(??)[0x7f745bd5a73b]
??:0(??)[0x7f745bcb3388]
??:0(??)[0x7f745bcb4b4b]
??:0(??)[0x7f745bc568ef]
??:0(??)[0x7f745bc5af1b]
??:0(??)[0x7f745b9fe5b7]
??:0(??)[0x7f745ba00e82]
??:0(??)[0x7f745b98fe68]
??:0(??)[0x7f745babecc3]
??:0(??)[0x7f745b85179d]
??:0(??)[0x7f745b8527d8]
??:0(??)[0x7f745ace0e9a]
??:0(??)[0x7f74599b0cbd]
 
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort.
Query (0x0): is an invalid pointer
Connection ID (thread ID): 16
Status: NOT_KILLED
 
Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=off,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=off,table_elimination=on,extended_keys=off
 
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
130813 16:35:25 mysqld_safe Number of processes running now: 0
130813 16:35:25 mysqld_safe mysqld restarted
130813 16:35:25 InnoDB: The InnoDB memory heap is disabled
130813 16:35:25 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130813 16:35:25 InnoDB: Compressed tables use zlib 1.2.3.4
130813 16:35:25 InnoDB: Using Linux native AIO
130813 16:35:25 InnoDB: Initializing buffer pool, size = 1.0G
130813 16:35:26 InnoDB: Completed initialization of buffer pool
130813 16:35:26 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 147157826259
130813 16:35:26  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 147157833479
130813 16:35:27  InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 
InnoDB: Apply batch completed
InnoDB: In a MySQL replication slave the last master binlog file
InnoDB: position 92846717, file name mysql-bin.000570
InnoDB: and relay log file
InnoDB: position 31669251, file name /var/log/mysql/mysqld-relay-bin.000054
130813 16:35:29  InnoDB: Waiting for the background threads to start
130813 16:35:30 Percona XtraDB (http://www.percona.com) 5.5.32-MariaDB-30.2 started; log sequence number 147157833479
130813 16:35:30 [Note] Server socket created on IP: '0.0.0.0'.
130813 16:35:30 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.32-MariaDB-1~precise'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution
130813 16:39:56 [Warning] IP address '172.17.0.63' could not be resolved: Name or service not known

After setting innodb_dict_size_limit back to 0, the server stopped crashing.
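(As a workaround sketch: in XtraDB 5.5, innodb_dict_size_limit is, to my understanding, a dynamic global variable, so the limit can be disabled without a restart; 0 means no limit.)

```sql
-- Workaround: disable the dictionary size limit at runtime (0 = unlimited)
SET GLOBAL innodb_dict_size_limit = 0;
```

To make the change permanent, also set `innodb_dict_size_limit = 0` (or remove the line) in my.cnf.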



 Comments   
Comment by Elena Stepanova [ 2013-08-13 ]

Hi,

Was it a one-time crash, or did it crash every time the limit was hit?

Comment by Daniel Guzman Burgos [ 2013-08-13 ]

Hi!
It crashed every time the limit was hit, regardless of whether the dictionary size was increasing or decreasing at the time:

Innodb_dict_tables 3283
Innodb_mem_dictionary 35080163

Innodb_dict_tables 2147
Innodb_mem_dictionary 24723920

Innodb_dict_tables 1338
Innodb_mem_dictionary 17014798

ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error: 104

OR:

Innodb_dict_tables 1199
Innodb_mem_dictionary 15728418

Innodb_dict_tables 1313
Innodb_mem_dictionary 16838853

Innodb_dict_tables 1339
Innodb_mem_dictionary 17019692

ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)

Comment by Daniel Guzman Burgos [ 2013-08-13 ]

Also reported for percona server:
https://bugs.launchpad.net/percona-server/+bug/758788

Comment by Elena Stepanova [ 2014-11-10 ]

The Percona bug was fixed in 5.5.36.
The fix comes with a test case, but for me it doesn't cause any failures either before or after the fix (even on Percona server). Still, since it's an XtraDB fix, I assume we merged it into MariaDB along with the normal XtraDB merge.
