Details
Type: Bug
Status: Confirmed
Priority: Minor
Resolution: Unresolved
Affects Version/s: 10.2.9
Description
Using default settings, I got the following error once the DB reached 250 GB:
2017-10-10 7:31:56 140711626262272 [ERROR] LibRocksDB:[/opt/slapgrid/7c3e140d2be16dd0fc4001874fc875be/parts/mariadb__compile__/mariadb-10.2.9/storage/rocksdb/rocksdb/db/compaction_job.cc:1240] [default] [JOB 13239] OpenCompactionOutputFiles for table #28903 fails at NewWritableFile with status IO error: While open a file for appending: ./.rocksdb/028903.sst: Too many open files
2017-10-10 7:31:56 140711626262272 [ERROR] LibRocksDB:[/opt/slapgrid/7c3e140d2be16dd0fc4001874fc875be/parts/mariadb__compile__/mariadb-10.2.9/storage/rocksdb/rocksdb/db/db_impl_compaction_flush.cc:1335] Waiting after background compaction error: IO error: While open a file for appending: ./.rocksdb/028903.sst: Too many open files, Accumulated background error counts: 1
mysqld: /opt/slapgrid/7c3e140d2be16dd0fc4001874fc875be/parts/mariadb__compile__/mariadb-10.2.9/storage/rocksdb/rocksdb/db/db_impl.cc:710: void rocksdb::DBImpl::MarkLogsSynced(uint64_t, bool, const rocksdb::Status&): Assertion `log.getting_synced' failed.
171010 7:31:57 [ERROR] mysqld got signal 6 ;
I increased the limits so that the DB can reach 6 TB:
- rocksdb_max_open_files = 32768
- rocksdb_merge_buf_size = 256M
However, RocksDB continues to split the DB into files of 64 MB, so I'm not sure I changed the correct variable.
Anyway, we think that RocksDB should adapt automatically as the DB grows.
edit: Increasing rocksdb_max_open_files does nothing if open_files_limit is not also increased. Maybe one of them should be calculated automatically from the other.
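For illustration, a my.cnf fragment raising both limits together might look like this (the two rocksdb values are the ones from this report; the open_files_limit value is only an example, pick one that fits your system):

[mysqld]
open_files_limit       = 65535    # server/OS-side cap; must be raised together with the RocksDB setting
rocksdb_max_open_files = 32768    # otherwise this setting has no effect
rocksdb_merge_buf_size = 256M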
Issue Links
- relates to MDEV-14220: (draft) set global rocksdb_pause_background_work=1 freezes (Closed)
- relates to MDEV-13975: [out of disk space assert] I was uploading 100 million records to a rocsdb table and Mariadb crashed (Open)
In my understanding, no matter how we try to auto-adjust rocksdb_open_files_limit, there will be practical situations where an I/O attempt receives errno 24 ("Too many open files").
And the engine must survive that error, e.g. by just returning a user error, the same way the server does when it fails to open an .frm file (see the sketch below).
So I suggest looking at auto-adjustment of rocksdb_open_files_limit in another ticket.
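A minimal sketch of that survival path (hypothetical helper with the invented name open_sst_for_append; this is not the actual MyRocks code):

#include <cerrno>
#include <fcntl.h>

/* Returns the fd on success, -1 on a recoverable descriptor shortage
   (errno 24 / EMFILE, or ENFILE), -2 on any other failure. */
static int open_sst_for_append(const char *path)
{
  int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0640);
  if (fd >= 0)
    return fd;
  if (errno == EMFILE || errno == ENFILE)
    return -1;   /* descriptor shortage: report a user error, don't assert */
  return -2;     /* any other I/O failure keeps the existing handling */
}

The point is only the classification: running out of descriptors becomes an error the statement can report, instead of a condition that aborts mysqld.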
AFAIK the current formula for adjusting table_open_cache and max_connections is something like:
open_files_limit > 9 + 2*table_open_cache + max_connections
That comes with the assumption that for 100 open MyISAM tables and 50 simultaneous connections the server will need at least 2*100 + 50 file descriptors (both the .MYI and the .MYD file of each table are counted).
In my understanding RocksDB may be hungry for file descriptors with no correlation to either table_open_cache or max_connections (e.g. a single huge RocksDB table may be stored in many .sst files).
So I think we should try to investigate something like:
size_t requested_limit = 9 + 2*configured_table_open_cache + configured_max_connections + configured_rocksdb_open_files_limit;
/* when requested_limit exceeds the real open_files_limit, shrink all consumers by the same factor */
double coefficient = (double) requested_limit / open_files_limit;
table_open_cache = configured_table_open_cache / coefficient;
max_connections = configured_max_connections / coefficient;
rocksdb_open_files_limit = configured_rocksdb_open_files_limit / coefficient;
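A compilable toy version of that adjustment (every name and number is invented for this sketch; this is not the server's real startup code):

#include <algorithm>
#include <cstdio>

int main()
{
  const double os_open_files_limit           = 1024;   /* what the OS actually grants */
  const double configured_table_open_cache   = 2000;
  const double configured_max_connections    = 151;
  const double configured_rocksdb_open_files = 32768;

  const double requested = 9 + 2 * configured_table_open_cache
                             + configured_max_connections
                             + configured_rocksdb_open_files;

  /* Shrink every consumer by the same factor when the configured sum
     exceeds what the OS grants; never scale up. */
  const double coefficient = std::max(1.0, requested / os_open_files_limit);

  std::printf("table_open_cache   = %.0f\n", configured_table_open_cache / coefficient);
  std::printf("max_connections    = %.0f\n", configured_max_connections / coefficient);
  std::printf("rocksdb open files = %.0f\n", configured_rocksdb_open_files / coefficient);
  return 0;
}

With these numbers the requested total is 36928 descriptors against an OS limit of 1024, so each consumer shrinks by a factor of about 36.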