Details
Type: Bug
Status: Open
Priority: Major
Resolution: Unresolved
Affects Version: 10.0.21
Environment: centos 6.7
Description
Hello,
Following my recent thread about a problem with chained replication (MDEV-8929), a different problem has appeared.
On the master I set binlog_commit_wait_count = 10 (also tested 20, 30, 100 and 1000) and binlog_commit_wait_usec = 10000 (also 20000 and the default).
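For reference, this is roughly how the settings were applied on the master (a sketch only; the values cycled through the ones listed above, this is not a recommendation):

-- Sketch of the group-commit settings under test, applied dynamically on the master:
SET GLOBAL binlog_commit_wait_count = 10;    -- also tried 20, 30, 100, 1000
SET GLOBAL binlog_commit_wait_usec  = 10000; -- also tried 20000 and the default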
Everything worked fine until the load increased. At that point something happened to MariaDB: the processlist started filling up with threads waiting to log in:
| unauthenticated user | connecting host | NULL | Connect | NULL | login | NULL |
MariaDB started responding slowly; even the 'mysql' shell command took time to execute. The PHP backends went down because of the slow responses. The load average stayed low (<1), however.
I tried both thread_handling modes, pool-of-threads and one-thread-per-connection. Everything works fine until I reach something like 10k qps.
The test environment: MyISAM tables without indexes, delayed inserts, and ROW-based replication, with the binary logs written to a separate partition (no I/O problems).
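To illustrate the kind of setup described, something along these lines (the table name and columns are hypothetical; only the storage engine, the lack of indexes and the delayed inserts match the description):

-- Hypothetical test table: MyISAM, no indexes, populated with delayed inserts
-- while the master runs with binlog_format = ROW.
CREATE TABLE test_load (
  ts  DATETIME,
  val VARCHAR(255)
) ENGINE=MyISAM;

INSERT DELAYED INTO test_load VALUES (NOW(), 'payload');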
When I apply the binlog_commit_wait_count setting on the master, sometimes it takes a few seconds to take effect and sometimes it applies immediately.
But the result is always the same: instead of 10k qps (or more) and around 1k threads connected from the test servers, I end up with 1-2 qps and >10k connected threads, all of them those unauthenticated users. In the end everything stops working. All I see in the DB are a few sleeping threads and mostly unauthenticated processes.
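For what it's worth, the pile-up of stalled connections can be counted with a query like this (illustrative, not part of the original test):

-- Count the stalled, not-yet-authenticated connections
-- (they show up with USER = 'unauthenticated user' in the processlist):
SELECT COUNT(*) AS stalled_logins
  FROM information_schema.PROCESSLIST
 WHERE USER = 'unauthenticated user';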
I guess this is caused by heavy internal work on the small number of queries (10/20/30) that need to be group-committed. If the server runs at 10k qps, each query takes about 100 microseconds on average, so a binlog_commit_wait_count of 10/20/30 is a very small batch that keeps MariaDB busy deciding when to commit, which in turn causes the problems with connections/threads and so on. Setting wait_count higher only "helps" MariaDB get stuck faster, though it's a matter of a few seconds either way. Under such load any binlog_commit_wait_usec setting seems to be a "just-in-case" limit that is largely irrelevant.
So, all in all, I ended up with a standard master and a slave that couldn't catch up, because its parallel replication threads had nothing to do while nothing else could be committed on the master. It seems standard group commit doesn't help much in this case.
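As a side note, the slave-side parallel worker pool and its activity can be checked roughly like this (illustrative; exact thread states vary by version):

-- On the slave: how many parallel workers are configured, and what the
-- replication threads are currently doing (workers appear as 'system user').
SHOW GLOBAL VARIABLES LIKE 'slave_parallel_threads';
SELECT ID, USER, STATE
  FROM information_schema.PROCESSLIST
 WHERE USER = 'system user';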
| Binlog_commits                        | 9842017          |
| Binlog_group_commits                  | 9757972          |
Plus, whatever settings I apply on the master, it is always only the Binlog_group_commit_trigger_timeout counter that gets updated (which in my opinion also indicates a problem under heavy load), even with the count set to 50 queries and binlog_commit_wait_usec kept at its default.
MariaDB [(none)]> show global status like 'binlog%';
+---------------------------------------+------------------+
| Variable_name                         | Value            |
+---------------------------------------+------------------+
| Binlog_commits                        | 19042854         |
| Binlog_group_commits                  | 18832527         |
| Binlog_group_commit_trigger_count     | 0                |
| Binlog_group_commit_trigger_lock_wait | 0                |
| Binlog_group_commit_trigger_timeout   | 12569            |
| Binlog_snapshot_file                  | mysql-bin.012932 |
| Binlog_snapshot_position              | 55357394         |
| Binlog_bytes_written                  | 7571633389       |
| Binlog_cache_disk_use                 | 0                |
| Binlog_cache_use                      | 0                |
| Binlog_stmt_cache_disk_use            | 362              |
| Binlog_stmt_cache_use                 | 19042644         |
+---------------------------------------+------------------+
12 rows in set (0.07 sec)
MariaDB [(none)]> show global variables like 'binlog%';
+-----------------------------------------+--------+
| Variable_name                           | Value  |
+-----------------------------------------+--------+
| binlog_annotate_row_events              | OFF    |
| binlog_cache_size                       | 32768  |
| binlog_checksum                         | NONE   |
| binlog_commit_wait_count                | 50     |
| binlog_commit_wait_usec                 | 100000 |
| binlog_direct_non_transactional_updates | OFF    |
| binlog_format                           | ROW    |
| binlog_optimize_thread_scheduling       | ON     |
| binlog_stmt_cache_size                  | 32768  |
+-----------------------------------------+--------+
9 rows in set (1.50 sec)
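Judging from the counters above, grouping is barely happening at all; the average number of commits per group commit can be computed directly from the status values (a sketch using information_schema):

-- Average transactions per group commit, from the global status counters.
-- With the numbers above: 19042854 / 18832527 ~= 1.01, i.e. essentially
-- one transaction per "group".
SELECT
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'BINLOG_COMMITS') /
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'BINLOG_GROUP_COMMITS') AS commits_per_group_commit;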
Please advise
thanks!
Alex