Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Incomplete
Affects Version: 10.1.12
Environment: CentOS 7
Description
Using the following test SQL:
DROP DATABASE IF EXISTS `my_test`;
CREATE DATABASE `my_test`;
USE `my_test`;

DROP TABLE IF EXISTS `test`;
CREATE TABLE `test` (
  `id` TINYINT UNSIGNED NOT NULL AUTO_INCREMENT,
  `size` INT UNSIGNED NOT NULL,
  PRIMARY KEY (`id`)
)ENGINE=Innodb;

insert into my_test.test values(0,100);
SET profiling = 1;
#this update has short query end
update my_test.test set size=100 where id=1;
#this update has long query end
update my_test.test set size=size+100 where id=1;
SHOW PROFILE for query 1;
SHOW PROFILE for query 2;
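SHOW PROFILES can also be appended at the end to list the profiled statements and confirm which query number maps to which update (optional, only an aid, not part of the test itself):

SHOW PROFILES;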
I find that update my_test.test set size=size+100 where id=1; has a very long "query end" stage:
[root@localhost code]# /usr/local/mysql/bin/mysql < test.sql | grep "query end"
query end 0.000007
query end 0.005163
I have used the same test.sql on MySQL 5.1, and no such thing happened:
[root@localhost ~]# mysql < test.sql | grep "query end"
query end 0.000003
query end 0.000002
Of course, the config files of MariaDB 10.1.12 and MySQL 5.1.73 are different. But the update SQL is so simple that I don't think a difference in config files can cause such a huge performance slowdown. I have uploaded my MariaDB config file anyway.
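To make the config comparison concrete, the durability- and query-cache-related variables on both servers can be dumped with something like the commands below (only a sketch; the client paths match the ones used above, and the variable list is just the usual candidates, not a claim about the cause):

# MariaDB 10.1.12 (the build under /usr/local/mysql, as above)
[root@localhost code]# /usr/local/mysql/bin/mysql -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_flush_log_at_trx_commit','sync_binlog','log_bin','query_cache_type','query_cache_size')"
# MySQL 5.1.73 (the packaged client, as above)
[root@localhost ~]# mysql -e "SHOW VARIABLES WHERE Variable_name IN ('innodb_flush_log_at_trx_commit','sync_binlog','log_bin','query_cache_type','query_cache_size')"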
=============updated in 2016/03/17=================
Sorry for my brief description yesterday. I have now rerun the test.sql on my VirtualBox CentOS 7 machine and checked the full stages; the result is the same:
[root@localhost code]# /usr/local/mysql/bin/mysql < test.sql
Status                        Duration
starting                      0.000084
checking permissions          0.000012
Opening tables                0.000017
After opening tables          0.000006
System lock                   0.000004
Table lock                    0.000006
init                          0.000131
updating                      0.000059
end                           0.000010
query end                     0.000009
closing tables                0.000004
Unlocking tables              0.000008
freeing items                 0.000013
updating status               0.000011
cleaning up                   0.000004

Status                        Duration
starting                      0.000036
checking permissions          0.000006
Opening tables                0.000012
After opening tables          0.000005
System lock                   0.000004
Table lock                    0.000005
init                          0.000037
updating                      0.000073
end                           0.000007
Waiting for query cache lock  0.000003
end                           0.000009
query end                     0.003902
closing tables                0.000012
Unlocking tables              0.000012
freeing items                 0.000010
updating status               0.000025
cleaning up                   0.000004
Actually, I have also tried to batch-execute "update my_test.test set size=100 where id=1;" 10000 times and "update my_test.test set size=size+100 where id=1;" 10000 times, and got the following results:
[root@localhost code]# time /usr/local/mysql/bin/mysql < stmt1.sql

real 0m0.667s
user 0m0.037s
sys  0m0.102s

[root@localhost code]# time /usr/local/mysql/bin/mysql < stmt2.sql

real 0m56.015s
user 0m0.187s
sys  0m0.316s

[root@localhost code]# wc -l stmt1.sql stmt2.sql
10000 stmt1.sql
10000 stmt2.sql
20000 total
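stmt1.sql and stmt2.sql are simply the two updates repeated 10000 times each; files like these can be generated with a plain loop along these lines (only a sketch, the exact generation method is unimportant):

# build the two batch files; each statement is fully qualified, so no USE is needed
[root@localhost code]# for i in $(seq 10000); do echo "update my_test.test set size=100 where id=1;"; done > stmt1.sql
[root@localhost code]# for i in $(seq 10000); do echo "update my_test.test set size=size+100 where id=1;"; done > stmt2.sql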
I found this problem because, during our performance testing, I used pt-query-digest to analyze the slow query log and found that the exec time of a simple update SQL is 88% of the overall exec time:
# Profile
# Rank Query ID           Response time   Calls R/Call V/M   Item
# ==== ================== =============== ===== ====== ===== =============
#    1 0x919A71601652CD9C 1880.7490 88.8%  1000 1.8807  0.21 UPDATE uisdb.maildir
#    2 0x6A6BF8323EF9A924   64.4834  3.0%  1000 0.0645  0.03 INSERT UPDATE uisdb.mail_entity
#    3 0x0CFA48C3B4E6316C   60.1811  2.8%  1000 0.0602  0.02 INSERT uisdb.mail
#    4 0xB577AD2502F24975   49.4235  2.3%  1103 0.0448  0.02 SELECT uisdb.mail_entity
# MISC 0xMISC               63.1927  3.0% 44938 0.0014   0.0 <53 ITEMS>

# Query 1: 20.83 QPS, 39.18x concurrency, ID 0x919A71601652CD9C at byte 40584611
# This item is included in the report because it matches --limit.
# Scores: V/M = 0.21
# Time range: 2016-03-17 01:29:20 to 01:30:08
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count          2    1000
# Exec time     88   1881s    20ms      3s      2s      3s   628ms      2s
# Lock time     99   1833s    57us      3s      2s      3s   625ms      2s
# Rows sent      0       0       0       0       0       0       0       0
# Rows examine   3    1000       1       1       1       1       0       1
# Rows affecte  13    1000       1       1       1       1       0       1
# Query size     0  89.84k      92      92      92      92       0      92
# String:
# Hosts        localhost
# Users        uis
# Query_time distribution
#   1us
#  10us
# 100us
#   1ms
#  10ms  ##
# 100ms  #####
#    1s  ################################################################
#  10s+
# Tables
#    SHOW TABLE STATUS FROM `uisdb` LIKE 'maildir'\G
#    SHOW CREATE TABLE `uisdb`.`maildir`\G
update uisdb.maildir set next_uid=last_insert_id(next_uid)+1 where mb_id=12 and maildir_id=1\G
# Converted for EXPLAIN
# EXPLAIN /*!50100 PARTITIONS*/
select next_uid=last_insert_id(next_uid)+1 from uisdb.maildir where mb_id=12 and maildir_id=1\G
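For reference, the report above came from running pt-query-digest directly over the slow query log, roughly like this (the slow log path here is only an example):

[root@localhost ~]# pt-query-digest /var/lib/mysql/slow.log > digest-report.txt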
uisdb.maildir is a simple table with 3000+ rows, and the WHERE condition uses its primary key:
MariaDB [(none)]> explain select next_uid=last_insert_id(next_uid)+1 from uisdb.maildir where mb_id=12 and maildir_id=1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: maildir
         type: const
possible_keys: PRIMARY,mb_id
          key: PRIMARY
      key_len: 12
          ref: const,const
         rows: 1
        Extra:
1 row in set (0.30 sec)
Our servers are not particularly tuned for MariaDB except through the my.cnf files. If you need more details about the environment, please tell me.
PS: when I googled this problem, someone seemed to have encountered the same problem, at this link