[MDEV-30820] slow log Rows_examined out of range Created: 2023-03-09  Updated: 2023-11-02  Resolved: 2023-09-22

Status: Closed
Project: MariaDB Server
Component/s: Server
Affects Version/s: 10.3, 10.4, 10.5, 10.6, 10.8, 10.9, 10.10, 10.11, 11.0
Fix Version/s: 10.4.32, 10.5.23, 10.6.16, 10.10.7, 10.11.6, 11.0.4, 11.1.3

Type: Bug Priority: Critical
Reporter: Muhammad Irfan Assignee: Oleksandr Byelkin
Resolution: Fixed Votes: 0
Labels: performance

Attachments: Text File huge_value_rows_examined.txt    
Issue Links:
Duplicate
is duplicated by MDEV-22860 Change EXAMINED_ROWS in I_S.PROCESSLI... Closed
Relates
relates to MDEV-22860 Change EXAMINED_ROWS in I_S.PROCESSLI... Closed
relates to MDEV-31742 incorrect examined rows in case of st... Closed
relates to MDEV-32518 Test failure: ./mtr --no-reorder main... Closed

 Description   

There is an issue with logging queries to the slow log table.
With log_output=TABLE, slow-log entries are inserted into mysql.slow_log, a CSV table whose 'rows_examined' column is of type INT. When the value exceeds the maximum allowed for INT, the INSERT into the slow_log table fails and an error is logged to the error log:

([ERROR] Unable to write to mysql.slow_log)

This is the corresponding output when log_output is changed to FILE:

# Thread_id: 380229  Schema: andre_k  QC_hit: No
# Query_time: 0.028396  Lock_time: 0.001841  Rows_sent: 2  Rows_examined: 10958383778436017378
# Rows_affected: 0  Bytes_sent: 124
# Tmp_tables: 2  Tmp_disk_tables: 0  Tmp_table_sizes: 380968
# Full_scan: Yes  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
# Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: Yes
SET timestamp=1677147120;
SELECT  get_id(CAST(aes_decrypt(tab1.NAME_F,'V41iNM0n4') AS char),'KM_ID_PL') as get_string,
    (CASE WHEN (SELECT ID FROM tab2 where tab2.TAB1_ID = tab1.ID LIMIT 1) IS NULL THEN 0 ELSE 1 END) AS IS_ATT
FROM tab1
ORDER BY 2 DESC
LIMIT 2;

The use case huge_value_rows_examined.txt is attached.
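As an aside (not part of the report itself), the overflow can be verified arithmetically: the Rows_examined value in the excerpt above exceeds not only the signed INT range of the slow_log column, but even the signed BIGINT range, so only an unsigned 64-bit counter could represent it. A quick check in Python:

```python
# Value taken from the slow-log excerpt in this report.
rows_examined = 10958383778436017378

int_max = 2**31 - 1              # signed INT, the mysql.slow_log column type
bigint_max = 2**63 - 1           # signed BIGINT
bigint_unsigned_max = 2**64 - 1  # unsigned BIGINT (ulonglong)

print(rows_examined > int_max)              # True: the INSERT into slow_log fails
print(rows_examined > bigint_max)           # True: even signed BIGINT overflows
print(rows_examined <= bigint_unsigned_max) # True: fits only in BIGINT UNSIGNED
```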



 Comments   
Comment by Elena Stepanova [ 2023-03-19 ]

Same as attached, in MTR form:

--source include/have_innodb.inc
 
set @log_output.save= @@log_output, @slow_log.save= @@slow_query_log;
set global log_output= 'TABLE', slow_query_log= ON;
set long_query_time= 0.000001;
create database db;
use db;
 
--delimiter //
CREATE OR REPLACE FUNCTION `get_id`(`INPUT_STRING` VARCHAR(50), `DECRYPT_KEY` CHAR(20)) RETURNS int(11)
BEGIN
    DECLARE REQUEST_ID INT;
    SET REQUEST_ID = IF(
        LOCATE('#$#$#$ LHP_ID_', INPUT_STRING) > 0
        AND (INPUT_STRING REGEXP DECRYPT_KEY) > 0,
        SPLIT_STRING(
            SUBSTRING_INDEX(
                SUBSTRING_INDEX(INPUT_STRING, '#$#$#$ LHP_ID_', -1),
                '#$#$#$',
                1
            ),
            '-',
            2
        ),
        NULL
    );
    SET REQUEST_ID = IF(
        ISNULL(REQUEST_ID) OR (REQUEST_ID = '') OR (REQUEST_ID = 0),
        NULL,
        REQUEST_ID
    );
    RETURN (REQUEST_ID);
END
//
 
CREATE OR REPLACE FUNCTION `SPLIT_STRING`(`str` VARCHAR(255), `delim` VARCHAR(12), `pos` INT) RETURNS varchar(255) CHARSET utf8mb4
RETURN REPLACE(
        SUBSTRING(
                SUBSTRING_INDEX(str , delim , pos) ,
                CHAR_LENGTH(
                        SUBSTRING_INDEX(str , delim , pos - 1)
                ) + 1
        ) ,
        delim ,
        ''
)
//
 
CREATE TABLE `tab1` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `NAME_F` varchar(50) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 //
 
CREATE TABLE `tab2` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `TAB1_ID` int(11) DEFAULT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 //
 
 
for i in 1..100 do insert into tab1 values (null,'qwerty'); end for ; //
for i in 1..1000 do insert into tab2 values (null,round(rand()*10000)); end for ; //
 
--delimiter ;
 
SELECT
    get_id(CAST(aes_decrypt(tab1.NAME_F,'V41iNM0n4') AS char),'KM_ID_PL') as get_string,
    (CASE WHEN (SELECT ID FROM tab2 where tab2.TAB1_ID = tab1.ID LIMIT 1) IS NULL THEN 0 ELSE 1 END) AS IS_ATT
FROM
    tab1
ORDER BY 2 DESC
LIMIT 2;
 
set global log_output= @log_output.save, slow_query_log= @slow_log.save;
drop database db;

10.4 99ee200b

2023-03-19 22:50:15 9 [ERROR] Failed to write to mysql.slow_log: 

Comment by Kyle Hutchinson [ 2023-07-12 ]

sanja any update on this ticket?

Comment by Oleksandr Byelkin [ 2023-07-13 ]

kyle.hutchinson Not yet, sorry

Comment by Oleksandr Byelkin [ 2023-07-13 ]

(rr) p thd->get_examined_row_count()
$1 = 6468184979087718979

Comment by Oleksandr Byelkin [ 2023-07-13 ]

So the examined rows counter should probably be a big int

Comment by Oleksandr Byelkin [ 2023-07-14 ]

commit 845fbe9ce01fac29c446b4e898b2242ec814ade8 (HEAD -> bb-10.4-MDEV-30820, origin/bb-10.4-MDEV-30820)
Author: Oleksandr Byelkin <sanja@mariadb.com>
Date:   Thu Jul 13 13:39:54 2023 +0200
 
    MDEV-30820 slow log Rows_examined out of range
    
    Fix row counters to be able to get any possible value.

Comment by Oleksandr Byelkin [ 2023-07-19 ]

commit 55a27bac80ca12999f1239c96cb6f5d2e0a3f00c
Author: Oleksandr Byelkin <sanja@mariadb.com>
Date:   Thu Jul 13 13:39:54 2023 +0200
 
    MDEV-30820 slow log Rows_examined out of range
    
    Fix row counters to be able to get any possible value.

Comment by Oleksandr Byelkin [ 2023-08-11 ]

kyle.hutchinson The author of the bug (Monty) insists on another fix, so I will do it

Comment by Oleksandr Byelkin [ 2023-08-11 ]

The author wants resetting the counters instead of removing and restoring them (so the above should be redone)

Comment by Michael Widenius [ 2023-09-08 ]

Short recap:

For sub statements we reset the counters and calculate only what happens in the sub statement. This is fine.
Sub statements only show up in the slow query log if LOG_SLOW_DISABLE_SP is not set.
For the top statement, we collect counters for the top statement + all included sub statements.

One could argue that for the top statement we should not print the sub statement status (except if LOG_SLOW_DISABLE_SP is set).
Historically we have printed everything for the top statement and I think this is fine for now.
It may cause confusion if the reported statistics change just because a user changes the value of
LOG_SLOW_DISABLE_SP (by default it is set).

This means that your current patch is fine as it only extends the counters.
What should probably be done in the documentation is to clearly state that slow query log statistics for a statement always includes all statistics for all called stored procedures or stored functions.
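The reset/accumulate semantics described in the recap above can be modeled roughly as follows (a sketch with invented names, in Python for illustration only; the server's actual counter code differs): each sub statement starts from zero and counts only its own examined rows, while the top statement accumulates its own rows plus those of all included sub statements.

```python
def examined_rows(top_own, sub_statements):
    """Model: sub counters are reset per sub statement; the top counter
    accumulates everything (illustrative only, not server code)."""
    sub_counts = []
    top_total = top_own
    for rows in sub_statements:
        sub_counts.append(rows)  # sub-statement counter: reset, then only its own rows
        top_total += rows        # top-statement counter: includes all sub statements
    return top_total, sub_counts

top, subs = examined_rows(top_own=100, sub_statements=[10, 20, 30])
print(top)   # 160: top statement reports its own rows plus all sub-statement rows
print(subs)  # [10, 20, 30]: each sub statement reports only its own work
```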

Comment by Kyle Hutchinson [ 2023-09-20 ]

sanja It looks like the existing patch has been approved; do you think this can be included in the next release?

Comment by Daniel Black [ 2023-09-21 ]

sanja I wouldn't just yet, looking at the figures:

--- /home/dan/repos/mariadb-server-10.4/mysql-test/main/log_slow_debug.result	2023-09-21 19:35:56.918844459 +1000
+++ /home/dan/repos/mariadb-server-10.4/mysql-test/main/log_slow_debug.reject	2023-09-21 19:43:41.965315498 +1000
@@ -246,6 +246,12 @@
 1	0
 2	0
 SET GLOBAL debug_dbug=@old_dbug;
+SELECT sql_text,rows_examined FROM mysql.slow_log;
+sql_text	rows_examined
+SET SESSION long_query_time= 0	0
+SET GLOBAL debug_dbug="+d,debug_huge_number_of_examined_rows"	0
+SELECT * FROM tab_MDEV_30820 ORDER BY 1	4
+SET GLOBAL debug_dbug=@old_dbug	0
 SET @@long_query_time= @old_long_query_time;
 SET @@global.log_output= @old_log_output;
 SET @@global.slow_query_log= @old_slow_query_log;

It doesn't appear that the very large number ended up in the logs. I foolishly started https://github.com/MariaDB/server/pull/2767 before seeing this commit, thinking all the datatypes needed to be ulonglong. I get the same mtr result, however.

Comment by Oleksandr Byelkin [ 2023-09-22 ]

The large numbers were the result of a counting bug, but theoretically they are possible.

Generated at Thu Feb 08 10:19:09 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.