Details

    Description

      There is an issue with logging queries to the slow log table.
      Since we set log_output=TABLE and the data type of the 'rows_examined' column of the CSV table mysql.slow_log is INT, an error is written to the error log whenever this value exceeds the maximum allowed for INT:

      ([ERROR] Unable to write to mysql.slow_log)
      

      The INSERT into the slow_log table fails because the maximum INT value is exceeded. This is the corresponding output when log_output is changed to FILE:

      # Thread_id: 380229  Schema: andre_k  QC_hit: No
      # Query_time: 0.028396  Lock_time: 0.001841  Rows_sent: 2  Rows_examined: 10958383778436017378
      # Rows_affected: 0  Bytes_sent: 124
      # Tmp_tables: 2  Tmp_disk_tables: 0  Tmp_table_sizes: 380968
      # Full_scan: Yes  Full_join: No  Tmp_table: Yes  Tmp_table_on_disk: No
      # Filesort: Yes  Filesort_on_disk: No  Merge_passes: 0  Priority_queue: Yes
      SET timestamp=1677147120;
      SELECT get_id(CAST(aes_decrypt(tab1.NAME_F,'V41iNM0n4') AS char),'KM_ID_PL') as get_string,
          (CASE WHEN (SELECT ID FROM tab2 where tab2.TAB1_ID = tab1.ID LIMIT 1) IS NULL THEN 0 ELSE 1 END) AS IS_ATT
      FROM tab1
      ORDER BY 2 DESC
      LIMIT 2;
      

      A use case, huge_value_rows_examined.txt, is attached.
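      For context, the reported Rows_examined value overflows not only INT but also signed BIGINT, while it still fits into an unsigned BIGINT. A quick illustrative check (not part of the original report):

```sql
-- Illustration only: where 10958383778436017378 falls relative to the
-- integer type ranges. INT max = 2147483647, signed BIGINT max =
-- 9223372036854775807, unsigned BIGINT max = 18446744073709551615.
SELECT 10958383778436017378 > 2147483647            AS exceeds_int,
       10958383778436017378 > 9223372036854775807   AS exceeds_signed_bigint,
       10958383778436017378 <= 18446744073709551615 AS fits_unsigned_bigint;
-- All three comparisons evaluate to 1 (true).
```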

      Attachments

        Issue Links

          Activity

            muhammad.irfan Muhammad Irfan created issue -
            muhammad.irfan Muhammad Irfan made changes -
            Field Original Value New Value
            Summary show log Rows_examined out of range slow log Rows_examined out of range
            muhammad.irfan Muhammad Irfan made changes -
            Attachment huge_value_rows_examined.txt [ 68781 ]
            julien.fritsch Julien Fritsch made changes -
            Fix Version/s 10.4 [ 22408 ]
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Fix Version/s 10.3 [ 22126 ]

            Same as attached, in MTR form:

            --source include/have_innodb.inc
             
            set @log_output.save= @@log_output, @slow_log.save= @@slow_query_log;
            set global log_output= 'TABLE', slow_query_log= ON;
            set long_query_time= 0.000001;
            create database db;
            use db;
             
            --delimiter //
            CREATE OR REPLACE FUNCTION `get_id`(`INPUT_STRING` VARCHAR(50), `DECRYPT_KEY` CHAR(20)) RETURNS int(11)
            BEGIN
                DECLARE
                    REQUEST_ID INT ;
                SET
                    REQUEST_ID =
                            IF(
                        (
                            LOCATE('#$#$#$ LHP_ID_', INPUT_STRING)
                        ) > 0 AND
                        (
                            INPUT_STRING REGEXP DECRYPT_KEY
                        )>0,
                        (
                            SPLIT_STRING(
                                SUBSTRING_INDEX(
                                    SUBSTRING_INDEX(INPUT_STRING, '#$#$#$ LHP_ID_', -1),
                                    '#$#$#$',
                                    1
                                ),
                                '-',
                                2
                            )
                    ),
                    NULL
                    ) ;
                            SET
                            REQUEST_ID=IF(
                            ISNULL(REQUEST_ID)OR(REQUEST_ID='')OR(REQUEST_ID=0),
                            NULL,
                            REQUEST_ID
             
                            );
                            RETURN(REQUEST_ID) ;
            END
            //
             
            CREATE OR REPLACE FUNCTION `SPLIT_STRING`(`str` VARCHAR(255), `delim` VARCHAR(12), `pos` INT) RETURNS varchar(255) CHARSET utf8mb4
            RETURN REPLACE(
                    SUBSTRING(
                            SUBSTRING_INDEX(str , delim , pos) ,
                            CHAR_LENGTH(
                                    SUBSTRING_INDEX(str , delim , pos - 1)
                            ) + 1
                    ) ,
                    delim ,
                    ''
            )
            //
             
            CREATE TABLE `tab1` (
            `ID` int(11) NOT NULL AUTO_INCREMENT,
            `NAME_F` varchar(50) DEFAULT NULL,
              PRIMARY KEY (`ID`)
            ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 //
             
             CREATE TABLE `tab2` (
              `ID` int(11) NOT NULL AUTO_INCREMENT,
              `TAB1_ID` int(11) DEFAULT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=InnoDB  DEFAULT CHARSET=utf8mb4 //
             
             
            for i in 1..100 do insert into tab1 values (null,'qwerty'); end for ; //
            for i in 1..1000 do insert into tab2 values (null,round(rand()*10000)); end for ; //
             
            --delimiter ;
             
            SELECT
                get_id(CAST(aes_decrypt(tab1.NAME_F,'V41iNM0n4') AS char),'KM_ID_PL') as get_string,
                (CASE WHEN (SELECT ID FROM tab2 where tab2.TAB1_ID = tab1.ID LIMIT 1) IS NULL THEN 0 ELSE 1 END) AS IS_ATT
            FROM
                tab1
            ORDER BY 2 DESC
            LIMIT 2;
             
            set global log_output= @log_output.save, slow_query_log= @slow_log.save;
            drop database db;
            

            10.4 99ee200b

            2023-03-19 22:50:15 9 [ERROR] Failed to write to mysql.slow_log: 
            

            elenst Elena Stepanova added a comment -
            elenst Elena Stepanova made changes -
            Component/s Server [ 13907 ]
            Fix Version/s 10.8 [ 26121 ]
            Fix Version/s 10.9 [ 26905 ]
            Fix Version/s 10.10 [ 27530 ]
            Fix Version/s 10.11 [ 27614 ]
            Fix Version/s 11.0 [ 28320 ]
            Fix Version/s 10.3 [ 22126 ]
            Affects Version/s 10.3 [ 22126 ]
            Affects Version/s 10.4 [ 22408 ]
            Affects Version/s 10.5 [ 23123 ]
            Affects Version/s 10.8 [ 26121 ]
            Affects Version/s 10.9 [ 26905 ]
            Affects Version/s 10.10 [ 27530 ]
            Affects Version/s 10.11 [ 27614 ]
            Affects Version/s 11.0 [ 28320 ]
            Assignee Oleksandr Byelkin [ sanja ]
            elenst Elena Stepanova made changes -
            Status Open [ 1 ] Confirmed [ 10101 ]
            julien.fritsch Julien Fritsch made changes -
            Fix Version/s 10.8 [ 26121 ]
            julien.fritsch Julien Fritsch made changes -
            Priority Major [ 3 ] Critical [ 2 ]

            sanja any update on this ticket?

            kyle.hutchinson Kyle Hutchinson added a comment -

            kyle.hutchinson Not yet, sorry

            sanja Oleksandr Byelkin added a comment -

            (rr) p thd->get_examined_row_count()
            $1 = 6468184979087718979
            

            sanja Oleksandr Byelkin added a comment -

            So the examined-rows counter should probably be a BIGINT

            sanja Oleksandr Byelkin added a comment -
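            A sketch of what widening the column could look like (hypothetical illustration, not the committed patch; the server refuses to ALTER a log table while logging to it is enabled, hence the toggle):

```sql
-- Hypothetical sketch, not the actual fix: widen the CSV log table's
-- counter column so it can hold any 64-bit unsigned value.
SET GLOBAL slow_query_log = OFF;
ALTER TABLE mysql.slow_log MODIFY rows_examined BIGINT UNSIGNED NOT NULL;
SET GLOBAL slow_query_log = ON;
```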
            sanja Oleksandr Byelkin made changes -
            Status Confirmed [ 10101 ] In Progress [ 3 ]
            sanja Oleksandr Byelkin added a comment - - edited

            commit 845fbe9ce01fac29c446b4e898b2242ec814ade8 (HEAD -> bb-10.4-MDEV-30820, origin/bb-10.4-MDEV-30820)
            Author: Oleksandr Byelkin <sanja@mariadb.com>
            Date:   Thu Jul 13 13:39:54 2023 +0200
             
                MDEV-30820 slow log Rows_examined out of range
                
                Fix row counters to be able to get any possible value.
            
            

            sanja Oleksandr Byelkin made changes -
            Assignee Oleksandr Byelkin [ sanja ] Sergei Golubchik [ serg ]
            Status In Progress [ 3 ] In Review [ 10002 ]
            sanja Oleksandr Byelkin made changes -
            sanja Oleksandr Byelkin added a comment - - edited

            commit 55a27bac80ca12999f1239c96cb6f5d2e0a3f00c
            Author: Oleksandr Byelkin <sanja@mariadb.com>
            Date:   Thu Jul 13 13:39:54 2023 +0200
             
                MDEV-30820 slow log Rows_examined out of range
                
                Fix row counters to be able to get any possible value.
            

            serg Sergei Golubchik made changes -
            Assignee Sergei Golubchik [ serg ] Oleksandr Byelkin [ sanja ]
            Status In Review [ 10002 ] Stalled [ 10000 ]

            kyle.hutchinson The author of the bug (Monty) insists on a different fix, so I will do it

            sanja Oleksandr Byelkin added a comment -

            The author wants the counters to be reset instead of removed/restored (so the above should be redone)

            sanja Oleksandr Byelkin added a comment -

            Short recap:

            For sub statements we reset the counters and calculate only what happens in the sub statement. This is fine.
            Sub statements only show up in the slow query log if LOG_SLOW_DISABLE_SP is not set.
            For the top statement, we collect counters for the top statement + all included sub statements.

            One could argue that for the top statement we should not print the sub statement status (except if LOG_SLOW_DISABLE_SP is set).
            Historically we have printed everything for the top statement and I think this is fine for now.
            It may cause confusion if the number of statistics changes just because a user changes the value of
            LOG_SLOW_DISABLE_SP (by default it is set).

            This means that your current patch is fine as it only extends the counters.
            What should probably be done in the documentation is to clearly state that slow query log statistics for a statement always includes all statistics for all called stored procedures or stored functions.

            monty Michael Widenius added a comment -
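            For reference, the LOG_SLOW_DISABLE_SP flag mentioned in the recap corresponds (assuming current MariaDB behaviour) to the log_slow_disabled_statements system variable, whose default value includes 'sp':

```sql
-- Assumption: 'sp' in log_slow_disabled_statements (the default) keeps
-- stored-routine sub-statements out of the slow log; clearing it logs
-- each sub-statement with its own counters.
SET GLOBAL log_slow_disabled_statements = '';    -- also log sub-statements
SET GLOBAL log_slow_disabled_statements = 'sp';  -- restore the default
```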
            danblack Daniel Black made changes -

            sanja It looks like the existing patch has been approved, do you think this can be included in the next release?

            kyle.hutchinson Kyle Hutchinson added a comment -
            danblack Daniel Black added a comment -

            sanja I wouldn't just yet; looking at the diff:

            --- /home/dan/repos/mariadb-server-10.4/mysql-test/main/log_slow_debug.result	2023-09-21 19:35:56.918844459 +1000
            +++ /home/dan/repos/mariadb-server-10.4/mysql-test/main/log_slow_debug.reject	2023-09-21 19:43:41.965315498 +1000
            @@ -246,6 +246,12 @@
             1	0
             2	0
             SET GLOBAL debug_dbug=@old_dbug;
            +SELECT sql_text,rows_examined FROM mysql.slow_log;
            +sql_text	rows_examined
            +SET SESSION long_query_time= 0	0
            +SET GLOBAL debug_dbug="+d,debug_huge_number_of_examined_rows"	0
            +SELECT * FROM tab_MDEV_30820 ORDER BY 1	4
            +SET GLOBAL debug_dbug=@old_dbug	0
             SET @@long_query_time= @old_long_query_time;
             SET @@global.log_output= @old_log_output;
             SET @@global.slow_query_log= @old_slow_query_log;
            

            It doesn't appear that the very large number ended up in the logs. I foolishly started https://github.com/MariaDB/server/pull/2767 before seeing this commit, thinking all the datatypes needed to be ulonglong. I get the same MTR result, however.


            The large numbers were the result of a counting bug, but theoretically they are possible.

            sanja Oleksandr Byelkin added a comment -
            sanja Oleksandr Byelkin made changes -
            Fix Version/s 10.4.32 [ 29300 ]
            Fix Version/s 10.5.23 [ 29012 ]
            Fix Version/s 10.6.16 [ 29014 ]
            Fix Version/s 10.10.7 [ 29018 ]
            Fix Version/s 10.11.6 [ 29020 ]
            Fix Version/s 11.0.4 [ 29021 ]
            Fix Version/s 11.1.3 [ 29023 ]
            Fix Version/s 11.2.2 [ 29035 ]
            Fix Version/s 10.4 [ 22408 ]
            Fix Version/s 10.5 [ 23123 ]
            Fix Version/s 10.6 [ 24028 ]
            Fix Version/s 10.9 [ 26905 ]
            Fix Version/s 10.10 [ 27530 ]
            Fix Version/s 10.11 [ 27614 ]
            Fix Version/s 11.0 [ 28320 ]
            Resolution Fixed [ 1 ]
            Status Stalled [ 10000 ] Closed [ 6 ]
            julien.fritsch Julien Fritsch made changes -
            Fix Version/s 11.2.2 [ 29035 ]
            danblack Daniel Black made changes -
            marko Marko Mäkelä made changes -
            mariadb-jira-automation Jira Automation (IT) made changes -
            Zendesk Related Tickets 148522

            People

              Assignee: sanja Oleksandr Byelkin
              Reporter: muhammad.irfan Muhammad Irfan
              Votes: 0
              Watchers: 8

