MariaDB Server / MDEV-29259

Comparison semantics of int = string changes with creation of an index


Details

    • Type: Bug
    • Status: Confirmed
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 10.4.24, 10.3, 10.4, 10.5, 10.6, 10.7, 10.8, 10.9
    • Fix Version/s: 10.4, 10.5, 10.6
    • Component/s: None
    • Labels: None

    Description

      Per: https://mariadb.com/kb/en/type-conversion/
      "If one argument is string and the other argument is integer, they are compared as decimals. "
      However, if an index is created on the integer column and the optimizer picks that index, the comparison appears to be carried out as an integer comparison. This is incorrect, since it means the comparison semantics change merely by creating or dropping an index.
      Creating or dropping an index should not have such a side effect.

      We use 10.4.24, but the same occurs on 10.6.5.
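The documented rule can be sketched with Python's decimal module. This is my own illustration of the documented semantics, not MariaDB's actual code path, and compare_as_decimals is a hypothetical helper name:

```python
from decimal import Decimal

def compare_as_decimals(int_value, string_value):
    """Documented semantics: both sides are converted to DECIMAL
    and compared exactly, with no rounding to an integer."""
    return Decimal(int_value) == Decimal(string_value)

# Neither 1 nor 2 equals 1.5 when compared as decimals,
# so `WHERE c1 = '1.5'` should match no rows.
print(compare_as_decimals(1, '1.5'))  # False
print(compare_as_decimals(2, '1.5'))  # False
```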

      Example:

      MariaDB [test]> create table t1(id int primary key, c1 int);
      Query OK, 0 rows affected (0.030 sec)
       
      MariaDB [test]> insert into t1 values(1, 1),(2, 2);
      Query OK, 2 rows affected (0.003 sec)
      Records: 2  Duplicates: 0  Warnings: 0
       
      MariaDB [test]> select * from t1;
      +----+------+
      | id | c1   |
      +----+------+
      |  1 |    1 |
      |  2 |    2 |
      +----+------+
      2 rows in set (0.001 sec)
       
      MariaDB [test]> select * from t1 where c1 = '1.5';
      Empty set (0.001 sec)
       
      -- Note: this is as expected, because the column values 1 and 2 converted to decimal are 1.0 and 2.0,
      -- and neither equals 1.5 (the string '1.5' converted to decimal).
       
      MariaDB [test]> create index idx1 on t1(c1);
      Query OK, 0 rows affected (0.032 sec)
      Records: 0  Duplicates: 0  Warnings: 0
       
      MariaDB [test]> select * from t1 where c1 = '1.5';
      +----+------+
      | id | c1   |
      +----+------+
      |  2 |    2 |
      +----+------+
      1 row in set (0.001 sec)
       
      -- Note: this was unexpected. It looks as if the string '1.5' is converted to decimal first (as it should be), but is then cast to int, causing rounding.
      -- This seems wrong, because an index scan vs. a table scan should not change the semantics of data type resolution. It should be possible to
      -- implement this so that the index is still used while the semantics stay consistent with the table-scan path (and the documentation) - hence this sounds like an oversight?
       
      MariaDB [test]> select * from t1 ignore index(idx1) where c1 = '1.5';
      Empty set (0.001 sec)
       
      -- Note: back to the expected result when the index is not picked (dropping the index has the same effect).
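A plausible reading of the behavior above (an assumption on my part, not a confirmed trace of the server code) is that the index path converts the string to a decimal and then rounds it to an INT search key, so '1.5' becomes 2 and the lookup matches the row with c1 = 2:

```python
from decimal import Decimal, ROUND_HALF_UP

def index_lookup_key(string_value):
    """Hypothetical index-path behavior: the string is converted to a
    decimal, then rounded half-up to an INT used as the search key."""
    return int(Decimal(string_value).to_integral_value(rounding=ROUND_HALF_UP))

# '1.5' rounds to 2, which would explain why the index lookup
# finds the row with c1 = 2.
print(index_lookup_key('1.5'))  # 2
```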
      


      P.S.
      Could you also please clarify whether int = string comparison uses decimal rather than double?
      The following warning seems to imply that it uses double:

      MariaDB [test]> select * from t1 where c1 = '1.5A';
      Empty set (Engine=InnoDB), 1 warning (0.001 sec)
       
      MariaDB [test]> show warnings;
      +---------+------+------------------------------------------+
      | Level   | Code | Message                                  |
      +---------+------+------------------------------------------+
      | Warning | 1292 | Truncated incorrect DOUBLE value: '1.5A' |
      +---------+------+------------------------------------------+
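For context on why the DECIMAL-vs-DOUBLE question matters (my own illustration, not taken from the server source): a DOUBLE-based comparison loses precision for integers beyond 2^53, while a DECIMAL-based comparison stays exact:

```python
from decimal import Decimal

a = 18014398509481984            # 2**54, exactly representable as a double
s = '18014398509481985'          # 2**54 + 1, NOT representable as a double

# Compared as doubles, both sides collapse to the same float,
# so they spuriously compare equal (precision loss):
print(float(a) == float(s))      # True

# Compared as decimals, the comparison is exact:
print(Decimal(a) == Decimal(s))  # False
```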
      

People

    Assignee: Alexander Barkov (bar)
    Reporter: Andrei Lurie (alurie)
    Votes: 0
    Watchers: 4

