[MDEV-14948] UNIX_TIMESTAMP() returns decimal inconsistently Created: 2018-01-15  Updated: 2018-01-24  Resolved: 2018-01-24

Status: Closed
Project: MariaDB Server
Component/s: Data types
Affects Version/s: 5.5.56
Fix Version/s: N/A

Type: Bug Priority: Minor
Reporter: Jonathan Knopp Assignee: Unassigned
Resolution: Not a Bug Votes: 0
Labels: None
Environment:

Centos 7


Issue Links:
Relates
relates to MDEV-3922 UNIX_TIMESTAMP() reports decimal when... Closed

 Description   

Setup:

CREATE TABLE t1 (c1 TIME);
INSERT INTO t1 VALUES('06:01:00');

Now:

SELECT UNIX_TIMESTAMP(CONCAT('2018-01-01 ',c1)) FROM t1;

produces "1514815560.000000"
while:

SELECT UNIX_TIMESTAMP(CAST(CONCAT('2018-01-01 ',c1) AS DATETIME)) FROM t1;

produces "1514815560"

Yet:

SELECT UNIX_TIMESTAMP(CONCAT('2018-01-01 ','06:01:00'));

produces "1514815260"
and

SELECT UNIX_TIMESTAMP(CONCAT('2018-01-01 ',CAST('06:01:00' AS time)));

still produces "1514815260"

Same as MySQL bug #82070. Similar to MDEV-3922.

Also, per https://mariadb.com/kb/en/library/unix_timestamp/

SELECT UNIX_TIMESTAMP('2007-11-30 10:30:19');

should return decimal as of 5.3, but it does not in 5.5.



 Comments   
Comment by Sergei Golubchik [ 2018-01-15 ]

I don't think it's a bug. Looks like the expected behavior. The metadata of the result (and number of decimals is part of metadata) is normally derived from the metadata of the arguments. Like, if you add DECIMAL(10,2) and DECIMAL(9,1), you'll get DECIMAL(11,2), while if you multiply DECIMAL(10,2) and DECIMAL(9,1), you'll get DECIMAL(19,3). Note that it does not depend on actual data, only on the metadata.
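The metadata rule described above can be checked directly. A minimal sketch (not from the ticket, assuming a MariaDB session) that materializes the arithmetic results into a table so their derived types become visible:

```sql
-- Sketch: inspect the derived metadata of DECIMAL arithmetic.
CREATE TABLE d (a DECIMAL(10,2), b DECIMAL(9,1));
CREATE TABLE r AS SELECT a + b AS s, a * b AS p FROM d;
-- Per the rule above, SHOW COLUMNS should report s as decimal(11,2)
-- and p as decimal(19,3), regardless of any rows stored in d.
SHOW COLUMNS FROM r;
```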

Similarly, UNIX_TIMESTAMP() result has as many decimals as its argument does.

But in the first case your argument is a string. It doesn't have a fixed number of decimals, so UNIX_TIMESTAMP() must assume the worst and return the highest possible number of decimals to avoid information loss. Consider this case:

CREATE TABLE t1 (a VARCHAR(100));
INSERT INTO t1 VALUES ('2010-10-10 11:12:13'), ('2011-11-11 10:13:14.543319');
SELECT UNIX_TIMESTAMP(a) FROM t1;

In the second case you explicitly cast the argument to DATETIME, which defaults to DATETIME(0). That type clearly has no microseconds, so the UNIX_TIMESTAMP() result doesn't have any either.
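Following this logic, the declared precision of the cast target should drive the number of decimals in the result. A sketch (not from the ticket) contrasting the two cast targets:

```sql
-- CAST to DATETIME is DATETIME(0): the result should be an integer.
SELECT UNIX_TIMESTAMP(CAST('2018-01-01 06:01:00' AS DATETIME));
-- CAST to DATETIME(6) declares microsecond precision: the result
-- should carry six decimals, even though they are all zero here.
SELECT UNIX_TIMESTAMP(CAST('2018-01-01 06:01:00' AS DATETIME(6)));
```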

In the last two cases you use a literal constant. It cannot possibly change during query execution, so MariaDB can analyze it and derive the proper metadata from the constant value itself. This is an exception to the rule above (here the metadata does depend on the data), but that is only possible for constants.
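The constant-folding exception can be illustrated with a pair of literals (a sketch, not from the ticket): since the server can inspect the literal's actual value, the presence or absence of a fractional part in the literal determines the number of decimals in the result.

```sql
-- Literal with no fractional part: metadata derived from the value,
-- so the result should have no decimals.
SELECT UNIX_TIMESTAMP('2010-10-10 11:12:13');
-- Literal with microseconds: the same function should now return
-- a result with six decimals.
SELECT UNIX_TIMESTAMP('2011-11-11 10:13:14.543319');
```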

Comment by Jonathan Knopp [ 2018-01-24 ]

Thanks for the explanation, Sergei. It still seems "wrong" from the standpoint of what an end user would naturally expect, but it's certainly minor enough not to be worth the effort to change.

Generated at Thu Feb 08 08:17:31 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.