Details
- Type: Bug
- Status: Closed (View Workflow)
- Priority: Major
- Resolution: Fixed
- Affects Version/s: 10.3 (EOL)
- Sprint: 10.3.6-1
Description
I'm using an Azure VM (Ubuntu) with the mariadb.org Ubuntu package.
Server version: 10.3.5-MariaDB-10.3.5+maria~xenial mariadb.org binary distribution
MariaDB [test]> show variables like 'column_compre%';
+----------------------------------+------------------+
| Variable_name                    | Value            |
+----------------------------------+------------------+
| column_compression_threshold     | 100              |
| column_compression_zlib_level    | 6                |
| column_compression_zlib_strategy | DEFAULT_STRATEGY |
| column_compression_zlib_wrap     | OFF              |
+----------------------------------+------------------+
4 rows in set (0.025 sec)
CREATE TABLE `table_without_compression` (
  `id` int(11) NOT NULL DEFAULT 0,
  `gmtime` decimal(22,7) NOT NULL DEFAULT 0.0000000,
  `data` blob NOT NULL,
  PRIMARY KEY (`id`,`gmtime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#Query OK, 0 rows affected (0.385 sec)

CREATE TABLE `table_compressed` (
  `id` int(11) NOT NULL DEFAULT 0,
  `gmtime` decimal(22,7) NOT NULL DEFAULT 0.0000000,
  `data` blob COMPRESSED NOT NULL DEFAULT '',
  PRIMARY KEY (`id`,`gmtime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#Query OK, 0 rows affected (0.385 sec)

INSERT INTO table_without_compression VALUES (1,0,repeat('a',65535));
#Query OK, 1 row affected (0.135 sec)

INSERT INTO table_without_compression VALUES (1,1,repeat('a',65536));
#ERROR 1406 (22001): Data too long for column 'data' at row 1 <- just to confirm the maximum size of a BLOB column. Maybe that's the problem: internally, compression seems to use the source/target column size instead of a bigger one (e.g. LONGBLOB when BLOB/LONGBLOB is used). I will test with TEXT/LONGTEXT and VARCHAR too.

INSERT INTO table_compressed SELECT * FROM table_without_compression;
#ERROR 1406 (22001): Data too long for column 'data' at row 1 <- THAT'S THE ERROR
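To check that the limit is on the compressed column itself and not specific to INSERT ... SELECT, the boundary can be probed with direct inserts. A sketch of the check I would expect (if the compressed column effectively loses one byte, 65534 bytes should be accepted and 65535 rejected); I have not pasted that exact run here:
{code:sql}
-- Expected: the first insert succeeds, the second fails with ERROR 1406,
-- i.e. a compressed BLOB column holds one byte less than a plain BLOB.
INSERT INTO table_compressed VALUES (9,0,repeat('a',65534));
INSERT INTO table_compressed VALUES (9,1,repeat('a',65535));
-- Reads return the uncompressed data, so this should report 65534.
SELECT id, gmtime, OCTET_LENGTH(data) FROM table_compressed WHERE id = 9;
{code}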
—
The same happens with TEXT:
CREATE TABLE `table_compressed2` (
  `id` int(11) NOT NULL DEFAULT 0,
  `gmtime` decimal(22,7) NOT NULL DEFAULT 0.0000000,
  `data` text COMPRESSED NOT NULL DEFAULT '',
  PRIMARY KEY (`id`,`gmtime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#Query OK, 0 rows affected (0.231 sec)

CREATE TABLE `table_without_compression2` (
  `id` int(11) NOT NULL DEFAULT 0,
  `gmtime` decimal(22,7) NOT NULL DEFAULT 0.0000000,
  `data` text NOT NULL,
  PRIMARY KEY (`id`,`gmtime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#Query OK, 0 rows affected (0.253 sec)

INSERT INTO table_without_compression2 VALUES (1,0,repeat('a',65535));
#Query OK, 1 row affected (0.061 sec)

INSERT INTO table_without_compression2 VALUES (1,0,repeat('a',65536));
#ERROR 1406 (22001): Data too long for column 'data' at row 1 <- just to confirm the maximum size of a TEXT column

INSERT INTO table_compressed2 SELECT * FROM table_without_compression2;
#ERROR 1406 (22001): Data too long for column 'data' at row 1 <- same error as with BLOB
—
Mixing data types:
INSERT INTO table_compressed SELECT * FROM table_without_compression2;
#ERROR 1406 (22001): Data too long for column 'data' at row 1
INSERT INTO table_compressed2 SELECT * FROM table_without_compression;
#ERROR 1406 (22001): Data too long for column 'data' at row 1
—
Now creating a LONGBLOB / LONGTEXT:
CREATE TABLE table_compressed_long_blob (
  `id` int(11) NOT NULL DEFAULT 0,
  `gmtime` decimal(22,7) NOT NULL DEFAULT 0.0000000,
  `data` LONGBLOB COMPRESSED NOT NULL,
  PRIMARY KEY (`id`,`gmtime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#Query OK, 0 rows affected (0.141 sec)

CREATE TABLE table_compressed_long_text (
  `id` int(11) NOT NULL DEFAULT 0,
  `gmtime` decimal(22,7) NOT NULL DEFAULT 0.0000000,
  `data` LONGTEXT COMPRESSED NOT NULL,
  PRIMARY KEY (`id`,`gmtime`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
#Query OK, 0 rows affected (0.161 sec)

INSERT INTO table_compressed_long_blob SELECT * FROM table_without_compression2;
#Query OK, 1 row affected (0.045 sec) <- NICE!
#Records: 1  Duplicates: 0  Warnings: 0

INSERT INTO table_compressed_long_text SELECT * FROM table_without_compression2;
#Query OK, 1 row affected (0.009 sec) <- NICE!
#Records: 1  Duplicates: 0  Warnings: 0
Possible solutions:
1) Document the effective data size limits of compressed columns, or
2) Change the code to use a bigger (higher-order) column type internally <- preferred, since we have users running "INSERT compressed_table SELECT * FROM same_column_type_structure" queries (see the workaround sketch below).
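Until option 2 exists, a possible user-side workaround is to give the compressed column one size class more headroom than the uncompressed source column. A minimal sketch, assuming MEDIUMBLOB is acceptable for the application (the DEFAULT '' from the original definition is omitted):
{code:sql}
-- Workaround sketch: widen the compressed column (BLOB -> MEDIUMBLOB) so the
-- byte apparently reserved for the compression header no longer pushes
-- 65535-byte values over the column limit.
ALTER TABLE table_compressed MODIFY `data` MEDIUMBLOB COMPRESSED NOT NULL;

-- The copy from the plain BLOB table should then succeed.
INSERT INTO table_compressed SELECT * FROM table_without_compression;
{code}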
—
Problem found:
MariaDB [test]> truncate table_without_compression;
Query OK, 0 rows affected (0.131 sec)

MariaDB [test]> INSERT INTO table_without_compression VALUES (1,0,repeat('a',65534));
Query OK, 1 row affected (0.039 sec)

MariaDB [test]> INSERT INTO table_compressed SELECT * FROM table_without_compression;
Query OK, 1 row affected (0.044 sec)
Records: 1  Duplicates: 0  Warnings: 0

In the column compression source code, the usable field size is LENGTH-1 (presumably one byte is reserved for the compression header):
https://github.com/MariaDB/server/commit/fdc47792354c820aa4a8542d7c00d434424a63fb#diff-06426300d2671c77dd6763db8df79306R3241
Maybe the solution is here (I don't know the full source):
https://github.com/MariaDB/server/commit/fdc47792354c820aa4a8542d7c00d434424a63fb#diff-06426300d2671c77dd6763db8df79306R3257
instead of:
+ return n < 256 ? 1 : n < 65536 ? 2 : n < 16777216 ? 3 : 4;
use:
+ return n < 256-1 ? 1 : n < 65536-1 ? 2 : n < 16777216-1 ? 3 : 4;
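For reference, this is the per-type arithmetic the change is about (an illustration only, assuming one byte of each declared limit goes to the compression header):
{code:sql}
-- Declared maximum per BLOB type vs. the payload that would fit once one
-- byte is taken by the compression header (illustration, not server code).
SELECT t.blob_type, t.declared_max_bytes,
       t.declared_max_bytes - 1 AS assumed_max_payload_when_compressed
FROM (SELECT 'TINYBLOB' AS blob_type, 255 AS declared_max_bytes
      UNION ALL SELECT 'BLOB', 65535
      UNION ALL SELECT 'MEDIUMBLOB', 16777215
      UNION ALL SELECT 'LONGBLOB', 4294967295) t;
{code}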
Attachments
Issue Links
- is blocked by
  - MDEV-15762 VARCHAR(0) COMPRESSED crashes the server (Closed)
  - MDEV-15763 VARCHAR(1) COMPRESSED crashes the server (Closed)
- relates to
  - MDEV-16134 Wrong I_S.COLUMNS.CHARACTER_XXX_LENGTH value for compressed columns (Closed)
Activity
Field | Original Value | New Value |
---|---|---|
Description | (earlier revision) | (revised description) |
Priority | Major [ 3 ] | Critical [ 2 ] |
Description | (earlier revision) | (revised description) |
Description | (earlier revision) | (revised description) |
Description | (earlier revision) | (revised description) |
Description | (earlier revision) | (revised description) |
Priority | Critical [ 2 ] | Trivial [ 5 ] |
Assignee | Sergey Vojtovich [ svoj ] |
Status | Open [ 1 ] | Confirmed [ 10101 ] |
Affects Version/s | 10.3 [ 22126 ] |
Fix Version/s | 10.3 [ 22126 ] |
Priority | Trivial [ 5 ] | Major [ 3 ] |
Sprint | 10.3.6 [ 237 ] |
Sprint | 10.3.6-1 [ 237 ] | 10.3.6-1, 10.3.6 [ 237, 240 ] |
Assignee | Sergey Vojtovich [ svoj ] | Sergei Golubchik [ serg ] |
Status | Confirmed [ 10101 ] | In Review [ 10002 ] |
Assignee | Sergei Golubchik [ serg ] | Alexey Botchkov [ holyfoot ] |
Link | | This issue is blocked by |
Link | | This issue is blocked by |
Assignee | Alexey Botchkov [ holyfoot ] | Alexander Barkov [ bar ] |
Link | | This issue relates to |
Status | In Review [ 10002 ] | Stalled [ 10000 ] |
Assignee | Alexander Barkov [ bar ] | Sergey Vojtovich [ svoj ] |
Component/s | Server [ 13907 ] | |
Fix Version/s | 10.3.7 [ 23005 ] | |
Fix Version/s | 10.3 [ 22126 ] | |
Resolution | Fixed [ 1 ] | |
Status | Stalled [ 10000 ] | Closed [ 6 ] |
Sprint | 10.3.6-1, 10.3.6 [ 237, 240 ] | 10.3.6-1 [ 237 ] |
Rank | Ranked higher |
Workflow | MariaDB v3 [ 86080 ] | MariaDB v4 [ 153976 ] |
Comments
https://mariadb.com/kb/en/library/storage-engine-independent-column-compression/
The problem:
VARCHAR(255) has length 255.
VARCHAR(255) COMPRESSED also has length 255 (but uses 2 length bytes).
—
Disregard that; I tested it:
CREATE TABLE `v255` (
  `l` varchar(255) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE TABLE `v255c` (
  `l` varchar(255) /*!100301 COMPRESSED*/ DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
MariaDB [test]> insert into v255 values (repeat('a',255));
Query OK, 1 row affected (0.019 sec)
MariaDB [test]> insert into v255c values (repeat('a',255));
Query OK, 1 row affected (0.055 sec)
So this is only a documentation issue plus the field-limit problem with the BLOB types.
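A quick way to compare what the server reports for the plain and compressed columns is information_schema (a sketch; note the linked MDEV-16134, which says the reported CHARACTER_*_LENGTH values can themselves be wrong for compressed columns):
{code:sql}
-- Compare reported limits for the plain vs. COMPRESSED varchar(255) columns.
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE,
       CHARACTER_MAXIMUM_LENGTH, CHARACTER_OCTET_LENGTH
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'test'
  AND TABLE_NAME IN ('v255', 'v255c');
{code}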