Details

Type: Bug
Status: Closed
Priority: Trivial
Resolution: Fixed
Affects Version/s: 10.6, 10.11, 11.4, 11.8, 10.5(EOL)
Fix Version/s: None
Environment: Debian 12, Debian 6.1.115-1 (2024-11-01) x86_64 GNU/Linux, gcc (Debian 12.2.0-14) 12.2.0, zlib1g 1:1.2.13.dfsg-1
Description
In the test case below, we run the UNCOMPRESS function on a plain-text string.
First we have it return a 1-row result set, to see the warning and result it produces. Then we turn off the noise and run it on a number of rows, to see how long it takes.
We do the whole thing twice, once with max_allowed_packet=512M and once with max_allowed_packet=1G.
--source include/have_sequence.inc

SET @allowed= @@global.max_allowed_packet;
--let $packets_mb=512, 1024

--let $run= 1
while ($run <= 2)
{
  --let $mb= `SELECT ELT($run, $packets_mb)`
  --eval SET GLOBAL max_allowed_packet= $mb * 1024*1024
  --connect (con1,localhost,root,,)
  SELECT @@max_allowed_packet;
  SELECT UNCOMPRESS('xxxxxxxxxxxxx');
  --disable_warnings
  --disable_result_log
  SET @t= @@timestamp;
  SELECT UNCOMPRESS('xxxxxxxxxxxxx') FROM seq_1_to_500;
  --enable_warnings
  --enable_result_log
  SELECT @@timestamp - @t AS duration;
  --disconnect con1
  --connection default
  --inc $run
}

SET GLOBAL max_allowed_packet= @allowed;
With max_allowed_packet=512M, the function call causes the warning ER_TOO_BIG_FOR_UNCOMPRESS, quoting max_allowed_packet as the maximum size:

main c29e83f226190ed218961fa26b88acd99c6aef6b
@@max_allowed_packet
536870912
SELECT UNCOMPRESS('xxxxxxxxxxxxx');
UNCOMPRESS('xxxxxxxxxxxxx')
NULL
Warnings:
Warning 1256 Uncompressed data size too large; the maximum size is 536870912 (probably, length of uncompressed data was corrupted)
With max_allowed_packet=1G, it causes the warning ER_ZLIB_Z_DATA_ERROR instead:
@@max_allowed_packet
1073741824
SELECT UNCOMPRESS('xxxxxxxxxxxxx');
UNCOMPRESS('xxxxxxxxxxxxx')
NULL
Warnings:
Warning 1259 ZLIB: Input data corrupted
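For what it's worth, the difference between the two warnings seems to come from UNCOMPRESS()'s length header: COMPRESS() stores the uncompressed length in the first 4 bytes of its result (little-endian), and UNCOMPRESS() reads those bytes back before touching zlib. As far as I can tell from Item_func_uncompress::val_str, the claimed length is masked to 30 bits before being compared to max_allowed_packet, so for our plain-text input "xxxx" decodes to roughly 903 MiB: above the 512M limit (hence ER_TOO_BIG_FOR_UNCOMPRESS), but below 1G, in which case the server goes on to allocate the buffer and let zlib reject the payload. A minimal C sketch of that header arithmetic, assuming the 30-bit mask is indeed applied (my reading of the source, not documented behavior):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    const char *arg = "xxxxxxxxxxxxx";

    /* UNCOMPRESS() treats the first 4 bytes as the uncompressed length
     * (little-endian; a plain memcpy matches that on the x86_64 box above). */
    uint32_t claimed;
    memcpy(&claimed, arg, 4);                 /* "xxxx" = 0x78787878 */

    /* Assumed 30-bit mask, per my reading of Item_func_uncompress::val_str. */
    uint32_t masked = claimed & 0x3FFFFFFF;   /* 0x38787878 */

    printf("claimed: %u\n", claimed);         /* 2021161080 */
    printf("masked:  %u\n", masked);          /* 947419256, ~903 MiB */
    printf("vs 512M: %s\n", masked > 536870912U  ? "too big" : "fits");
    printf("vs 1G:   %s\n", masked > 1073741824U ? "too big" : "fits");
    return 0;
}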
As for performance: on a non-debug build with max_allowed_packet=512M, the 500000 calls took 0.07 seconds on my machine:
SET @t= @@timestamp;
SELECT UNCOMPRESS('xxxxxxxxxxxxx') FROM seq_1_to_500000;
SELECT @@timestamp - @t AS duration;
duration
0.06628608703613281
With max_allowed_packet=1G, it took 2 seconds:
SET @t= @@timestamp;
SELECT UNCOMPRESS('xxxxxxxxxxxxx') FROM seq_1_to_500000;
SELECT @@timestamp - @t AS duration;
duration
2.070930004119873
On a debug build, it's impossible to wait for the second call to finish. If we scale it down from 500000 to 500 invocations, it is 0.02 seconds vs 45 seconds on my machine.
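My guess at the mechanics (not confirmed anywhere in this report): with 512M the length check rejects the value before anything is allocated, so each call is almost free; with 1G every row first allocates the full ~903 MiB promised by the header, and only then does zlib fail on the garbage payload. Debug builds also poison freshly allocated memory, which would turn each row into a near-gigabyte memset; 500 such rows would line up with the ~45 seconds below. A hedged sketch of that per-row cost, using plain malloc() in place of the server allocator (build with gcc -lz):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    const char *arg = "xxxxxxxxxxxxx";
    uLongf dest_len = 947419256;        /* the masked length header, see above */

    /* The costly step under max_allowed_packet=1G: a ~903 MiB buffer per
     * row. Plain malloc() is lazy; a debug-build allocator that writes a
     * poison pattern over the buffer would touch every page of it. */
    Bytef *buf = malloc(dest_len);
    if (buf == NULL)
    {
        perror("malloc");
        return 1;
    }

    /* zlib rejects the bogus payload almost instantly: the bytes after
     * the 4-byte header fail the zlib header check, so we get Z_DATA_ERROR. */
    int err = uncompress(buf, &dest_len,
                         (const Bytef *) arg + 4, strlen(arg) - 4);
    printf("uncompress() = %d (Z_DATA_ERROR = %d)\n", err, Z_DATA_ERROR);

    free(buf);
    return 0;
}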
Full unabridged output on a debug build with the sequence table scaled down to 500 rows:
main c29e83f226190ed218961fa26b88acd99c6aef6b

SET @allowed= @@global.max_allowed_packet;
SET GLOBAL max_allowed_packet= 512 * 1024*1024;
connect con1,localhost,root,,;
SELECT @@max_allowed_packet;
@@max_allowed_packet
536870912
SELECT UNCOMPRESS('xxxxxxxxxxxxx');
UNCOMPRESS('xxxxxxxxxxxxx')
NULL
Warnings:
Warning 1256 Uncompressed data size too large; the maximum size is 536870912 (probably, length of uncompressed data was corrupted)
SET @t= @@timestamp;
SELECT UNCOMPRESS('xxxxxxxxxxxxx') FROM seq_1_to_500;
SELECT @@timestamp - @t AS duration;
duration
0.025333881378173828
disconnect con1;
connection default;
SET GLOBAL max_allowed_packet= 1024 * 1024*1024;
connect con1,localhost,root,,;
SELECT @@max_allowed_packet;
@@max_allowed_packet
1073741824
SELECT UNCOMPRESS('xxxxxxxxxxxxx');
UNCOMPRESS('xxxxxxxxxxxxx')
NULL
Warnings:
Warning 1259 ZLIB: Input data corrupted
SET @t= @@timestamp;
SELECT UNCOMPRESS('xxxxxxxxxxxxx') FROM seq_1_to_500;
SELECT @@timestamp - @t AS duration;
duration
44.36912393569946
disconnect con1;
connection default;
SET GLOBAL max_allowed_packet= @allowed;