Details
- Type: Bug
- Status: Open
- Priority: Major
- Resolution: Unresolved
- 10.6
- None
- Q3/2026 Server Maintenance
Description
Environment:
Customer is running Enterprise Server version 10.6.24-20. The following variables are set at the server level, with page compression limited to specific tables:
@@innodb_file_per_table: 1
@@innodb_default_row_format: dynamic
@@innodb_compression_algorithm: zlib
@@innodb_compression_level: 6
Implementation:
We've enabled page compression on the existing tables using pt-online-schema-change. The newly created tables showed the following in the Create_options column of SHOW TABLE STATUS output:
partitioned `PAGE_COMPRESSED`=1
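As a quick sanity check, the Create_options value can be scanned per table to confirm which tables actually picked up the attribute. A minimal sketch (the sample rows below are illustrative, not the customer's real table names):

```python
# Sketch: detect which tables carry PAGE_COMPRESSED=1 in the
# Create_options string returned by SHOW TABLE STATUS.
def is_page_compressed(create_options: str) -> bool:
    """True if the Create_options value contains `PAGE_COMPRESSED`=1."""
    return "`PAGE_COMPRESSED`=1" in create_options

# Illustrative Create_options values, keyed by table name.
rows = {
    "EqBucketTotal": "partitioned `PAGE_COMPRESSED`=1",
    "SomeOtherTable": "partitioned",
}
compressed = [name for name, opts in rows.items() if is_page_compressed(opts)]
print(compressed)  # -> ['EqBucketTotal']
```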
Issue #1:
One of the tables showed a significant amount of compression.
$ du -h EqBucketTotal#P#p2023Q2.ibd
617M    EqBucketTotal#P#p2023Q2.ibd
$ du -h --apparent-size EqBucketTotal#P#p2023Q2.ibd
2.8G    EqBucketTotal#P#p2023Q2.ibd
The other four tables showed no compression whatsoever, and we have no explanation for this. The tables have identical definitions, data types, and partition structures. The data values differ slightly, but surely not so much that one table would compress by roughly 78% and the other tables by 0%?
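The du discrepancy above comes from sparse allocation: page compression punches holes in the tablespace file, so the allocated size (st_blocks * 512) falls below the apparent size (st_size). A minimal sketch reproducing the effect with an artificial sparse file, plus the savings implied by the ticket's numbers:

```python
import os
import tempfile

# Create a 1 MiB sparse file: seek past the end and write one byte,
# leaving a hole that the filesystem never allocates blocks for.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.seek(1024 * 1024 - 1)
    f.write(b"\0")
    path = f.name

st = os.stat(path)
apparent = st.st_size            # what `du --apparent-size` reports
allocated = st.st_blocks * 512   # what plain `du` reports
print(apparent, allocated)       # allocated is far below 1 MiB
os.remove(path)

# The same arithmetic applied to the numbers in this ticket:
savings = 1 - 617 / (2.8 * 1024)  # 617M allocated vs 2.8G apparent
print(f"{savings:.0%}")           # -> roughly 78%
```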
Issue #2:
After compressing the tables, the customer saw a significant spike in EBS IOPS for the next physical backup. This was so severe that we reverted the table compression. EBS IOPS immediately returned to normal.
After discussing with Engineering, the working theory is that, because page compression punches holes in the physical blocks, EBS is no longer able to merge physically sequential reads into a single operation:
https://docs.aws.amazon.com/ebs/latest/userguide/ebs-io-characteristics.html#ebs-io-iops
Because hole punching fragments the file at the filesystem level, a single 256 KiB sequential operation is split into as many as 16 random, non-sequential reads of one 16 KiB page each. This would effectively mean that page compression cannot be used on EBS volumes, since it dramatically increases the number of IOPS required to read the same data.
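Assuming the default 16 KiB innodb_page_size and EBS's documented behavior of counting up to 256 KiB of sequential I/O as one operation, the amplification in the theory above works out as follows:

```python
# Back-of-the-envelope IOPS amplification when hole punching defeats
# EBS's merging of sequential reads into single 256 KiB operations.
EBS_MAX_MERGED_IO = 256 * 1024   # EBS counts up to 256 KiB sequential as 1 op
INNODB_PAGE_SIZE = 16 * 1024     # default innodb_page_size

# Contiguous file: one 256 KiB region is a single EBS operation.
iops_contiguous = 1

# Hole-punched file: every 16 KiB page may land in its own extent,
# so each page becomes a separate random read.
iops_fragmented = EBS_MAX_MERGED_IO // INNODB_PAGE_SIZE

print(iops_fragmented)  # -> 16, i.e. a 16x increase in required IOPS
```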