[MDEV-29453] blob data corrupted by INSERT INTO Created: 2022-09-02  Updated: 2023-02-09  Resolved: 2023-01-04

Status: Closed
Project: MariaDB Server
Component/s: Data Manipulation - Insert
Affects Version/s: 10.7.5, 10.8.4, 10.9.2, 10.9.4, 10.10.2
Fix Version/s: 10.11.3, 10.8.8, 10.9.6, 10.10.4

Type: Bug Priority: Major
Reporter: Michael Roosz Assignee: Thirunarayanan Balathandayuthapani
Resolution: Duplicate Votes: 0
Labels: None

Attachments: File debug.sql    
Issue Links:
Duplicate
is duplicated by MDEV-30321 blob data corrupted by row_merge_writ... Closed
Problem/Incident
is caused by MDEV-27318 SIGSEGV in row_merge_tuple_sort and A... Closed

 Description   

After creating an SQL dump of my Drupal database with mysqldump and then importing it into a fresh MariaDB installation, I noticed that for two rows the longblob data was corrupted to all binary zeros (while keeping the correct length).

To make debugging easier, I reduced the problem to the attached debug.sql file.

After importing it with

cat debug.sql | mysql

on MariaDB 10.7.5, 10.8.4, and 10.9.2, the following rows get corrupted:

name = debug12637_______
value = 0x00000000...
 
name = debug7498___________________
value = 0x00000000...

With 10.6.8 the data is imported correctly:

name = debug12637_______
value = 0x78787878...
 
name = debug7498___________________
value = 0x78787878...

There is no crash and nothing in the error log. I was lucky that one of my users noticed the corruption after a few hours.
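The symptom described above can be illustrated with a small, self-contained check (illustrative only, not part of the report; the byte values 0x00 and 0x78 are taken from the dumps shown above):

```python
def looks_zeroed(blob: bytes) -> bool:
    """Return True for a non-empty blob consisting only of 0x00 bytes,
    i.e. the corruption pattern described in this report: the value is
    overwritten with zeros while the length stays the same."""
    return len(blob) > 0 and blob == b"\x00" * len(blob)

# Corrupted rows contained all zeros; healthy rows contained 0x78 ('x') bytes.
corrupted = b"\x00" * 16
healthy = b"\x78" * 16

print(looks_zeroed(corrupted))  # True
print(looks_zeroed(healthy))    # False
```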

It must be related to the length of the lines or rows, or something similar, because when I create the dump with "--net-buffer-length 500000" the data is imported correctly.
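For reference, this is roughly how the workaround looks on the command line (the database name drupal is illustrative, and a running server is required, so treat this as a command fragment rather than a runnable script):

```
mysqldump --net-buffer-length=500000 drupal > dump.sql
mysql drupal < dump.sql
```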

Update 12/2022:

  • the bug is also present in 10.9.4 and 10.10.2
  • the bug is present in the official Docker Hub MariaDB images (based on Ubuntu) as well as in the Windows binaries, so it seems to be a generic issue (the Docker images and the Windows binaries were tested on two completely different systems)
  • testing different mysql client options (--net-buffer-length 500000 --max-allowed-packet=500000 --unbuffered --show-warnings --compress --binary-mode) did not make any difference
  • setting max-allowed-packet=1G on the server side also does not fix the issue
  • the bug seems to be related to these statements in the SQL file generated by mysqldump:

    /*!40014 SET UNIQUE_CHECKS=0 */;
    /*!40014 SET FOREIGN_KEY_CHECKS=0 */;
    /*!40000 ALTER TABLE `variable` DISABLE KEYS */;
    

    but I do not know why

  • creating the dump file with "mysqldump --hex-blob" makes the bug trigger for different rows, maybe due to the different encoded length?
  • at this point I do not know how to continue debugging; any help is greatly appreciated


 Comments   
Comment by Michael Roosz [ 2022-12-30 ]

Still present in 10.9.4 and 10.10.2.

This is a pretty severe bug: I need to restore from an SQL dump file but cannot, due to this issue.
The only option I see is downgrading to 10.6.

Generated at Thu Feb 08 10:08:42 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.