[MDEV-6703] Add "mysqlbinlog --binlog-row-event-max-size" support Created: 2014-09-05  Updated: 2015-07-31  Resolved: 2015-02-24

Status: Closed
Project: MariaDB Server
Component/s: Replication
Affects Version/s: 10.0.13
Fix Version/s: 10.0.17

Type: Bug Priority: Major
Reporter: Hartmut Holzgraefe Assignee: Sergei Golubchik
Resolution: Fixed Votes: 3
Labels: upstream-fixed, verified

Issue Links:
PartOf
is part of MDEV-5242 merge 5.6 bugfixes into 10.0 Open
Relates
relates to MDEV-8340 Add "mysqlbinlog --binlog-row-event-m... Closed

 Description   

This option was added in MySQL 5.6 in response to http://bugs.mysql.com/49932 "mysqlbinlog max_allowed_packet hard coded to 1GB"

I'm not really sure why Andrew thought it was needed, as max_allowed_packet has an upper limit of 1GB anyway, so mysqlbinlog should never run into problems with it. But I had a customer case where this came up, and maybe it would make sense to be able to run mysqlbinlog with a lower max_allowed_packet setting to verify that a slave's binlog-row-event-max-size would be large enough to process a master's logs ...

Or can row based replication binlog events indeed exceed 1GB?



 Comments   
Comment by Elena Stepanova [ 2014-09-07 ]

The commit comment from the mentioned bug says that binlog events can indeed exceed 1GB:

The size limitation of ROW event is controlled by binlog-row-event-max-size (4G),
which can be larger than max-allowed-packet

However, the bugfix for http://bugs.mysql.com/49932 isn't limited to adding an option to mysqlbinlog, there was also a change in how slave threads process such big events. Does it make sense to merge only a part of the fix that affects mysqlbinlog?

Comment by Hartmut Holzgraefe [ 2014-09-09 ]

This was originally triggered by a case where a slave IO thread failed to read an ~1.4G event from its master ... another identical slave did not fail though, so it is not clear what happened there ... we ended up deciding to set up the failing slave from scratch ...

I'm wondering how a binlog row event can become much larger than the upper limit for max-allowed-packet, but maybe e.g. a SQL statement concatenating two 600M strings in an UPDATE can cause this? Or is this possible when using LOAD DATA INFILE?

The mysqlbinlog aspect of this just came up because an attempt to see what was in the binlog at the failing position also failed with a "too large" error:

ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 1392919111, event_type: 0

Comment by Elena Stepanova [ 2014-09-15 ]

I think it's easy enough to create a big event, e.g. by running an UPDATE with a loose (or absent) WHERE clause on a big table.
So, I presume the request is really to merge the whole bugfix http://bugs.mysql.com/49932 into 10.0?

Comment by Hartmut Holzgraefe [ 2014-11-07 ]

Yes, a >1GB binlog entry can easily be generated even by a single row change event:

  create table t1(b1 longblob, b2 longblob) engine=myisam;
  insert into t1 values(repeat('a', 700000000), repeat('b', 700000000));

This will kill slaves with binlog_format=ROW with

                Last_IO_Errno: 1236
                Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master; the first event '.' at 4, the last event read from 'hartmut-laptop-bin.000001' at 582, the last byte read from 'hartmut-laptop-bin.000001' at 601.'

and mysqlbinlog will fail with

# at 538
ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 1400000038, event_type: 23
ERROR: Could not read entry at offset 582: Error in log format or read error.
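A back-of-the-envelope sketch of why this event can never fit under the max_allowed_packet ceiling. The assumption that the 38-byte difference is event/row metadata is illustrative; I haven't checked it against the server source:

```python
# Rough arithmetic sketch (the "overhead" interpretation is an assumption,
# not taken from the server source).
blob_payload = 2 * 700_000_000          # the two repeat() values from the INSERT
reported_data_len = 1_400_000_038       # data_len from the mysqlbinlog error above
overhead = reported_data_len - blob_payload
max_allowed_packet_cap = 1 << 30        # 1GB, the hard upper limit of the variable

print(overhead)                                     # 38 bytes beyond the raw blobs
print(reported_data_len > max_allowed_packet_cap)   # True: exceeds the 1GB cap
```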

With MySQL 5.6.20, mysqlbinlog works fine with this 1.4GB event; it only fails once the event size exceeds 1.6GB, due to http://bugs.mysql.com/bug.php?id=74734

Replication still fails with the same error on the master side unless I set --binlog_row_event_max_size to a value larger than the packet size. If I do that the master will process the event just fine and transfer it to the slave, but the slave will then fail with

                Last_IO_Errno: 1153
                Last_IO_Error: Got a packet bigger than 'slave_max_allowed_packet' bytes

So it looks as if the patch from Bug #49932 only fixes the master side to use

  max(max_allowed_packet, binlog_row_event_max_size);

but the analogous

  max(slave_max_allowed_packet, binlog_row_event_max_size);

logic still seems to be missing on the slave side? (And slave_max_allowed_packet has the same 1GB upper limit as max_allowed_packet) ...
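To summarize the asymmetry described above, a minimal model of the size checks on each side (the helper functions are hypothetical, not actual server code):

```python
# Hypothetical model of the event-size checks, not actual server code.
GIB = 1 << 30  # 1GB, the upper limit of both *_max_allowed_packet variables

def master_can_send(event_size, max_allowed_packet, binlog_row_event_max_size):
    # The Bug #49932 fix: the master allows the larger of the two limits.
    return event_size <= max(max_allowed_packet, binlog_row_event_max_size)

def slave_can_receive(event_size, slave_max_allowed_packet):
    # The missing analogue: the slave IO thread checks only
    # slave_max_allowed_packet, which is capped at 1GB.
    return event_size <= slave_max_allowed_packet

event = 1_400_000_038  # the ~1.4GB row event from the comments above
print(master_can_send(event, GIB, 4 * GIB - 1))  # True once max-size is raised
print(slave_can_receive(event, GIB))             # False -> error 1153 on the slave
```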

Generated at Thu Feb 08 07:13:57 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.