MariaDB Server: MDEV-6703

Add "mysqlbinlog --binlog-row-event-max-size" support

    Description

      This option was added in MySQL 5.6 in response to http://bugs.mysql.com/49932 "mysqlbinlog max_allowed_packet hard coded to 1GB".

      Not really sure why Andrew thought it was needed, as max_allowed_packet has an upper limit of 1GB anyway, so mysqlbinlog should never run into problems with it. But I had a customer case where this came up, and maybe it would make sense to be able to run mysqlbinlog with a lower max_allowed_packet setting to verify that a slave's binlog-row-event-max-size would be large enough to process a master's logs ...

      Or can row based replication binlog events indeed exceed 1GB?
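
The documented upper limits of the two variables already hint at the answer: max_allowed_packet is hard-capped at 1GiB, while binlog_row_event_max_size (per the commit comment quoted below) can go up to 4G, so a row event can in principle be larger than any client packet. A quick sanity check on those limits, as a sketch:

```python
# Documented upper limits (bytes) of the relevant variables.
MAX_ALLOWED_PACKET_MAX = 1024 ** 3             # 1 GiB hard cap on max_allowed_packet
BINLOG_ROW_EVENT_MAX_SIZE_MAX = 4 * 1024 ** 3  # ~4 GiB cap on binlog_row_event_max_size

# A row event may grow up to binlog_row_event_max_size, so events
# between 1 GiB and 4 GiB are possible in principle.
print(BINLOG_ROW_EVENT_MAX_SIZE_MAX > MAX_ALLOWED_PACKET_MAX)  # True
```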

          Activity

            The commit comment from the mentioned bug says that binlog events can indeed exceed 1GB:

            The size limitation of ROW event is controlled by binlog-row-event-max-size (4G),
            which can be larger than max-allowed-packet

            However, the bugfix for http://bugs.mysql.com/49932 isn't limited to adding an option to mysqlbinlog, there was also a change in how slave threads process such big events. Does it make sense to merge only a part of the fix that affects mysqlbinlog?

            elenst Elena Stepanova added a comment

            This was originally triggered by a case where a slave I/O thread failed to read a ~1.4GB event from its master ... another identical slave did not fail though, so it is not clear what happened there. We ended up deciding to set up the failing slave from scratch ...

            I'm wondering how a binlog row event can become much larger than the upper limit for max-allowed-packet, but maybe e.g. an SQL statement concatenating two 600MB strings in an UPDATE can cause this? Or is this possible when using LOAD DATA INFILE?

            The mysqlbinlog aspect of this came up because an attempt to see what was in the binlog at the failing position also failed with a "too large" error:

            ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 1392919111, event_type: 0

            hholzgra Hartmut Holzgraefe added a comment

            I think it's easy enough to create a big event, e.g. by running an UPDATE with a loose (or absent) WHERE clause on a big table.
            So I presume the request is really to merge the whole bugfix http://bugs.mysql.com/49932 into 10.0?

            elenst Elena Stepanova added a comment

            Yes, a >1GB binlog entry can easily be generated even by a single row change event:

              create table t1(b1 longblob, b2 longblob) engine=myisam;
              insert into t1 values(repeat('a', 700000000), repeat('b', 700000000));
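
Back-of-the-envelope arithmetic for the example above (a sketch; the exact per-event overhead varies by version): the row image alone carries two 700MB blob values, which is already well past the 1GiB packet cap and matches the data_len reported in the error below.

```python
# Approximate payload of the single-row write event generated above.
blob = 700_000_000        # bytes per longblob value
row_image = 2 * blob      # two longblob columns in one row
ONE_GIB = 1024 ** 3       # the max_allowed_packet hard cap

print(row_image)          # 1400000000, close to the reported data_len 1400000038
print(row_image > ONE_GIB)  # True: the event cannot fit in any packet
```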

            This will kill slaves with binlog_format=ROW with

                            Last_IO_Errno: 1236
                            Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master; the first event '.' at 4, the last event read from 'hartmut-laptop-bin.000001' at 582, the last byte read from 'hartmut-laptop-bin.000001' at 601.'

            and mysqlbinlog will fail with

            # at 538
            ERROR: Error in Log_event::read_log_event(): 'Event too big', data_len: 1400000038, event_type: 23
            ERROR: Could not read entry at offset 582: Error in log format or read error.

            With MySQL 5.6.20, mysqlbinlog works fine with this 1.4GB event; it only fails once the event size exceeds 1.6GB, due to http://bugs.mysql.com/bug.php?id=74734

            Replication still fails with the same error on the master side unless I set --binlog_row_event_max_size to a value larger than the packet size. If I do that, the master will process the event just fine and transfer it to the slave, but the slave will then fail with

                            Last_IO_Errno: 1153
                            Last_IO_Error: Got a packet bigger than 'slave_max_allowed_packet' bytes

            So it looks as if the patch from Bug #49932 only fixes the master side to use

              max(max_allowed_packet, binlog_row_event_max_size);

            but similar

              max(slave_max_allowed_packet, binlog_row_event_max_size);

            logic still seems to be missing from the slave side? (And slave_max_allowed_packet has an upper limit of 1GB, like max_allowed_packet.)
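
A minimal sketch of the asymmetry described above (hypothetical helper names and a hypothetical 2GiB binlog_row_event_max_size setting; the real checks live in the server's event-reading code):

```python
MAX_ALLOWED_PACKET = 1024 ** 3            # 1 GiB hard cap on both packet variables
BINLOG_ROW_EVENT_MAX_SIZE = 2 * 1024 ** 3  # assumed configured value for illustration

def master_accepts(event_size,
                   max_allowed_packet=MAX_ALLOWED_PACKET,
                   row_event_max=BINLOG_ROW_EVENT_MAX_SIZE):
    # After the Bug #49932 fix, the master reads events up to the larger limit.
    return event_size <= max(max_allowed_packet, row_event_max)

def slave_accepts(event_size,
                  slave_max_allowed_packet=MAX_ALLOWED_PACKET):
    # The slave still checks only slave_max_allowed_packet (hence error 1153).
    return event_size <= slave_max_allowed_packet

event = 1_400_000_000  # the ~1.4 GB event from the example above
print(master_accepts(event), slave_accepts(event))  # True False
```

With the max() applied on both sides, the slave would accept the same events the master is willing to write and ship.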

            hholzgra Hartmut Holzgraefe added a comment

            People

              serg Sergei Golubchik
              hholzgra Hartmut Holzgraefe