MariaDB Connector/C / CONC-658

Errors caused by MYSQL_OPT_MAX_ALLOWED_PACKET are misleading

Details

    • Type: Bug
    • Status: Open
    • Priority: Major
    • Resolution: Unresolved
    • Affects Version/s: 3.1, 3.3, 3.4

    Description

      The errors that are reported when the client-side max_allowed_packet limit is reached are not helpful and are, at worst, misleading: exceeding the limit while reading a result set causes the error to be reported as:

      ERROR 2013 (HY000): Lost connection to server during query
      

      The error is technically true: the connection was lost. However, it was lost because the connector itself closed it after the client-side max_allowed_packet limit was exceeded. This makes it hard to debug problems related to result sets when the true cause of the error is not known.

      At the time of writing, the default value for the option is also not documented, but based on the source code it appears to be 1 GiB. This differs from the MariaDB Server default of 16 MiB and from the default used by the mariadb command-line client, which is also 16 MiB.
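      For reference, a minimal sketch of how an application hits this through the C API. The connection parameters and the test.t1 table (with a LONGBLOB column larger than the limit) are assumptions borrowed from the test case in the comments below, and the exact argument type expected by MYSQL_OPT_MAX_ALLOWED_PACKET should be checked against the connector headers:

      #include <mysql.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(void)
      {
        MYSQL *mysql= mysql_init(NULL);
        size_t max_packet= 1024 * 100;    /* deliberately small: 100 KiB */
        MYSQL_RES *res;

        /* Lower the client-side limit before connecting. */
        mysql_options(mysql, MYSQL_OPT_MAX_ALLOWED_PACKET, &max_packet);

        if (!mysql_real_connect(mysql, "localhost", "root", "", "test", 0, NULL, 0))
        {
          fprintf(stderr, "connect failed: %s\n", mysql_error(mysql));
          return EXIT_FAILURE;
        }

        /* t1.c holds more than 100 KiB, so reading the result set exceeds the
           client-side limit. The error reported is 2013 ("Lost connection to
           server during query"), nothing that mentions max_allowed_packet. */
        if (mysql_query(mysql, "SELECT c FROM t1") ||
            !(res= mysql_store_result(mysql)))
          fprintf(stderr, "query failed: %u %s\n",
                  mysql_errno(mysql), mysql_error(mysql));
        else
          mysql_free_result(res);

        mysql_close(mysql);
        return EXIT_SUCCESS;
      }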

        Activity

          knielsen Kristian Nielsen added a comment -

          Agree with Markus here; this behaviour is very confusing for users. I just saw a case of this on the #maria channel. It must be very common for users to be bitten by this and have a hard time figuring out what the real problem is.

          The problem is made worse because the error message is not merely unhelpful, it is actively misleading. It suggests the problem is on the server side ("Lost connection to server" often means the server closed the connection, e.g. a server crash), but the real problem is the configuration on the client side.

          The expected behaviour would be to get e.g. ER_TOO_LONG_STRING or ER_NET_PACKET_TOO_LARGE.

          --source include/have_innodb.inc
          CREATE TABLE t1 (a INT PRIMARY KEY, b VARCHAR(2048), c LONGBLOB) ENGINE=InnoDB;
          INSERT INTO t1 VALUES (0, REPEAT("x", 2048), REPEAT("Hulubulu!!?!", 1024*100));
          --exec $MYSQL_DUMP --max-allowed-packet=102400 test t1 > $MYSQLTEST_VARDIR/tmp/tmp_t1.sql
          

          mariadb-dump: Error 2013: Lost connection to server during query when dumping table `t1` at row: 0
          
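          For illustration, these are the client-side error codes involved, as defined in the connector's errmsg.h; the assumption here is that CR_NET_PACKET_TOO_LARGE would be the client-side counterpart of ER_NET_PACKET_TOO_LARGE if the connector reported something more specific:

          #include <mysql.h>
          #include <errmsg.h>
          #include <stdio.h>

          static void report_result_error(MYSQL *mysql)
          {
            switch (mysql_errno(mysql))
            {
            case CR_SERVER_LOST:            /* 2013: what is reported today */
              /* Could be a server crash, a network failure, or the client-side
                 max_allowed_packet limit -- the caller cannot tell which. */
              fprintf(stderr, "lost connection (cause unknown): %s\n",
                      mysql_error(mysql));
              break;
            case CR_NET_PACKET_TOO_LARGE:   /* 2020: a specific, actionable error */
              fprintf(stderr, "row larger than client max_allowed_packet: %s\n",
                      mysql_error(mysql));
              break;
            }
          }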

          georg Georg Richter added a comment -

          A better error message would of course be better, but why does the server send data larger than max_allowed_packet at all? The client's max_allowed_packet is specified in the client_hello packet.
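          (For readers unfamiliar with that packet: roughly, the fixed head of the client handshake response looks like the sketch below. The struct is only illustrative; the connector serializes these fields byte by byte rather than sending a C struct.)

          #include <stdint.h>

          /* Illustrative layout of the fixed-size head of the client handshake
             response ("client_hello") in the client/server protocol; this is
             where the client announces its max_allowed_packet value. */
          typedef struct
          {
            uint32_t client_capabilities;  /* CLIENT_* capability flags        */
            uint32_t max_packet_size;      /* client-side max_allowed_packet   */
            uint8_t  charset;              /* default collation id             */
            uint8_t  filler[23];           /* reserved, zero-filled            */
            /* followed by: user name (NUL-terminated), auth data, database, ... */
          } client_handshake_response;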


          knielsen Kristian Nielsen added a comment -

          georg, that's a good question.

          The server code to send data to the client seems to be around
          select_send::send_data() which fills in the packet (in-memory String) and
          calls Protocol::write(), which goes through my_net_write() and
          net_real_write() to write the data down the socket using VIO.

          There doesn't seem to be any checking of the client max packet length.

          The server does read and store the client's max packet size, in
          THD::max_client_packet_length. But this seems completely unused, just
          wasting space in the THD :-/

          It would seem appropriate for the server to give an error instead of sending
          a packet to the client that's bigger than the client wants to handle. One
          risk of introducing this is that some connectors may not send a correct max
          packet size in the handshake response. This could then cause such connectors
          to break if the server started to enforce a possibly invalid client max
          packet size.
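          Purely as a sketch of that idea (placeholder names, not actual server symbols): before writing a result packet, compare its length to the max packet size the client announced in the handshake and report ER_NET_PACKET_TOO_LARGE instead of sending it:

          #include <stddef.h>
          #include <stdio.h>

          typedef struct
          {
            size_t client_max_packet;      /* value from the handshake response */
          } conn_t;

          /* stand-ins for the real write and error-reporting paths */
          static int write_packet(conn_t *c, const unsigned char *p, size_t len)
          { (void) c; (void) p; printf("wrote %zu bytes\n", len); return 0; }

          static int report_error(conn_t *c, int errcode)
          { (void) c; fprintf(stderr, "error %d\n", errcode); return 1; }

          static int send_result_packet(conn_t *conn, const unsigned char *packet,
                                        size_t len)
          {
            if (len > conn->client_max_packet)
              return report_error(conn, 1153 /* ER_NET_PACKET_TOO_LARGE */);
            return write_packet(conn, packet, len);
          }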

          Handling this in the client isn't optimal if the server is going to send
          down a too-large packet anyway. The client more or less has to drop the
          connection, since there's no other way to stop the data coming from the
          server. Maybe the client could read and discard data from the socket
          until the end of the too-large packet (sketched below), but that's not
          optimal either. At the very least, an error informing the user that the
          cause is the exceeded client-side max_allowed_packet setting would be
          good.
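          A sketch of that read-and-discard idea, assuming the usual packet framing (4-byte header: 3-byte little-endian payload length plus a sequence byte) and a placeholder read callback; continuation packets for 0xFFFFFF-byte payloads are ignored here for brevity:

          #include <stddef.h>

          typedef int (*read_fn)(void *ctx, unsigned char *buf, size_t len);

          /* Read and throw away `payload_len` bytes so the connection stays in
             sync and a specific error can be reported instead of dropping the
             connection. */
          static int discard_oversized_packet(read_fn rd, void *ctx,
                                              size_t payload_len)
          {
            unsigned char scratch[4096];
            while (payload_len > 0)
            {
              size_t chunk= payload_len < sizeof(scratch) ? payload_len
                                                          : sizeof(scratch);
              if (rd(ctx, scratch, chunk))
                return 1;               /* real I/O error: connection is gone */
              payload_len-= chunk;
            }
            return 0;
          }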

          I can try asking Monty what he thinks...


          People

            Assignee: Georg Richter (georg)
            Reporter: markus makela

