Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.2.0
    • Component/s: Other
    • Labels: None

    Description

      Since Blobs are immutable, the current implementation (MariaDbBlob) must be improved to:

      • avoid copying the byte array
      • permit using the result-set row content directly
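A minimal sketch of the idea, assuming a hypothetical RowSharedBlob class (illustrative only, not the actual MariaDbBlob code): the blob keeps a reference to the row buffer plus an offset/length, and copies bytes only when the caller actually asks for them.

```java
// Hypothetical sketch: a Blob-like holder that shares the result-set row
// buffer instead of copying it at construction time. The class and method
// names are illustrative assumptions, not MariaDbBlob internals.
class RowSharedBlob {
    private final byte[] buf;   // shared row buffer; never copied here
    private final int offset;   // where this blob's bytes start in the row
    private final int length;   // number of bytes belonging to this blob

    RowSharedBlob(byte[] rowBuffer, int offset, int length) {
        this.buf = rowBuffer;   // keep a reference; no System.arraycopy
        this.offset = offset;
        this.length = length;
    }

    long length() {
        return length;
    }

    // Copying happens lazily, only when bytes are requested.
    byte[] getBytes(long pos, int len) {
        int start = offset + (int) (pos - 1); // JDBC Blob positions are 1-based
        byte[] out = new byte[len];
        System.arraycopy(buf, start, out, 0, len);
        return out;
    }

    public static void main(String[] args) {
        byte[] row = {0, 1, 2, 3, 4, 5, 6, 7};
        RowSharedBlob blob = new RowSharedBlob(row, 2, 4); // bytes 2..5
        System.out.println(blob.length());          // 4
        System.out.println(blob.getBytes(1, 2)[0]); // 2
    }
}
```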

      Attachments

        Activity

          lbardon Loïc Bardon added a comment -

          The connect string option blobSendChunkSize from the original MySQL connector does not seem to be supported by MariaDB Connector/J.
          This is unfortunate, since it prevents storing or retrieving Blobs larger than max_allowed_packet (see this SO question).

          diego dupin Diego Dupin added a comment - edited

          The goal of max_allowed_packet is to set a limit: reaching it is clearly abnormal, so the server can reject connections that monopolize network/CPU.

          If that limit can legitimately be reached, then max_allowed_packet has to be increased.

          Do users need an option indicating the chunk size used to stream data to the server? I don't see why.

          And concerning using that "blobSendChunkSize" option to get around the max_allowed_packet limitation:

          • if this were possible, it would clearly break the limit (and be an issue in itself, because those LONG_DATA packets are stored in memory, which would be dangerous);
          • if the driver sends LONG_DATA with length > max_allowed_packet (max_long_data_size, in fact), the server will reject the packet with the error "Parameter of prepared statement which is set through mysql_send_long_data() is longer than 'max_long_data_size' bytes" (tested with both MariaDB server and MySQL server).
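For illustration, the chunking under discussion can be sketched as follows. The class and method names are hypothetical; the point is that each chunk is still an in-memory packet, so chunking by itself cannot bypass the server-side limit on what the server will accept.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of splitting a payload into LONG_DATA-sized chunks.
// Names are assumptions, not driver internals. Each chunk must itself stay
// below max_allowed_packet, and the server still bounds the total parameter
// size via max_long_data_size, so chunking alone does not lift the limit.
class ChunkSplitter {
    static List<byte[]> split(byte[] payload, int chunkSize) {
        List<byte[]> chunks = new ArrayList<>();
        for (int pos = 0; pos < payload.length; pos += chunkSize) {
            int len = Math.min(chunkSize, payload.length - pos);
            byte[] chunk = new byte[len];
            System.arraycopy(payload, pos, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<byte[]> chunks = split(new byte[10], 4);
        System.out.println(chunks.size()); // 3 chunks: 4 + 4 + 2 bytes
    }
}
```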
          lbardon Loïc Bardon added a comment -

          The goal of blobSendChunkSize is to allow sending the blob in chunks rather than in one single packet, thus permitting blobs larger than max_allowed_packet.
          Without this, since max_allowed_packet has a maximum value of 1G, it would be impossible to handle blobs larger than 1G; yet the LONGBLOB datatype has a maximum length of 4GB. See details in the SO discussion linked in my previous comment.

          In MySQL Connector/J, blobSendChunkSize cannot exceed max_allowed_packet (if set larger, it is automatically corrected; see their official doc).
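The correction described here amounts to a simple clamp. A minimal sketch, assuming hypothetical names (this is not actual MySQL Connector/J code):

```java
// Sketch of the documented MySQL Connector/J behaviour: a requested
// blobSendChunkSize larger than max_allowed_packet is silently reduced.
// Class and method names are illustrative assumptions.
class ChunkSizeClamp {
    static int effectiveChunkSize(int requested, int maxAllowedPacket) {
        return Math.min(requested, maxAllowedPacket);
    }

    public static void main(String[] args) {
        System.out.println(effectiveChunkSize(2_000_000, 1_048_576)); // 1048576
    }
}
```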

          If MariaDB will not implement this option and instead automatically uses whatever chunk size is appropriate to upload blobs, that's fine with me (easier to use).
          Is it implemented that way yet? If it is, it doesn't work: I tried storing a 782M file in a blob, and the update statement fails with an "exceeded max_allowed_packet" error despite configuring it to 1G.

          Thanks

          diego dupin Diego Dupin added a comment - edited

          I've been trying to check how that can work with the MySQL driver, since to me it's not normal: the server disallows a LONG_DATA packet with length > max_allowed_packet in MariaDB 10.3, 10.2, 10.1, 10.0 and MySQL 5.7 (and that's normal; that's the role of max_allowed_packet).
          So even if the driver implemented blobSendChunkSize, it wouldn't work. Maybe 5.5 permitted that. The blobSendChunkSize option doesn't make any sense now.

          > I tried storing a 782M file in a blob: update statement fails with "exceeded max_allowed_packet" error despite configuring it to 1G.

          That is not normal.
          I've checked that there was no regression, and successfully sent 1,000,000,000 bytes with max_allowed_packet set to 1G.
          Can you still reproduce this issue?

          diego dupin Diego Dupin added a comment -

          I'm closing this task because the initial goal was implemented. If you still face an issue with max_allowed_packet, feel free to create a dedicated task.


          People

            Assignee: Diego Dupin
            Reporter: Diego Dupin
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:
