Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 2.0.0-RC, 1.6.0, 2.0.1, 1.6.1, 2.0.2
    • Fix Version/s: 2.0.3, 1.6.2
    • Component/s: Other
    • Labels: None

    Description

      As I can't copy-paste code here (our application uses Connector/J inside a complicated, self-written framework), I'll describe the reproduction steps; a self-contained sketch follows the list:

      • Create a table someTable with a field foo of type longtext
      • Use a prepared statement with a simple query like INSERT INTO someTable (foo) VALUES ( ? )
      • Generate test strings with the code below:

        // StringUtils here is org.apache.commons.lang3.StringUtils (Apache Commons Lang);
        // this builds a string of `length` double-quote characters.
        private String createString(int length) {
          return StringUtils.leftPad("", length, "\"");
        }
        

      • Use this to generate one string of 8000000 (8 million) double quotes and one of 10000000 (10 million) double quotes
      • Check the results (SELECT LENGTH(foo) FROM someTable)
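
      Below is a minimal self-contained sketch of these steps in plain JDBC. It is a sketch only: the JDBC URL, credentials, and table setup are assumptions for illustration, and the server's max_allowed_packet must allow a statement of over 10 MB.

        // Minimal reproduction sketch. The URL, credentials, and table are
        // assumptions for illustration, not the actual framework code.
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class TruncationRepro {

          // Same idea as createString above, without the Commons Lang dependency.
          private static String quotes(int length) {
            StringBuilder sb = new StringBuilder(length);
            for (int i = 0; i < length; i++) {
              sb.append('"');
            }
            return sb.toString();
          }

          public static void main(String[] args) throws Exception {
            String url = "jdbc:mariadb://localhost:3306/test?user=root";
            try (Connection conn = DriverManager.getConnection(url)) {
              try (PreparedStatement ps = conn.prepareStatement(
                  "INSERT INTO someTable (foo) VALUES ( ? )")) {
                for (int len : new int[] { 8000000, 10000000 }) {
                  ps.setString(1, quotes(len));
                  ps.executeUpdate();
                }
              }
              try (PreparedStatement ps =
                       conn.prepareStatement("SELECT LENGTH(foo) FROM someTable");
                   ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                  // Expected: 8000000, then 10000000; with the bug the second
                  // row reports 5000000 instead.
                  System.out.println(rs.getLong(1));
                }
              }
            }
          }
        }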

      You'll see (or at least I see) that the 8000000-character string is stored at its full length of 8000000 (as it should be), but the 10000000-character string is stored at half its length: 5000000.

      I think I also found the problem and the solution, so I will add a pull request.

      In our case (where we insert very large JSON strings with a lot of double quotes) this makes us lose data and generate invalid JSON strings. This does not happen in 1.5.9, but does happen in 1.6.x and 2.0.x, as far as I have tested.

      This is a critical bug, as it results in invalid and lost data.

      Attachments

        Activity

          bartlaarhoven Bart Laarhoven created issue -
          bartlaarhoven Bart Laarhoven added a comment - Pull request: https://github.com/MariaDB/mariadb-connector-j/pull/107
          diego dupin Diego Dupin added a comment -

          this is indeed critical, i'm checking now

          bartlaarhoven Bart Laarhoven made changes -
          Field Original Value New Value
          Description {{INSERT INTO someTable (foo) VALUES (?)}} {{INSERT INTO someTable (foo) VALUES ( ? )}} (formatting-only edit; the rest of the description is unchanged)
          diego dupin Diego Dupin made changes -
          Fix Version/s 1.6.2 [ 22560 ]
          Fix Version/s 2.0.3 [ 22559 ]
          diego dupin Diego Dupin made changes -
          Component/s Other [ 12201 ]
          Resolution Fixed [ 1 ]
          Status Open [ 1 ] Closed [ 6 ]
          Tiemo Vorschuetz added a comment - - edited

          This also affects BLOBS. So thank you so much for fixing it! Great!

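          A quick way to check the BLOB case (a sketch only: someBlobTable with a longblob column bar, and the open connection conn from the sketch in the description, are assumptions):

            // Hypothetical BLOB variant of the check above; names are assumed.
            byte[] payload = new byte[10000000];
            java.util.Arrays.fill(payload, (byte) '"');
            try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO someBlobTable (bar) VALUES ( ? )")) {
              ps.setBytes(1, payload);
              ps.executeUpdate();
            }
            // Verify afterwards with: SELECT LENGTH(bar) FROM someBlobTable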
          serg Sergei Golubchik made changes -
          Workflow MariaDB v3 [ 81335 ] MariaDB v4 [ 134997 ]

          People

            Assignee: diego dupin Diego Dupin
            Reporter: bartlaarhoven Bart Laarhoven
            Votes: 0
            Watchers: 3

