[MXS-1146] JDBC connection dropping transaction when connecting to MaxScale directly Created: 2017-02-23  Updated: 2017-05-18  Resolved: 2017-05-18

Status: Closed
Project: MariaDB MaxScale
Component/s: readwritesplit
Affects Version/s: 2.0.4
Fix Version/s: 2.2.0

Type: Bug Priority: Critical
Reporter: Bruno Šimić (Inactive) Assignee: markus makela
Resolution: Fixed Votes: 0
Labels: None
Environment:

JDBC Version 1.5.5
10.1.21-MariaDB-enterprise galera cluster on Linux Ubuntu 14.04.4 LTS


Attachments: Text File exception.txt     File maxscale.cnf    
Issue Links:
Duplicate
is duplicated by MXS-1214 Streaming Insert Filter gives errors Closed
Sprint: 2017-29, 2017-30, 2017-31, 2017-32, 2017-33

 Description   

For an unknown reason, the statement suddenly aborts with an error message (attached in exception.txt).

On the node associated with the MaxScale there is the following error entry:
Feb 23 12:04:32 mariadbcluster2 mysqld: 2017-02-23 12:04:32
140255419796224 [warning] aborted connection 11242 to db: 'E2_MC_XX'
user: 'E2_USER_MC_XX' host: '192.168.22.181' (got error reading
communication packets)

No error messages are written to the MaxScale log.

The statements copy PDF documents and images (as BLOBs) into the corresponding table in a database.

This PreparedStatement is used:
INSERT INTO DOCUMENTS
(DOCUMENT_ID,FILENAME,REMARKS,PARENT_FOLDER_ID,PREVIEW,DOCUMENT,IS_FOLDER,BUZZWORD1,BUZZWORD2,BUZZWORD3,BUZZWORD4,BUZZWORD5,BUZZWORD6,BUZZWORD7,BUZZWORD8,BUZZWORD9,BUZZWORD10,DOCUMENT_TYPE,WAS_EXPORTED,CLIENT_ID,DOCUMENT_BYTE_SIZE,PREVIEW_BYTE_SIZE,IS_SIGNED,DOCUMENTDATE)
VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)

The problem occurs randomly, and only when this statement is executed. Sometimes after 100 inserts, other times after more than 4000 inserts.

Direct connections to any of the three nodes run without problems,
so it appears to be a MaxScale-only problem.

We cannot set up MaxScale as readwritesplit, since the read and write operations would be in one transaction (on the application side!).



 Comments   
Comment by markus makela [ 2017-03-02 ]

Are the inserted documents larger than 16MB? If so, this is probably caused by a bug in the client protocol code when packets larger than 2^24 bytes are being processed.
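For context, a minimal sketch of the protocol rule referred to here (not code from MaxScale): the MySQL/MariaDB client protocol caps a single packet payload at 2^24 - 1 bytes (0xFFFFFF), so larger payloads are split across multiple wire packets, and a payload that is an exact multiple of 0xFFFFFF is terminated by an empty packet.

```java
public class MysqlPacketSplit {
    // Maximum payload of one wire packet in the MySQL/MariaDB client protocol.
    static final int MAX_PAYLOAD = 0xFFFFFF;

    // Number of wire packets needed for a payload of the given size.
    // floor(n / 0xFFFFFF) maximum-size packets, plus one final packet
    // (which is empty when n is an exact multiple of 0xFFFFFF).
    static long packetCount(long payloadBytes) {
        return payloadBytes / MAX_PAYLOAD + 1;
    }

    public static void main(String[] args) {
        System.out.println(packetCount(10_000));            // small insert: 1 packet
        System.out.println(packetCount(MAX_PAYLOAD));       // exact multiple: 2 packets
        System.out.println(packetCount(20L * 1024 * 1024)); // 20 MiB blob: 2 packets
    }
}
```

Documents above roughly 16 MB would therefore exercise this multi-packet path, which is the code path suspected here.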

Comment by Bruno Šimić (Inactive) [ 2017-03-02 ]

No, the documents are smaller.

Comment by markus makela [ 2017-03-03 ]

I managed to reproduce this with the following program.

package mxs1146;
 
import java.sql.Blob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Time;
import java.util.Random;
 
public class MXS1146 {
 
    public static void main(String[] args) {
        try (Connection connection = DriverManager.getConnection("jdbc:mariadb://192.168.121.164:4008/test?user=skysql&password=skysql")) {
            for (int x = 0; x < 1000; x++) {
                System.out.println(x);
                Statement stmt = connection.createStatement();
                stmt.execute("drop table if exists DOCUMENTS");
                stmt.execute("create table DOCUMENTS (DOCUMENT_ID int, FILENAME varchar(255),"
                        + " REMARKS varchar(255), PARENT_FOLDER_ID int, PREVIEW blob, DOCUMENT blob, "
                        + "IS_FOLDER tinyint, BUZZWORD1 varchar(255), BUZZWORD2 varchar(255), "
                        + "BUZZWORD3 varchar(255), BUZZWORD4 varchar(255), BUZZWORD5 varchar(255), "
                        + "BUZZWORD6 varchar(255), BUZZWORD7 varchar(255), BUZZWORD8 varchar(255), "
                        + "BUZZWORD9 varchar(255), BUZZWORD10 varchar(255), DOCUMENT_TYPE varchar(255), "
                        + "WAS_EXPORTED tinyint, CLIENT_ID int, DOCUMENT_BYTE_SIZE int, "
                        + "PREVIEW_BYTE_SIZE int, IS_SIGNED tinyint, DOCUMENTDATE datetime) ENGINE=BLACKHOLE");
 
                try (PreparedStatement ps = connection.prepareStatement("INSERT INTO DOCUMENTS "
                        + "(DOCUMENT_ID, FILENAME, REMARKS, PARENT_FOLDER_ID, PREVIEW, DOCUMENT, "
                        + "IS_FOLDER, BUZZWORD1, BUZZWORD2, BUZZWORD3, BUZZWORD4, BUZZWORD5, "
                        + "BUZZWORD6, BUZZWORD7, BUZZWORD8, BUZZWORD9, BUZZWORD10, DOCUMENT_TYPE, "
                        + "WAS_EXPORTED, CLIENT_ID, DOCUMENT_BYTE_SIZE, PREVIEW_BYTE_SIZE, IS_SIGNED, "
                        + "DOCUMENTDATE) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")) {
                    for (int i = 0; i < 1000; i++) {
                        Random r = new Random();
                        byte[] b = new byte[10000];
                        r.nextBytes(b);
                        Blob blob1 = connection.createBlob();
                        Blob blob2 = connection.createBlob();
                        blob1.setBytes(1, b);
                        blob2.setBytes(1, b);
 
                        ps.setInt(1, 0);
                        ps.setString(2, "some_file.png");
                        ps.setString(3, "Remarks");
                        ps.setInt(4, 1);
                        ps.setBlob(5, blob1);
                        ps.setBlob(6, blob2);
                        ps.setInt(7, 0);
                        ps.setString(8, "Buzzword1");
                        ps.setString(9, "Buzzword2");
                        ps.setString(10, "Buzzword3");
                        ps.setString(11, "Buzzword4");
                        ps.setString(12, "Buzzword5");
                        ps.setString(13, "Buzzword6");
                        ps.setString(14, "Buzzword7");
                        ps.setString(15, "Buzzword8");
                        ps.setString(16, "Buzzword9");
                        ps.setString(17, "Buzzword10");
                        ps.setString(18, "Image");
                        ps.setInt(19, 0);
                        ps.setInt(20, 123);
                        ps.setInt(21, 10000);
                        ps.setInt(22, 10000);
                        ps.setInt(23, 0);
                        ps.setTime(24, new Time(System.currentTimeMillis()));
                        ps.execute();
                    }
                }
            }
        } catch (SQLException ex) {
            System.out.println("SQLException: " + ex.getMessage());
        }
    }
}

Comment by markus makela [ 2017-03-07 ]

Since then, I haven't been able to reproduce the problem.

Comment by markus makela [ 2017-03-22 ]

How often do the inserts happen and how often do the errors happen? What is the value of wait_timeout on the master?

Comment by markus makela [ 2017-03-24 ]

This could be related to MXS-827.

Comment by markus makela [ 2017-03-31 ]

This is possibly caused and fixed by MXS-1203.

Comment by Bruno Šimić (Inactive) [ 2017-04-19 ]


The error occurs during the migration of Oracle tablespaces into MariaDB: a result set is opened on the corresponding table in the Oracle tablespace, and while iterating through its rows, the values are written into the corresponding MariaDB table via a PreparedStatement.
The customer tried 20 times, each time with the same result, but always after a different number of imported rows (or a different amount of time until the error occurred).
It worked after removing the MaxScale connection and writing directly to one of the three Galera nodes.

wait_timeout = 28800

Comment by markus makela [ 2017-04-21 ]

bruno Does the JDBC PreparedStatement use batch queries? If so, setting the batch size to one might help fix this. The MariaDB Connector/J documentation tells us to set useBatchMultiSend=false (and optionally useBatchMultiSendNumber=1) in the JDBC connection string to disable batching.
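A sketch of the suggested connection string (host, port, database, and credentials below are placeholders, not taken from this issue; the two batching parameters are the point):

```java
public class Mxs1146BatchDisable {
    // Builds a Connector/J URL with multi-statement batch sending disabled,
    // as suggested in the comment above.
    static String buildUrl() {
        return "jdbc:mariadb://maxscale-host:4008/test"
                + "?user=appuser&password=secret"
                + "&useBatchMultiSend=false"     // send batched statements one at a time
                + "&useBatchMultiSendNumber=1";  // optional: cap grouped sends at one
    }

    public static void main(String[] args) {
        System.out.println(buildUrl());
    }
}
```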

Comment by markus makela [ 2017-05-18 ]

Closing as fixed in 2.2.0. If the problem persists, please reopen the issue.

Generated at Thu Feb 08 04:04:34 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.