[CONJ-1029] After upgrading to MariaDB JDBC Driver for Java version 3.1.0 I'm now receiving error message SQL state [HY000]; error code [1210] Incorrect arguments to mysqld_stmt_bulk_execute Created: 2022-11-25  Updated: 2023-01-12  Resolved: 2022-12-21

Status: Closed
Project: MariaDB Connector/J
Component/s: Other
Affects Version/s: 3.1.0
Fix Version/s: 3.1.1

Type: Bug Priority: Major
Reporter: Thierry Giguere Assignee: Diego Dupin
Resolution: Fixed Votes: 2
Labels: None
Environment:

Java 19.0.1 / JDBC Driver 3.1.0 / MariaDb 10.6.11


Attachments: Text File calcenginegw-mariadb-bulkinsert-failure.log     File test-maria-db-client-bulk-insert-pipeline-bigdecimal.tgz    
Issue Links:
Duplicate
duplicates CONJ-1035 PreparedStatement Insert regression ... Closed
Relates
relates to CONJ-1015 pipelining sending multiple packet to... Closed

 Description   

After upgrading to the new JDBC driver for Java, version 3.1.0, I'm now receiving an error message I've never seen before:

create table mytable
(
p7 bigint not null comment 'p7 Unique Id',
p1 decimal(10,6) not null comment 'p1',
p2 decimal(10,6) not null comment 'p2',
p3 decimal(10,6) comment 'p3',
p4 decimal(10,6) not null comment 'p4',
p5 decimal(10,6) not null comment 'p5',
p6 decimal(10,6) comment 'p6'
) ENGINE = MEMORY;

INSERT INTO mytable (p1, p2, p3, p4, p5, p6, p7)
VALUES (?, ?, ?, ?, ?, ?, ?);

SQL state [HY000]; error code [1210]; (conn=8568) Incorrect arguments to mysqld_stmt_bulk_execute; nested exception is java.sql.BatchUpdateException: (conn=8568) Incorrect arguments to mysqld_stmt_bulk_execute;

In Java, parameter p7 is a long primitive while parameters p1 to p6 are BigDecimal.

Some SQL inserts succeed, others fail.

With JDBC driver version 3.0.8 I didn't have any problems.

I suspect it's related to CONJ-1015, but I don't have much more information.

JDBC Driver configuration

useBulkStmts=true
allowLocalInfile=false
useCompression=true
useServerPrepStmts=true
useSSL=true
trustServerCertificate=false
enabledSslProtocolSuites=TLSv1.3
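For context, a minimal sketch of opening a connection with this exact configuration (host, database, and credentials below are placeholders, not values from this report):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ReportedConfig {

    // The driver options exactly as listed in this report.
    static Properties reportedConfig() {
        Properties props = new Properties();
        props.setProperty("useBulkStmts", "true");
        props.setProperty("allowLocalInfile", "false");
        props.setProperty("useCompression", "true");
        props.setProperty("useServerPrepStmts", "true");
        props.setProperty("useSSL", "true");
        props.setProperty("trustServerCertificate", "false");
        props.setProperty("enabledSslProtocolSuites", "TLSv1.3");
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties props = reportedConfig();
        props.setProperty("user", "myuser");         // placeholder credentials
        props.setProperty("password", "mypassword");
        // Placeholder host/database; requires a reachable MariaDB server.
        try (Connection c = DriverManager.getConnection(
                "jdbc:mariadb://dbhost:3306/mydb", props)) {
            // ... run the batch insert here ...
        }
    }
}
```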

I will soon provide more detailed information at trace level.



 Comments   
Comment by Diego Dupin [ 2022-11-25 ]

Thanks for providing trace-level info. Can you indicate whether you use some sort of proxy, and if so, its version as well? I suspect the issue occurs at that level.

The trace info will confirm it, but if the CONJ-1015 change is the cause of the issue, disabling the permitPipeline option is a workaround.
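A sketch of that workaround as a connection-string parameter (host and database are placeholders; the spelling disablePipeline below matches the option name tried elsewhere in this thread, and the exact name may depend on the driver version):

```java
public class PipelineWorkaroundUrl {

    // Builds a JDBC URL with pipelining disabled. Host/db are hypothetical.
    static String withPipelineDisabled(String host, String db) {
        return "jdbc:mariadb://" + host + "/" + db
                + "?useBulkStmts=true&useServerPrepStmts=true"
                + "&disablePipeline=true"; // the proposed workaround
    }

    public static void main(String[] args) {
        System.out.println(withPipelineDisabled("localhost:3306", "test"));
    }
}
```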

Comment by Thierry Giguere [ 2022-11-25 ]

Using an AWS RDS fully managed version of MariaDB. No Multi-AZ. No RDS Proxy.

With permitPipeline=false added, I still get the SQLException.

I just took a look at the individual commit: https://github.com/mariadb-corporation/mariadb-connector-j/commit/eee5184a#diff-7e2ab5e25a936263af07174854494c6233782687b7a2e268391b1422e6f43e70

So I also tried disablePipeline=true, but I still get the SQLException.

I will now try JDBC driver version 3.0.9 to narrow down the problem.

Comment by Thierry Giguere [ 2022-11-25 ]

Version 3.0.9 is fine, so the problem seems to have been introduced between 3.0.9 and 3.1.0.

Comment by Thierry Giguere [ 2022-11-25 ]

I will also investigate the packet size, either by playing with the maxAllowedPacket option and/or by sending small batches at the Java level instead of thousands of rows. More to come.
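A minimal sketch of the small-batches idea (chunk size, column list, and table name are illustrative, not from this report): flush with executeBatch() every N rows instead of accumulating one giant batch.

```java
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ChunkedBatchInsert {
    static final int BATCH_SIZE = 100; // illustrative chunk size, below the 152-row threshold seen here

    // True whenever a full chunk of rows has been accumulated.
    static boolean shouldFlush(long rowsAdded) {
        return rowsAdded % BATCH_SIZE == 0;
    }

    static void insertChunked(Connection connection, long totalRows) throws Exception {
        try (PreparedStatement ps = connection.prepareStatement(
                "INSERT INTO mytable (c1, c2) VALUES (?, ?)")) {
            long added = 0;
            for (long i = 0; i < totalRows; i++) {
                ps.setLong(1, i);
                ps.setBigDecimal(2, new BigDecimal("1.1"));
                ps.addBatch();
                added++;
                if (shouldFlush(added)) {
                    ps.executeBatch(); // send this chunk to the server
                }
            }
            ps.executeBatch(); // flush any remaining rows
        }
    }
}
```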

Comment by Thierry Giguere [ 2022-11-28 ]

Additional info:

For the very same database, it works in a different scenario (a different table/statement):

create table table2
(
c1 timeStamp(6) not null default CURRENT_TIMESTAMP(6),
c2 timeStamp(6) not null default CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
c3 varchar(30) not null default LOWER(SUBSTRING_INDEX(USER(), '@', 1)),
c4 date not null,
c5 char(2) not null,
c6 varchar(255),
primary key (c4, c5)
);

INSERT INTO table2 (c3, c4, c5, c6)
VALUES (?, ?, ?, ?);

So I think we can rule out proxies. The storage engine here is InnoDB, not MEMORY. The table does have a primary key. Both tables are empty before the insertion. The batch size here is around 600 rows. There is no decimal(10,6) field in this table.

Comment by Thierry Giguere [ 2022-11-30 ]

I attached a simple plain Java main program to reproduce the problem.

A basic table:

CREATE TABLE mytable
(
c1 bigint not null,
c2 decimal(10,6) not null,
c3 decimal(10,6) not null,
c4 decimal(10,6),
c5 decimal(10,6),
c6 decimal(10,6),
c7 decimal(10,6)
)

A basic Java program:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class TestMariaDbBatchUpdateBulInsertPipeline {

    private static final String[] VALUES =
            {"1.1", "2.22", "3.333", "4.4444", "5.55555", "6.666666"};

    public static void main(String[] args) throws Exception {
        if (args.length < 5) {
            System.out.println("\n\nUsage host port database user password keystorePassword\n");
            return;
        }
        DriverManager.registerDriver((Driver) Class.forName("org.mariadb.jdbc.Driver").newInstance());
        try (Connection connection = DriverManager.getConnection(
                "jdbc:mariadb://" + args[0] + ":" + args[1] + "/" + args[2]
                        + "?user=" + args[3] + "&password=" + args[4]
                        + "&useBulkStmts=true&allowLocalInfile=false&useCompression=true"
                        + "&useServerPrepStmts=true&useSSL=true&trustServerCertificate=true"
                        + "&enabledSslProtocolSuites=TLSv1.3")) {
            try (PreparedStatement ps = connection.prepareStatement(
                    "INSERT INTO mytable (c1, c2, c3, c4, c5, c6, c7) VALUES (?, ?, ?, ?, ?, ?, ?)")) {
                // STOPS WORKING AT 152 ROWS AND CRASHES WITH
                // java.sql.SQLException: (conn=21803) Incorrect arguments to mysqld_stmt_bulk_execute
                for (long i = 0; i < 200; i++) {
                    ps.setLong(1, i);
                    for (int j = 0; j < 6; j++) {
                        ps.setBigDecimal(j + 2, new BigDecimal(VALUES[j]));
                    }
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
    }
}

Some conclusions:
Works for small numbers of rows but stops working at 152 rows and more
Over 152 rows, only the first 151 rows are inserted
Works well in all scenarios with version 3.0.9
Enabling/disabling compression doesn't change the outcome

Comment by Thierry Giguere [ 2022-12-06 ]

Using double works well:

ps.setDouble(j + 2, Double.valueOf(VALUES[j]));

instead of

ps.setBigDecimal(j + 2, new BigDecimal(VALUES[j]));

and I can insert over 100,000 rows. So it sounds like the problem is limited to BigDecimal usage.
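One caveat with that workaround, shown in a small standalone sketch (the 5.55555 value comes from the test data above): a decimal(10,6) value is generally not exactly representable as a binary double, so switching from setBigDecimal to setDouble trades exactness for a value that is merely close.

```java
import java.math.BigDecimal;

public class DoubleVsBigDecimal {

    // True iff the double literal 5.55555 equals the exact decimal 5.55555.
    static boolean doubleIsExact() {
        return new BigDecimal(5.55555d).equals(new BigDecimal("5.55555"));
    }

    public static void main(String[] args) {
        // Double.toString prints the shortest round-tripping decimal...
        System.out.println(Double.valueOf("5.55555"));
        // ...but the underlying binary value is only close to 5.55555:
        System.out.println(new BigDecimal(5.55555d)); // a long expansion near 5.55555
        // Constructing BigDecimal from the String keeps the exact value:
        System.out.println(new BigDecimal("5.55555"));
    }
}
```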

Comment by Gaël Jourdan-Weil [ 2022-12-09 ]

We are facing this issue as well when upgrading the client from 3.0.9 to 3.1.0.

If needed I can try to provide reproduction cases.

Comment by Diego Dupin [ 2022-12-21 ]

All right: after correcting CONJ-1035, it turns out this comes from the same problem.
The correction is available through the snapshot for now, and will be released in 3.1.1.

Comment by Gaël Jourdan-Weil [ 2023-01-12 ]

Is it really fixed in 3.1.1? I don't see any reference to this issue in the 3.1.1 changelog nor in the commit history.

Comment by Thierry Giguere [ 2023-01-12 ]

I can see the fix for CONJ-1035 in the changelog. It is change https://github.com/mariadb-corporation/mariadb-connector-j/commit/7399a260

Comment by Thierry Giguere [ 2023-01-12 ]

Using the test provided in this issue, I confirmed that switching to version 3.1.1 works. Thanks!

Generated at Thu Feb 08 03:20:05 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.