[CONJ-367] Large queries give EOFException Created: 2016-10-11  Updated: 2019-09-13  Resolved: 2019-09-13

Status: Closed
Project: MariaDB Connector/J
Component/s: Other
Affects Version/s: 1.5.3
Fix Version/s: N/A

Type: Bug Priority: Major
Reporter: Lars Liljestad Assignee: Diego Dupin
Resolution: Not a Bug Votes: 0
Labels: None
Environment:

CentOS Linux release 7.2.1511 (Core)
infobright-iee_mysql-4.8.3p1


Attachments: Java Source File GenData.java     Java Source File fetch.java     File tab_cre.sql    

 Description   

When running four queries in parallel and fetching one row at a time from each query, the JDBC driver throws an exception:

java.sql.SQLException: java.io.EOFException: unexpected end of stream, read 0 bytes from 6
at org.mariadb.jdbc.internal.queryresults.resultset.MariaSelectResultSet.next(MariaSelectResultSet.java:547)
at fetch.fetchRow(fetch.java:90)
at fetch.main(fetch.java:77)
Caused by: java.io.EOFException: unexpected end of stream, read 0 bytes from 6
at org.mariadb.jdbc.internal.packet.read.ReadPacketFetcher.getReusableBuffer(ReadPacketFetcher.java:155)
at org.mariadb.jdbc.internal.queryresults.resultset.MariaSelectResultSet.readNextValue(MariaSelectResultSet.java:407)
at org.mariadb.jdbc.internal.queryresults.resultset.MariaSelectResultSet.nextStreamingValue(MariaSelectResultSet.java:378)
at org.mariadb.jdbc.internal.queryresults.resultset.MariaSelectResultSet.next(MariaSelectResultSet.java:545)
... 2 more

The code works fine with the MySQL JDBC driver (mysql-connector-java-5.1.20.jar).

How to reproduce:

You need a lot of data to reproduce this problem, ~50Gb.

1) Create two databases, EOF_MYISAM_1 and EOF_MYISAM_2.
2) Run the tab_cre.sql script in both databases to create the tables.
3) Run the Java program GenData to produce table data:
java GenData 10000000 > data.txt
10000000 is the number of rows; it might need adjustment depending on machine speed.
If you are low on disk space, you can redirect to a named pipe and run LOAD DATA INFILE
from that, to avoid needing space for the data.txt file.
4) Do:
LOAD DATA INFILE 'data.txt' INTO TABLE EOF_big_t1
FIELDS TERMINATED BY ',';
in both databases for both EOF_big_t1 and EOF_big_t2.
5) Run:
java -cp <classpath> fetch

The exception is thrown with the MariaDB JDBC driver, but not when running the same thing with the MySQL driver. To use MySQL, you need to change the driver string at the start of the fetch class body.



 Comments   
Comment by Vladislav Vaintroub [ 2016-10-11 ]

You are using streaming results: the queries run, but the whole result is not read from the socket, so the server gets stuck writing to the socket and, after a while (i.e. net_write_timeout), closes the connection. If I remember correctly, MySQL's driver adjusts net_write_timeout on its own. If you wish, you can achieve something similar by passing a large net_write_timeout as a session variable (sessionVariables in the JDBC URL), or you can set it on the server side.
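A minimal sketch of the URL-side workaround described above, assuming a local server and the database name from the report (the helper name is illustrative):

```java
// Sketch: raising net_write_timeout for the session via the JDBC URL.
// Connector/J's sessionVariables option issues the corresponding SET
// when the connection is established. Host, port, and database here
// are placeholders taken from the reproduction setup.
public class UrlExample {

    // Append a sessionVariables parameter to a JDBC URL.
    static String withSessionVariable(String baseUrl, String variable) {
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "sessionVariables=" + variable;
    }

    public static void main(String[] args) {
        String url = withSessionVariable(
                "jdbc:mariadb://localhost:3306/EOF_MYISAM_1",
                "net_write_timeout=10000");
        System.out.println(url);
        // try (Connection conn = DriverManager.getConnection(url, user, pass)) { ... }
    }
}
```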

See (http://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html#sysvar_net_write_timeout)

Comment by Diego Dupin [ 2016-10-11 ]

The current fetch implementation is similar to selecting all results; the difference is that results are read in bulk, with a size corresponding to the fetch size.
The problem occurs when reading the results takes longer than the session variable "net_write_timeout" (default 60s), as in this example: the server will close the socket once "net_write_timeout" is exceeded.

Temporary solutions :

  • During connection, set net_write_timeout to a large value. That can be done by adding the option "sessionVariables=net_write_timeout=10000".
  • Or, for a specific query XXX, replace the query with "SET STATEMENT net_write_timeout=10000 FOR XXX" (server >= 10.0 only).
  • Or, as the MySQL driver does internally: issue "SET net_write_timeout=..." before each query with a fetch size, and reset net_write_timeout to the old value after closing the ResultSet.
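The second option above can be sketched as a small query wrapper (the helper name and query text are illustrative; this requires MariaDB server 10.0 or later):

```java
// Sketch: wrapping a single query in SET STATEMENT so net_write_timeout
// applies only for the duration of that query (MariaDB server >= 10.0).
public class SetStatementExample {

    // Build a "SET STATEMENT ... FOR <query>" wrapper around one query.
    static String withNetWriteTimeout(String query, int seconds) {
        return "SET STATEMENT net_write_timeout=" + seconds + " FOR " + query;
    }

    public static void main(String[] args) {
        String wrapped = withNetWriteTimeout("SELECT * FROM EOF_big_t1", 10000);
        System.out.println(wrapped);
        // stmt.setFetchSize(1);                    // streaming fetch, as in fetch.java
        // ResultSet rs = stmt.executeQuery(wrapped);
    }
}
```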

In the next version, 1.6.0, for CONJ-315, fetch size will be handled differently for server-side prepared statements, with a real fetch size (so if the statement/ResultSet is closed, not all data will be sent to the client).

Comment by Lars Liljestad [ 2016-10-12 ]

I can confirm that the workaround, setting net_write_timeout, works!

Thanks for the fast reply!

Comment by Diego Dupin [ 2016-10-12 ]

Good.
Accordingly, the documentation has been updated: https://mariadb.com/kb/en/mariadb/about-mariadb-connector-j/#streaming-result-sets

Comment by Diego Dupin [ 2019-09-13 ]

Closing, since this is now documented.

Generated at Thu Feb 08 03:15:09 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.