[CONJ-553] RejectedExecutionException sending a large amount of concurrent batches Created: 2017-11-29  Updated: 2017-12-26  Resolved: 2017-12-21

Status: Closed
Project: MariaDB Connector/J
Component/s: Other
Affects Version/s: 2.1.1
Fix Version/s: 2.2.1, 1.7.1

Type: Bug Priority: Minor
Reporter: María Assignee: Diego Dupin
Resolution: Fixed Votes: 0
Labels: None

Attachments: Java Source File Example.java     Java Source File SchedulerServiceProviderHolder.java     Text File SchedulerServiceProviderHolder.patch    

 Description   

RejectedExecutionException is thrown in high load environments, while issuing batch operations.

java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@3f924f3d rejected from java.util.concurrent.ThreadPoolExecutor@7a90c6ab[Running, pool size = 100, active threads = 99, queued tasks = 0, completed tasks = 52]
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
	at org.mariadb.jdbc.internal.protocol.AbstractMultiSend.executeBatchStandard(AbstractMultiSend.java:237)
	at org.mariadb.jdbc.internal.protocol.AbstractMultiSend.executeBatch(AbstractMultiSend.java:189)
	at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeBatch(AbstractQueryProtocol.java:669)
	at org.mariadb.jdbc.internal.protocol.AbstractQueryProtocol.executeBatchStmt(AbstractQueryProtocol.java:586)
	at org.mariadb.jdbc.MariaDbStatement.internalBatchExecution(MariaDbStatement.java:1283)
	at org.mariadb.jdbc.MariaDbStatement.executeBatch(MariaDbStatement.java:1235)
	at ClientStandalone.executeUpdate(ClientStandalone.java:35)
	at ClientStandalone.execute(ClientStandalone.java:55)
	at ClientStandalone.access$0(ClientStandalone.java:49)
	at ClientStandalone$1.run(ClientStandalone.java:80)
	at java.lang.Thread.run(Thread.java:745)

We think that changing the executor rejection policy in class org.mariadb.jdbc.internal.util.scheduler.SchedulerServiceProviderHolder would solve the problem.

Example and patch attached.
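For context, the exception above comes from ThreadPoolExecutor's default AbortPolicy, which throws RejectedExecutionException when the pool is saturated. A minimal standalone sketch (class and pool sizes are illustrative, not the driver's actual code) contrasting AbortPolicy with the CallerRunsPolicy alternative:

```java
import java.util.concurrent.*;

public class RejectionPolicyDemo {
    // A pool with one thread and no spare queue capacity, so a second
    // submission while the worker is busy always triggers the handler.
    static ThreadPoolExecutor saturatedPool(RejectedExecutionHandler handler) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 1, TimeUnit.MINUTES, new SynchronousQueue<>());
        pool.setRejectedExecutionHandler(handler);
        return pool;
    }

    private static void await(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) {
        CountDownLatch block = new CountDownLatch(1);

        // Default AbortPolicy: the second task is rejected with an exception.
        ThreadPoolExecutor abortPool = saturatedPool(new ThreadPoolExecutor.AbortPolicy());
        abortPool.execute(() -> await(block));
        try {
            abortPool.execute(() -> { });
        } catch (RejectedExecutionException e) {
            System.out.println("rejected by AbortPolicy");
        }

        // CallerRunsPolicy: the submitting thread runs the task itself instead.
        ThreadPoolExecutor callerRunsPool = saturatedPool(new ThreadPoolExecutor.CallerRunsPolicy());
        callerRunsPool.execute(() -> await(block));
        callerRunsPool.execute(() ->
                System.out.println("ran on " + Thread.currentThread().getName()));

        block.countDown();
        abortPool.shutdown();
        callerRunsPool.shutdown();
    }
}
```

The second submission to the CallerRunsPolicy pool runs on the main thread, which is the general shape of the attached patch's approach.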



 Comments   
Comment by Diego Dupin [ 2017-11-29 ]

Hi, thanks for reporting this issue.
The patch can work around the issue, but not reliably: with pipelining, using the same thread to send and read can leave the client socket's send buffer and read buffer full at the same time, resulting in a deadlock.

The best solution is that when the thread pool queue is full, the driver must stop using pipelining and fall back to the standard pattern: send query / read result, send next query / read result, and so on.

Can you confirm that the server is < 10.2.4? (The driver would then use a dedicated batch protocol with better performance results.)
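The fallback described above (submit the read task to the pool when possible, otherwise run it synchronously on the calling thread) can be sketched as follows. This is a hypothetical helper, not the driver's actual internals:

```java
import java.util.concurrent.*;

public class PipelineFallback {
    // Hypothetical helper: try to run the result-reading task on the shared
    // pool (pipelined mode); if the pool is saturated, fall back to running
    // it synchronously on the caller (plain send-query/read-result mode).
    static boolean runPipelinedOrFallback(ExecutorService pool, Runnable readTask) {
        try {
            pool.execute(readTask);
            return true;               // pipelined: reader runs concurrently
        } catch (RejectedExecutionException saturated) {
            readTask.run();            // sequential: caller reads results itself
            return false;
        }
    }

    public static void main(String[] args) {
        // A deliberately tiny pool so the second submission is rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 1, TimeUnit.MINUTES, new SynchronousQueue<>());
        CountDownLatch busy = new CountDownLatch(1);
        pool.execute(() -> {
            try { busy.await(); } catch (InterruptedException ignored) { }
        });

        boolean pipelined = runPipelinedOrFallback(pool,
                () -> System.out.println("reading results on " + Thread.currentThread().getName()));
        System.out.println("pipelined = " + pipelined);

        busy.countDown();
        pool.shutdown();
    }
}
```

Unlike the blocking-handler patch, the caller never waits for pool capacity; it simply degrades to the non-pipelined protocol for that batch.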

Comment by María [ 2017-11-29 ]

Hi,
Server version is 10.2.10.

Comment by Diego Dupin [ 2017-11-29 ]

Bad assumption on my part, I read the example too quickly: it uses Statement.addBatch(), not PreparedStatement.addBatch().

Comment by María [ 2017-11-30 ]

What about something like this? Re-enqueueing the rejected runnables: if the queue is full, threadPool.execute() will block.
I don't know whether this could cause blocking due to incorrect ordering of I/O operations.

@Override
public ThreadPoolExecutor getBulkScheduler() {
    final int queueSize = 200; // magic number
    final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(queueSize);
    final ThreadPoolExecutor tp = new ThreadPoolExecutor(5, 100, 1, TimeUnit.MINUTES, queue,
            new MariaDbThreadFactory("bulk"));
    tp.setRejectedExecutionHandler(new RejectedExecutionHandler() {
        @Override
        public void rejectedExecution(final Runnable r, final ThreadPoolExecutor executor) {
            try {
                // Block until the queue has room instead of aborting.
                tp.getQueue().put(r);
            } catch (final InterruptedException ie) {
                Thread.currentThread().interrupt(); // restore the interrupt flag
                throw new RejectedExecutionException(ie);
            }
        }
    });
    return tp;
}

Comment by Diego Dupin [ 2017-11-30 ]

wlad suggested a smart idea: the best approach would be non-blocking sockets. The driver could send until EWOULDBLOCK (buffer full), which would avoid the need for another thread entirely. That means using NIO2, possibly NIO.

A possible implementation using NIO channels seems to provide that information.
From https://docs.oracle.com/javase/7/docs/api/java/nio/channels/SocketChannel.html#write(java.nio.ByteBuffer):

"A socket channel in non-blocking mode, for example, cannot write any more bytes than are free in the socket's output buffer."
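The behavior the javadoc describes can be demonstrated without a network connection by using java.nio.channels.Pipe, whose sink channel also supports non-blocking mode: writes return 0 once the internal buffer is full, instead of blocking. A minimal sketch (the Pipe stands in for the socket; this is not Connector/J code):

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class NonBlockingWriteDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Pipe.SinkChannel sink = pipe.sink();
        sink.configureBlocking(false);      // non-blocking mode

        ByteBuffer chunk = ByteBuffer.allocate(8192);
        long total = 0;
        while (true) {
            chunk.clear();
            int written = sink.write(chunk);
            if (written == 0) {             // buffer full: the EWOULDBLOCK case
                break;
            }
            total += written;
        }
        System.out.println("wrote " + total + " bytes before the buffer filled up");

        sink.close();
        pipe.source().close();
    }
}
```

On a real SocketChannel a Selector would signal OP_WRITE readiness when the buffer drains, letting a single thread resume sending without ever blocking on the write.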

Comment by Diego Dupin [ 2017-12-13 ]

Using non-blocking sockets is a solution that requires profound changes and cannot be done correctly in the next maintenance release.
That will be handled in CONJ-447.

Until then, there are different solutions, but the only reliable one is to disable pipelining: this avoids any timeout issue from queuing, and performs better anyway (there were already more than 100 batches running from one client).

Comment by María [ 2017-12-26 ]

Thank you!

Generated at Thu Feb 08 03:16:36 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.