[CONJS-240] Repeatedly calling the same procedure gets a release prepare error. Created: 2023-02-06  Updated: 2023-02-14  Resolved: 2023-02-08

Status: Closed
Project: MariaDB Connector/node.js
Component/s: execute
Affects Version/s: None
Fix Version/s: 3.1.1

Type: Bug Priority: Critical
Reporter: Tom Miller Assignee: Diego Dupin
Resolution: Fixed Votes: 0
Labels: None
Environment:

node.js connector 3.02
mariadb 10.6.7
ubuntu linux 20.04



 Description   

(SQLState: HY000) Unknown prepared statement handler (32) given to mysqld_stmt_execute

You can find info here: https://github.com/sidorares/node-mysql2/issues/805

We are getting close to going live and I can't find a workaround.

The main difference here is I am not doing a manual prepare, but it sounds like the same problem of releasing the prepare object but not cleaning up a pointer list. This stored proc runs every 1m 30s. This runs fine for about an hour and then I start getting the error. There is a TRANSACTION / COMMIT in the stored procedure, so it isn't leaving something open.
==========================================================

mdbPool = getPool();
mdbConn = await mdbPool.getConnection();
// Use a stored procedure to select and set a job number to prevent duplicate sending of emails.

const resultSpPrepare = await mdbConn.execute("CALL EM_Email_Prepare_To_Send_U_sp(?);", [
  data.RequestUserId,
]);

// Check for db error
if (JSON.stringify(resultSpPrepare).includes("@full_error")) {
  const dbResponseErrorMsg = JSON.stringify(resultSpPrepare[0][0]["@full_error"]);
  dlog("Throw", dbResponseErrorMsg);
  throw { name: "Database Error:", message: dbResponseErrorMsg };
}

// No error, get returned ID to process emails
const jobNumber = resultSpPrepare[0][0].JobNumber;
=====================================================
DELIMITER //
CREATE OR REPLACE PROCEDURE EM_Email_Prepare_To_Send_U_sp(IN pin_RequestUserId BigInt(20))
NO SQL
SQL SECURITY INVOKER
BEGIN
DECLARE aTimeStamp DateTime DEFAULT UTC_TIMESTAMP(3);
DECLARE aJobNumber BigInt(20);
DECLARE aDbId nVarChar(50);

DECLARE EXIT HANDLER FOR SQLEXCEPTION BEGIN
ROLLBACK;
GET DIAGNOSTICS CONDITION 1 @sqlstate = RETURNED_SQLSTATE,
@errno = MYSQL_ERRNO, @text = MESSAGE_TEXT;
SET @full_error = CONCAT("ERROR ", @errno, " (", @sqlstate, "): ", @text);
SELECT @full_error;
END;

SELECT KeyValue INTO aDbId FROM EM_Config WHERE SubSys = 'CORE' AND KeyName = 'coreDatabaseId';
SET aJobNumber = NEXT VALUE FOR EM_EmailSeq;

START TRANSACTION;
UPDATE EM_Email SET
emProcJobNumber = aJobNumber,
emProcUTCTimeStamp = atimeStamp,
UpdateUserId = pin_RequestUserId,
UpdateUTCTimeStamp = aTimeStamp
WHERE ((emProcStatus = 'Queued') AND (emProcJobNumber = 0)) or
((emProcStatus = 'Retry') AND (emProcJobNumber = 0) AND
(DATE_ADD(emProcUTCTimeStamp, INTERVAL (POWER(emProcTryCount, 2.5)) HOUR) < NOW()));

COMMIT;
SELECT aJobNumber AS JobNumber;
END //
DELIMITER ;
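The retry condition in the UPDATE above backs off polynomially with the try count. A quick sketch (in JavaScript, just to illustrate the arithmetic of `POWER(emProcTryCount, 2.5)`) of the delays it produces:

```javascript
// The WHERE clause re-queues a 'Retry' email once tryCount^2.5 hours
// have passed since the last attempt. The delays that formula produces:
for (const tryCount of [1, 2, 3, 4]) {
  const hours = Math.pow(tryCount, 2.5);
  console.log(`try ${tryCount}: wait ${hours.toFixed(1)} h`);
}
// try 1: 1.0 h, try 2: ~5.7 h, try 3: ~15.6 h, try 4: 32.0 h
```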



 Comments   
Comment by Diego Dupin [ 2023-02-06 ]

The only way this can occur is a race condition releasing a prepare from the cache before execution.
This is strange, because node.js being single-threaded usually avoids this kind of problem.
I'll check that in detail this afternoon.

Comment by Diego Dupin [ 2023-02-06 ]

Can you confirm you never use the prepare + execute type of implementation? i.e. something like:

  const prepare = await conn.prepare('SELECT * FROM mysql.user where host = ?');
  try {
     const res = await prepare.execute(['localhost']);
  } finally {    
     prepare.close();
  }

Comment by Diego Dupin [ 2023-02-06 ]

The only way this can occur is that, since the PREPARE cache can be used in I/O threads, it can lead to race conditions.
The prepare cache needs to be slightly changed so that it is used in events only, to ensure the cache is only used in the node.js main thread.
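A minimal sketch (hypothetical names, not the connector's actual code) of the race described above: a shared cache hands out a prepared-statement entry, another code path closes it (releasing the server-side statement id), and a later execute then uses a stale id, producing exactly the "Unknown prepared statement handler" error from the report.

```javascript
// Fake server tracking which prepared-statement ids are still alive.
class FakeServer {
  constructor() { this.stmts = new Set(); this.nextId = 1; }
  prepare() { const id = this.nextId++; this.stmts.add(id); return id; }
  close(id) { this.stmts.delete(id); }
  execute(id) {
    if (!this.stmts.has(id)) {
      throw new Error(`Unknown prepared statement handler (${id})`);
    }
    return "ok";
  }
}

const server = new FakeServer();
const cache = new Map(); // sql text -> statement id

function getPrepare(sql) {
  if (!cache.has(sql)) cache.set(sql, server.prepare());
  return cache.get(sql);
}

const id = getPrepare("CALL some_proc(?)"); // caller keeps a reference
server.close(id);                           // cache eviction releases it server-side
cache.delete("CALL some_proc(?)");

// The stale reference is still used afterwards:
let err;
try { server.execute(id); } catch (e) { err = e.message; }
console.log(err); // Unknown prepared statement handler (1)
```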

Comment by Tom Miller [ 2023-02-07 ]

I can confirm that we don't manually "prepare" any statements. I actually return the connection to the pool as most of the time there aren't any new emails waiting to be sent. So 99% of the time it tries to update records to put them into a batch and doesn't update anything. Please let me know when you have a build of 3.1 and I will be happy to alpha/beta test it.

The scheduler was actually set to 60 seconds. I will increase the time between each job for now. And I am still willing to test the fix for you.

Comment by Diego Dupin [ 2023-02-07 ]

There cannot be any use in an I/O thread in current connectors, so that was a wrong assessment.

I've changed the implementation to ensure that a Prepare result cannot be used once closed. Even though I haven't been able to reproduce this issue, the correction should avoid this kind of situation.
(commit https://github.com/mariadb-corporation/mariadb-connector-nodejs/commit/14f770456652b3511a443b9dd8eff03293cfb7a4)

The develop branch contains the correction.

Comment by Diego Dupin [ 2023-02-07 ]

About testing, you can already try it by cloning the project:

git clone https://github.com/mariadb-corporation/mariadb-connector-nodejs.git
cd mariadb-connector-nodejs
git checkout develop

and in your project, change package.json to point the mariadb dependency at the connector project, like:

"dependencies": {
    ...
    "mariadb": "/path/to/connector/mariadb-connector-nodejs",
    ...

Comment by Diego Dupin [ 2023-02-08 ]

1222tmiller since 3.1.0 is being released, I'm closing this task in order to get the correction into the new release. However, since I haven't reproduced the problem, I can't be sure that this solves your issue. If it does not, please create a new Jira entry referring to this one.

Comment by Tom Miller [ 2023-02-08 ]

I just finished deploying it on our test server. I should know in a couple of hours if it solved the problem.

Comment by Tom Miller [ 2023-02-08 ]

Just pointing to the directory didn't work. I added the appropriate modules to the project, moved it into node_modules, and renamed the directory to mariadb. But it doesn't seem to fix the problem. If you want me to run some tests or anything else to see what happens in a live system, just let me know.

I have done a lot of pool/warehouse work in other languages. Calling it a warehouse shows my age, but it was very applicable, as you would treat the pool like traditional inventory and use a first in / last out inventory methodology, basically a stack methodology: you pop a connection off the top and return it to the bottom. With 50 connections, running every minute with not much traffic, it should take about 30 minutes for a connection to be reused.

And are you using two lists, one for tracking in-use connections and one for unused connections (inventory)? The documentation recommends not using min/max settings and having a fixed number of connections. I am assuming this is because you have one object list tracking both used and available connections. If you use two lists, min/max is inconsequential, because when you request a new connection and the inventory is empty, you create a new one. Unfortunately, I am a newbie with JavaScript and am not able to do this myself. You should also have a setting or algorithm for how quickly you reduce inventory once traffic falls off.

I wish there was a way to reset a connection when returning it to the pool. Having commits in the stored procedure means that by the time I am returning it to the pool, I am not worried about a rollback of any kind. I am also worried that simple selects not in a transaction are leaving things open long after I have used the data returned by the Stored Procedure or Select statement.

conn.release(true);

The default would be false; passing true would expedite resetting the connection and block it from being reused until the reset completes.

Let me know how I can help.

Comment by Diego Dupin [ 2023-02-09 ]

1222tmiller hmm, ok. Could you confirm you get the same type of "Unknown prepared statement handler" error?

The reason for recommending that the minimum number of connections equal the maximum is not due to the implementation: if you want an application or website capable of handling unexpected bursts (peaks) without stumbling, you pretty much have to avoid creating connections on demand. Trying to spool up new connections while a peak is occurring is almost guaranteed to make things worse. Specifically for node.js, keeping some idle connections costs nearly nothing: unlike some other languages, there is no multithreaded context, so no context switching.

Comment by Tom Miller [ 2023-02-09 ]

Yes, getting the same error.

I switched out all of my code to use a connection vs a pool and it works fine now. But I would like to use the pool capability. We are getting down to the last two weeks before release. And you are working on trying to get 3.1 out the door. Maybe we pick this back up for your 3.1.1 release. I will have more time to help you set up something locally that mimics my setup.

About setting it to the maximum: that makes sense. I have often used a warehouse for complicated screens with thousands of objects where the screen takes up a lot of memory. But I still think it is good best practice to use a first in / last out methodology for pools/warehouses.

Comment by Diego Dupin [ 2023-02-10 ]

>I switched out all of my code to use a connection vs a pool and it works fine now
That's good information; now I know at least where to search.

>But I still think it is a good best practice to do a first in / last out methodology for pools/warehouses.
That's the case: connections are kept in a queue (idleConnections). When created or released to the pool, connections are added to the end of the queue; when a client asks for a connection, the pool gives the first valid entry of the queue.
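A tiny sketch (hypothetical, not the connector's actual pool code) of the queue behavior described above: released connections go to the end of idleConnections, and getConnection takes from the front, so the least recently used connection is handed out first.

```javascript
// Minimal FIFO pool sketch: release pushes to the back,
// getConnection shifts from the front.
class MiniPool {
  constructor(conns) { this.idleConnections = [...conns]; }
  getConnection() { return this.idleConnections.shift(); } // first entry of queue
  release(conn) { this.idleConnections.push(conn); }       // back of the queue
}

const pool = new MiniPool(['c1', 'c2', 'c3']);
const a = pool.getConnection(); // 'c1'
pool.release(a);                // queue is now c2, c3, c1
console.log(pool.getConnection()); // c2 -- c1 waits its turn again
```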

Comment by Diego Dupin [ 2023-02-14 ]

I think I've finally found the problem: could you share the pool configuration to confirm the option `resetAfterUse` is enabled?
The problem here is that the prepare cache is not emptied, even though server side all prepared statements are released.
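A sketch of the diagnosis above (hypothetical names, not the connector's actual code): when a connection is reset, the server drops all of its prepared statements, so the client-side prepare cache must be emptied too; otherwise later executes reuse statement ids the server no longer knows.

```javascript
// Connection with a client-side prepare cache and a fake view of
// the server-side statement registry.
class CachedConnection {
  constructor() { this.prepareCache = new Map(); this.serverStmts = new Set(); }
  prepare(sql) {
    if (!this.prepareCache.has(sql)) {
      const id = this.prepareCache.size + 1;
      this.serverStmts.add(id);
      this.prepareCache.set(sql, id);
    }
    return this.prepareCache.get(sql);
  }
  reset() {
    this.serverStmts.clear();  // server side: all prepared statements released
    this.prepareCache.clear(); // the fix: client cache must be emptied too
  }
  execute(sql) {
    const id = this.prepare(sql);
    if (!this.serverStmts.has(id)) {
      throw new Error(`Unknown prepared statement handler (${id})`);
    }
    return 'ok';
  }
}

const conn = new CachedConnection();
conn.execute('CALL p()');
conn.reset();
console.log(conn.execute('CALL p()')); // 'ok' because the cache was cleared with the reset
```

Without the `prepareCache.clear()` in `reset()`, the second execute would reuse the stale cached id and fail with the same "Unknown prepared statement handler" error the reporter sees.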

Comment by Diego Dupin [ 2023-02-14 ]

The correction is done with this commit on the develop branch.

Since 3.1.0 is in the process of being released, this correction will only be available in 3.1.1.

Generated at Thu Feb 08 03:23:47 UTC 2024 using Jira 8.20.16#820016-sha1:9d11dbea5f4be3d4cc21f03a88dd11d8c8687422.