[MXS-2443] ORM Connection Pooling Not Working With Causal Reads Created: 2019-04-20 Updated: 2024-01-04 Resolved: 2019-11-05 |
|
| Status: | Closed |
| Project: | MariaDB MaxScale |
| Component/s: | readwritesplit |
| Affects Version/s: | None |
| Fix Version/s: | 2.5.0 |
| Type: | New Feature | Priority: | Major |
| Reporter: | Todd Stoffel (Inactive) | Assignee: | markus makela |
| Resolution: | Fixed | Votes: | 1 |
| Labels: | None |
| Environment: | CentOS 7.x |
| Issue Links: | |
| Epic Link: | Router Improvements |
| Sprint: | MXS-SPRINT-93 |
| Description |
|
The causal read feature of the readwritesplit router does not work if the application connecting to MaxScale uses connection pooling. For example, Sequelize is a promise-based Node.js ORM that includes connection pooling. A typical database connection is constructed like this:
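The original example was not preserved in this export; the following is a minimal sketch of a typical Sequelize-style configuration (host, port, and pool values are placeholders, not taken from the customer's setup). The key point is the pool section: several underlying connections serve one application, so a write can land on one pooled connection and a follow-up read on another.

```javascript
// Sketch of a Sequelize-style connection configuration (placeholder values).
const dbConfig = {
  host: 'maxscale.example.com', // MaxScale listener, not a database server
  port: 4006,                   // assumed readwritesplit listener port
  dialect: 'mariadb',
  pool: {
    max: 10,       // up to ten concurrent connections to MaxScale
    min: 0,
    acquire: 30000, // ms to wait for a free connection
    idle: 10000     // ms before an idle connection is released
  }
};
// new Sequelize('mydb', 'user', 'pass', dbConfig) would then create the pool;
// each query may check out a different underlying connection.
```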
In this scenario, the customer is using MaxScale 2.3.4 with MariaDB 10.3.12, and session_track_system_variables is set to include last_gtid. Causal reads work as expected when a single client connects directly to MaxScale. However, when pooling is used in the application or abstraction layer, one connection in the pool might perform the write while others perform the reads. MaxScale currently cannot track this behavior: it sees the write and the reads as separate connections, so the dependency between them is not caught by the causal read feature. |
| Comments |
| Comment by markus makela [ 2019-04-21 ] |
|
The fact that this doesn't work is expected behavior, as the feature is not meant to resolve cross-connection dependencies. As long as the same connection object handed out by the connection pool is used for all operations, the feature guarantees happens-before ordering across the whole cluster. If a different connection object is used, analogous to a second command line client, the relationship between the two is lost. Although outside the scope of the original feature, it could be enhanced so that the GTID information is stored globally. This would guarantee a relatively sequential order of events across all connections, but each write would cause all other reads to wait for it to replicate. An alternative implementation would be to probe the GTID via replication or by polling the servers, and to route reads to the master if the slaves are lagging behind. |
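The global-GTID enhancement described above can be sketched as follows. This is an illustrative model, not MaxScale code: a single shared GTID position is updated on every write (as reported via session_track_system_variables=last_gtid) and every read first waits for that position on the chosen slave using MariaDB's MASTER_GTID_WAIT function. Function names and the 10-second timeout are assumptions for the sketch.

```javascript
// Minimal sketch of global causal-read tracking (illustrative, not MaxScale code).
let globalLastGtid = ''; // shared across all client connections

// Called whenever any connection completes a write; trackedGtid comes from
// the last_gtid session variable tracked by the server.
function onWriteCompleted(trackedGtid) {
  globalLastGtid = trackedGtid;
}

// Rewrites a read so the slave first catches up to the last global write.
function buildReadQuery(sql) {
  if (globalLastGtid === '') return sql; // no writes seen yet
  return `SELECT MASTER_GTID_WAIT('${globalLastGtid}', 10); ${sql}`;
}
```

Because every read waits on the single global position, a write on one pooled connection delays reads on all other connections until it has replicated, which is exactly the trade-off noted in the comment.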
| Comment by markus makela [ 2019-04-29 ] |
|
The method described in |
| Comment by markus makela [ 2019-11-05 ] |
|
Added the new causal_reads_mode parameter, which accepts either local (the default) or global. The latter causes writes to be visible across all connections. |
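As a sketch, the new parameter would sit in a readwritesplit service section of the MaxScale configuration roughly like this (service, server, and credential names are placeholders, and the exact surrounding parameters are assumed, not taken from this ticket):

```ini
[RW-Split-Router]
type=service
router=readwritesplit
servers=server1,server2
user=maxuser
password=maxpwd
causal_reads=true
causal_reads_mode=global
```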