[CONJ-741] When there is only one slave and one master, Connector/J connects only to the master in some cases Created: 2019-10-25 Updated: 2020-05-29 |
|
| Status: | Open |
| Project: | MariaDB Connector/J |
| Component/s: | aurora, Failover |
| Affects Version/s: | 2.2.1 |
| Fix Version/s: | None |
| Type: | Bug | Priority: | Major |
| Reporter: | Hui Dong | Assignee: | Diego Dupin |
| Resolution: | Unresolved | Votes: | 0 |
| Labels: | None | ||
| Environment: |
AWS Aurora with 1 master node and 1 slave node |
||
| Description |
|
In the scenarios below, Connector/J ends up connecting only to the master.

Scenario 1:
2. Start the Spring Boot program using the URL below to connect to Aurora.
3. Remove the slave node from the Aurora cluster.
4. Add the slave node back into the Aurora cluster.

Scenario 2:
2. Start the Spring Boot program using the URL below to connect to Aurora.
3. Add the slave node back into the Aurora cluster.

Cause: |
| Comments |
| Comment by Hui Dong [ 2019-12-02 ] | ||
|
We found that we had set pool=true in the JDBC URL while also using the Tomcat JDBC pool with Spring Boot, so our program creates two connection pools. I think this is what prevents the program from reconnecting to the database normally when a new node is added to the cluster. | ||
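A minimal sketch of the double-pooling this comment describes, with a hypothetical Aurora endpoint (the reporter's actual URL is not shown in the ticket): Connector/J's `pool=true` URL option enables the driver's internal pool, so wrapping such a URL in an external pool like Tomcat's jdbc-pool stacks two pools on top of each other.

```java
public class PoolConfig {
    // Hypothetical cluster endpoint; the reporter's real URL is not preserved here.
    static final String HOST = "mycluster.cluster-example.us-east-1.rds.amazonaws.com";

    // Build an aurora-mode JDBC URL. pool=true turns on Connector/J's internal
    // pool; when an external pool (Tomcat jdbc-pool, HikariCP, ...) is already
    // managing connections, the driver's own pooling should normally stay
    // disabled so that only one layer caches connections.
    static String jdbcUrl(boolean driverPooling) {
        return "jdbc:mariadb:aurora://" + HOST + "/mydb?pool=" + driverPooling;
    }

    public static void main(String[] args) {
        // URL to hand to the external pool: driver pooling off.
        System.out.println(jdbcUrl(false));
    }
}
```

With `pool=false` in the URL, only the external pool decides when connections are created and discarded, so topology changes are picked up when that pool recycles its connections.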
| Comment by Peter Lebedev [ 2020-05-29 ] | ||
|
We also have a similar problem with the 2.5.4 driver that we did not have with the 1.7.2 driver. Here is our use case: per the documentation, when aurora mode is used, the driver is supposed to pick a reader node if a connection is explicitly set to read-only:
We had been running this task for about two years on the 1.7.2 driver without any issues. This week we decided to test 2.5.4; when we changed only the driver and restarted Tomcat, we got a burst of errors because of (5), and after 10s of failed executions it started to execute normally, with occasional failures still caused by (5). When we switched back to the 1.7.2 driver, all of these issues went away. It looks like there has been a regression since that version, so the driver does not always pick a slave for read-only connections. |
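The pattern the commenter describes is the standard JDBC read-only switch; a minimal sketch, assuming a hypothetical `AURORA_URL` environment variable (the commenter's own snippet is not part of this ticket):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReadOnlyRouting {
    public static void main(String[] args) throws SQLException {
        // Hypothetical configuration: supply a real aurora-mode URL
        // (jdbc:mariadb:aurora://...) via the environment to check live.
        String url = System.getenv("AURORA_URL");
        if (url == null) {
            System.out.println("AURORA_URL not set; skipping live check");
            return;
        }
        try (Connection conn = DriverManager.getConnection(url)) {
            // In aurora mode the driver is expected to move this connection
            // to a reader node once read-only is requested.
            conn.setReadOnly(true);
            try (Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT @@innodb_read_only")) {
                rs.next();
                // On Aurora MySQL this is 1 on a reader and 0 on the writer,
                // so a 0 here would indicate the misrouting described above.
                System.out.println("innodb_read_only = " + rs.getInt(1));
            }
        }
    }
}
```

Checking `@@innodb_read_only` after `setReadOnly(true)` is one way to observe whether the driver actually landed on a replica.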