Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version/s: 2.1.13, 2.2.3
- Fix Version/s: None
Description
This seems kind of similar to MXS-1516.
When a connection is open through ReadWriteSplit and the master changes, the connection can no longer find the master, and the log contains messages like the following:
2018-03-22 17:01:23 notice : Server changed state: C1N1[172.30.0.249:3306]: master_down. [Master, Synced, Running] -> [Down]
2018-03-22 17:01:23 notice : Server changed state: C1N2[172.30.0.32:3306]: new_master. [Slave, Synced, Running] -> [Master, Synced, Running]
2018-03-22 17:02:54 notice : Server changed state: C1N1[172.30.0.249:3306]: master_up. [Down] -> [Master, Synced, Running]
2018-03-22 17:02:54 notice : Server changed state: C1N2[172.30.0.32:3306]: new_slave. [Master, Synced, Running] -> [Slave, Synced, Running]
2018-03-22 17:03:04 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
2018-03-22 17:03:05 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
2018-03-22 17:03:06 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
2018-03-22 17:03:08 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
2018-03-22 17:03:09 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
2018-03-22 17:03:12 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
2018-03-22 17:03:14 error : (4) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING SLAVE
It also seems to affect connections when a master goes into maintenance mode:
2018-03-12 14:04:54 error : (165055) [readwritesplit] Could not find master among the backend servers. Previous master's state : RUNNING MAINTENANCE
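For reference, the maintenance-mode variant can be triggered by putting the current master into maintenance while a session is open. A rough sketch, assuming the usual admin clients are available (server name C1N1 taken from the config below, credentials and interface details depend on the installation):

# Put the current master into maintenance mode; open ReadWriteSplit sessions
# then log the same "Could not find master" error.
maxadmin set server C1N1 maintenance
# or, with MaxScale 2.2:
maxctrl set server C1N1 maintenance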
How I reproduced:
- Open a ReadWriteSplit connection.
- Execute a query.
- Stop the current master.
- Execute a query.
- Bring the old master back up.
- Execute a query.
At the last point, you should see an error in the MaxScale log.
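As a rough client-side sketch of those steps (the host name, database user, and the use of systemctl to stop/start MariaDB are placeholder assumptions, not values from this report):

# Open a ReadWriteSplit connection through the MaxScale listener (port 3306).
mysql -h maxscale.example.com -P 3306 -u appuser -p
    SELECT 1;   -- routed normally while C1N1 is the master
# In another terminal, stop the current master (C1N1), e.g.:
#   systemctl stop mariadb
    SELECT 1;   -- still succeeds; C1N2 is promoted to master
# Bring the old master back up, e.g.:
#   systemctl start mariadb
    SELECT 1;   -- fails; the "Could not find master" error appears in the log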
The configuration I used was:
[C1N1]
type=server
address=172.30.0.249
port=3306
protocol=MySQLBackend
persistpoolmax=100
persistmaxtime=601

[C1N2]
type=server
address=172.30.0.32
port=3306
protocol=MySQLBackend
persistpoolmax=100
persistmaxtime=601

[Galera Monitor]
type=monitor
module=galeramon
servers=C1N1,C1N2
user=maxscale
passwd=password
monitor_interval=10000

[Read Listener]
type=listener
service=Splitter Service
protocol=MySQLClient
port=3306

[Splitter Service]
type=service
router=readwritesplit
servers=C1N1,C1N2
user=maxscale
passwd=password
max_slave_connections=100%
Issue Links
- duplicates MXS-359: keepalive client connection on master failover (Closed)