MariaDB MaxScale / MXS-2700

MaxScale needs to check how up-to-date the slave is before moving traffic to it.


    Details

    • Type: New Feature
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.5.0
    • Component/s: mariadbmon
    • Labels: None

      Description

      If, for some reason, the slave (node1) cannot connect to the master and keeps retrying,
      we see the following state in SHOW SLAVE STATUS:

      Slave_IO_Running: Connecting
      Slave_SQL_Running: Yes
      ..
      Seconds_Behind_Master: NULL
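The broken state above can be detected from the SHOW SLAVE STATUS fields alone: a slave is only a safe read target when both replication threads report 'Yes'. A minimal sketch of such a check (a hypothetical helper, not MaxScale's actual implementation), assuming the status row is available as a dict:

```python
def slave_is_healthy(status: dict) -> bool:
    """Return True only when both replication threads report 'Yes'.

    A Slave_IO_Running value of 'Connecting' (as in this report) means
    the I/O thread cannot reach the master, so the slave must not be
    treated as a valid read target even though Slave_SQL_Running is 'Yes'.
    """
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

# The state from this report: I/O thread stuck in 'Connecting'
broken = {"Slave_IO_Running": "Connecting",
          "Slave_SQL_Running": "Yes",
          "Seconds_Behind_Master": None}

# A normally replicating slave for comparison
healthy = {"Slave_IO_Running": "Yes",
           "Slave_SQL_Running": "Yes",
           "Seconds_Behind_Master": 0}
```

With this check, node1 in the table below would be excluded from routing despite still being reachable over the network.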

      In MaxScale's 'list servers' output, node1's status still appears as 'Slave, Running', and MaxScale keeps routing connections to this slave even though its replication is broken.

      ┌────────┬─────────────┬──────┬─────────────┬─────────────────┬─────────────┐
      │ Server │ Address     │ Port │ Connections │ State           │ GTID        │
      ├────────┼─────────────┼──────┼─────────────┼─────────────────┼─────────────┤
      │ node1  │ 10.66.21.38 │ 6603 │ 17          │ Slave, Running  │ 1-2-3312917 │
      ├────────┼─────────────┼──────┼─────────────┼─────────────────┼─────────────┤
      │ node2  │ 10.66.21.37 │ 6603 │ 49          │ Master, Running │ 1-2-3319973 │
      └────────┴─────────────┴──────┴─────────────┴─────────────────┴─────────────┘
      

      Since node1's replication is already broken, MaxScale should report this node's state as 'Running' only (not 'Slave, Running'), and no connections should be routed to this slave. The current behaviour needs to be changed. In the long run, we probably need to implement some kind of grading system, where the monitor or router checks how "up-to-date" the slave is.
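The grading idea could look something like the sketch below: classify a slave as broken, lagging, or usable, using Seconds_Behind_Master against a configurable lag threshold. This is an illustrative assumption about what such a system might do, not MaxScale's design; the function name, the `max_lag` parameter, and the grade labels are all hypothetical.

```python
def grade_slave(status: dict, max_lag: int = 10) -> str:
    """Grade a slave's fitness as a read target (illustrative only).

    Returns 'broken' when either replication thread is down or the lag
    is unknown (NULL), 'lagging' when replication runs but the slave is
    more than max_lag seconds behind, and 'usable' otherwise.
    """
    if (status.get("Slave_IO_Running") != "Yes"
            or status.get("Slave_SQL_Running") != "Yes"):
        return "broken"
    lag = status.get("Seconds_Behind_Master")
    if lag is None:
        # Replication threads claim to run, but lag cannot be measured.
        return "broken"
    return "usable" if lag <= max_lag else "lagging"
```

A router could then send reads only to slaves graded 'usable', and fall back to 'lagging' slaves only when nothing better is available.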


            People

            Assignee:
            toddstoffel Todd Stoffel
            Reporter:
            niljoshi Nilnandan Joshi
            Votes:
            0
            Watchers:
            3

              Dates

              Created:
              Updated:
              Resolved: