MariaDB ColumnStore / MCOL-6147

CMAPI fails to remove a node


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version: 23.10.4
    • Fix Version: 23.10.6
    • Component: cmapi
    • Labels: None
    • Sprint: 2025-7, 2025-8

    Description

      I tried to delete a node from the cluster using `mcs cluster node remove`. The command reported no error, but the node was not removed from the cluster. It looks like the node was not removed completely: instead it was moved to DesiredNodes, after which failover saw it online and added it back.
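The suspected sequence can be sketched as a toy simulation. Everything here (the node names, the helper functions, the set-based handling of DesiredNodes) is an illustrative assumption based on the behavior described above, not actual cmapi code:

```python
# Hypothetical sketch of the suspected bug: "remove" moves the node to
# DesiredNodes instead of dropping it, and failover re-adds it.
active_nodes = {"node1", "node2", "node3"}
desired_nodes = set()

def remove_node(node):
    """Suspected buggy removal: the node leaves the active set but
    lands in DesiredNodes instead of being removed everywhere."""
    active_nodes.discard(node)
    desired_nodes.add(node)  # bug: should be dropped entirely

def is_online(node):
    # In the real cluster this would be a health check; the removed
    # node's services were still running, so it reports online here.
    return True

def failover_pass():
    """Failover sees an online node in DesiredNodes and re-adds it."""
    for node in list(desired_nodes):
        if is_online(node):
            desired_nodes.discard(node)
            active_nodes.add(node)

remove_node("node3")
failover_pass()
print(sorted(active_nodes))  # node3 is back: ['node1', 'node2', 'node3']
```

This reproduces the observed symptom: the remove call succeeds, yet after one failover pass the cluster membership is unchanged.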

      Before the removal:

      $ sudo mcs status 
      {                                                                                                                                                                                                                                
        "timestamp": "2025-08-20 21:32:11.817506",                                                                    
        "ip-172-31-47-126.us-west-2.compute.internal": {
          "timestamp": "2025-08-20 21:32:11.828663",
          "uptime": 13872,                              
          "dbrm_mode": "master",                    
          "cluster_mode": "readwrite", 
          "dbroots": [          
            "1"                      
          ],                       
          "module_id": 1,          
          "services": [                
            {            
              "name": "StorageManager",   
              "pid": 8841        
            },                         
            {             
              "name": "workernode",       
              "pid": 8879                 
            },                     
            {
              "name": "controllernode",
              "pid": 8890
            },
            {                                                                                                         
              "name": "PrimProc",                                                                                     
              "pid": 8907
            },
            {                                       
              "name": "WriteEngineServer",                                                                            
              "pid": 9034
            },
            {                                                                                                                                                                                                                          
              "name": "DMLProc",                                                                                      
              "pid": 9043
            },                                    
            {                                           
              "name": "DDLProc",                    
              "pid": 9074 
            }                   
          ]                          
        },            
        "ip-172-31-43-121.us-west-2.compute.internal": {
          "timestamp": "2025-08-20 21:32:11.911411",
          "uptime": 13871,
          "dbrm_mode": "slave",
          "cluster_mode": "readonly",
          "dbroots": [                 
            "2"           
          ],  
          "module_id": 2,
          "services": [            
            {             
              "name": "StorageManager",
              "pid": 7657
            },                         
            {             
              "name": "workernode",
              "pid": 7694
            },
            {                                           
              "name": "PrimProc",                   
              "pid": 7720 
            },                 
            {
              "name": "WriteEngineServer",
              "pid": 7838
            }
          ]
        },
        "ip-172-31-45-153.us-west-2.compute.internal": {
          "timestamp": "2025-08-20 21:32:17.008101",
          "uptime": 13878,
          "dbrm_mode": "slave",
          "cluster_mode": "readonly",
          "dbroots": [
            "3"
          ],
          "module_id": 3,
          "services": [
            {
              "name": "StorageManager",
              "pid": 7324
            },
            {
              "name": "workernode",
              "pid": 7365
            },
            {
              "name": "PrimProc",
              "pid": 7387
            },
            {
              "name": "WriteEngineServer",
              "pid": 7505
            }
          ]
        },
        "num_nodes": 3
      }
      

      Removal:

      $ sudo mcs cluster node remove --node ip-172-31-45-153.us-west-2.compute.internal
      [
        {
          "timestamp": "2025-08-20 21:33:02.005740",
          "node_id": "ip-172-31-45-153.us-west-2.compute.internal"
        }
      ]
      

      After the removal (the node is still in place and the number of nodes did not change):

      {
        "timestamp": "2025-08-20 21:33:29.397097",
        "ip-172-31-47-126.us-west-2.compute.internal": {
          "timestamp": "2025-08-20 21:33:29.403882",
          "uptime": 13950,
          "dbrm_mode": "master",
          "cluster_mode": "readonly",
          "dbroots": [
            "1"
          ],
          "module_id": 1,
          "services": [
            {
              "name": "StorageManager",
              "pid": 14381
            },
            {
              "name": "workernode",
              "pid": 14480
            },
            {
              "name": "controllernode",
              "pid": 14491
            }
          ]
        },
        "ip-172-31-43-121.us-west-2.compute.internal": {
          "timestamp": "2025-08-20 21:33:29.467990",
          "uptime": 13949,
          "dbrm_mode": "slave",
          "cluster_mode": "readonly",
          "dbroots": [
            "2"
          ],
          "module_id": 2,
          "services": [
            {
              "name": "StorageManager",
              "pid": 12650
            },
            {
              "name": "workernode",
              "pid": 12690
            },
            {
              "name": "PrimProc",
              "pid": 12712
            },
            {
              "name": "WriteEngineServer",
              "pid": 12830
            }
          ]
        },
        "ip-172-31-45-153.us-west-2.compute.internal": {
          "timestamp": "2025-08-20 21:33:29.527868",
          "uptime": 13950,
          "dbrm_mode": "slave",
          "cluster_mode": "readonly",
          "dbroots": [
            "3"
          ],
          "module_id": 3,
          "services": [
            {
              "name": "StorageManager",
              "pid": 12037
            },
            {
              "name": "workernode",
              "pid": 12228
            },
            {
              "name": "PrimProc",
              "pid": 12250
            },
            {
              "name": "WriteEngineServer",
              "pid": 12368
            }
          ]
        },
        "num_nodes": 3
      }
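One way to confirm the symptom is to parse the `mcs status` JSON and check whether the removed host and `num_nodes` changed. The snippet below runs against a trimmed, inline copy of the post-removal output above (abridged for illustration):

```python
import json

# Trimmed copy of the post-removal `mcs status` output shown above;
# per-node details are omitted for brevity.
status_after = json.loads("""
{
  "timestamp": "2025-08-20 21:33:29.397097",
  "ip-172-31-47-126.us-west-2.compute.internal": {"module_id": 1},
  "ip-172-31-43-121.us-west-2.compute.internal": {"module_id": 2},
  "ip-172-31-45-153.us-west-2.compute.internal": {"module_id": 3},
  "num_nodes": 3
}
""")

removed = "ip-172-31-45-153.us-west-2.compute.internal"
still_present = removed in status_after
print(still_present, status_after["num_nodes"])  # True 3: removal did not take effect
```

A successful removal would leave `still_present` False and `num_nodes` at 2.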
      

      Attachments

        1. cmapi_server.log (154 kB), uploaded by Alexander Presniakov

        People

          Assignee: Alexander Presniakov
          Reporter: Alexander Presniakov