MDEV-5829: STOP SLAVE resets global status variables

Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 5.5.36, 10.0.9
    • Fix Version/s: 5.5.37, 10.0.10
    • Component/s: None
    • Labels: None

    Description

      Using mariadb-10.0.9-linux-x86_64.tar.gz.

      MariaDB [test]> show global status like 'com_insert';
      +---------------+--------+
      | Variable_name | Value  |
      +---------------+--------+
      | Com_insert    | 646298 |
      +---------------+--------+
      1 row in set (0.00 sec)
       
      MariaDB [test]> stop slave;
      Query OK, 0 rows affected (0.00 sec)
       
      MariaDB [test]> show global status like 'com_insert';
      +---------------+-------+
      | Variable_name | Value |
      +---------------+-------+
      | Com_insert    | 0     |
      +---------------+-------+
      1 row in set (0.00 sec)


        Activity

          The problem was introduced in 5.5 tree by the following revision:

          revno: 3601
          revision-id: wlad@montyprogram.com-20121220231237-0xv7egt3s225bx7j
          parent: timour@askmonty.org-20121220203840-ofoavsm70g8ouk0m
          committer: Vladislav Vaintroub <wlad@montyprogram.com>
          branch nick: 5.5
          timestamp: Fri 2012-12-21 00:12:37 +0100
          message:
            MDEV-3945 - do not hold LOCK_thread_count when freeing THD.
              
            The patch decreases the duration of LOCK_thread_count, so it is not held during the THD destructor and memory freeing.
            This mutex now only protects the integrity of the threads list, when removing a THD from it, and the thread_count variable.

            The add_to_status() function that updates global status during client disconnect is now correctly protected by the LOCK_status mutex.

            Benchmark: in a "non-persistent" sysbench test (oltp_ro with reconnect after each query), ~25% more connects/disconnects were measured.

          I agree that the effect is bad and must be fixed, but given that it has survived for over a year without affecting anybody heavily enough to be noticed, I am demoting it from Blocker.
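
          The symptom makes sense if you model how the server computes SHOW GLOBAL STATUS: a global accumulator plus the live threads' per-THD counters, read under LOCK_status. The sketch below is a hypothetical, minimal model (names like `ThreadStatus`, `global_base`, and `live_threads` are illustrative, not actual server code) showing how a THD that leaves the thread list without folding its counters into the global accumulator makes those counters vanish from the global view:

          ```cpp
          #include <cassert>
          #include <list>
          #include <mutex>

          // Illustrative model of per-thread status aggregation (not server code).
          struct ThreadStatus { unsigned long com_insert = 0; };

          std::mutex LOCK_status;                 // models the server's LOCK_status
          unsigned long global_base = 0;          // counters folded in by add_to_status()
          std::list<ThreadStatus*> live_threads;  // models the live-thread list

          // SHOW GLOBAL STATUS: base accumulator plus contributions of live threads.
          unsigned long global_com_insert() {
              std::lock_guard<std::mutex> g(LOCK_status);
              unsigned long total = global_base;
              for (auto* t : live_threads) total += t->com_insert;
              return total;
          }

          int main() {
              ThreadStatus slave;                 // the slave SQL thread's THD
              live_threads.push_back(&slave);
              slave.com_insert = 646298;          // work done by the slave thread
              assert(global_com_insert() == 646298);

              // Buggy teardown: the THD leaves the list without its counters
              // being added to global_base, so they simply disappear.
              {
                  std::lock_guard<std::mutex> g(LOCK_status);
                  live_threads.remove(&slave);
              }
              assert(global_com_insert() == 0);   // the "Com_insert = 0" symptom
              return 0;
          }
          ```

          Under this model, STOP SLAVE destroys a THD that had accumulated large counter values, which matches Com_insert dropping from 646298 to 0 in the report above.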

          elenst Elena Stepanova added a comment

          The problem was that global status was not updated when the slave thread's THD was deleted.

          Fix pushed into 5.5; it will be in 10.0.10 when we do the next merge from 5.5.

          monty Michael Widenius added a comment
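
          The fix described above amounts to making sure the per-thread counters are folded into the global accumulator whenever a THD is deleted, including the slave thread's. A hedged sketch, reusing the same illustrative names as before (not the server's actual code), where the fold happens in the destructor under LOCK_status:

          ```cpp
          #include <cassert>
          #include <mutex>

          // Illustrative model of the fix (hypothetical names, not server code).
          std::mutex LOCK_status;
          unsigned long global_base = 0;  // global accumulator for exited threads

          struct ThreadStatus {
              unsigned long com_insert = 0;
              // Models add_to_status() running on THD deletion: the thread's
              // counters are preserved in the global accumulator.
              ~ThreadStatus() {
                  std::lock_guard<std::mutex> g(LOCK_status);
                  global_base += com_insert;
              }
          };

          int main() {
              {
                  ThreadStatus slave;
                  slave.com_insert = 646298;
              }                                   // STOP SLAVE: THD destroyed here
              assert(global_base == 646298);      // counters survive the stop
              return 0;
          }
          ```

          With the fold in place, SHOW GLOBAL STATUS keeps counting the stopped slave's work instead of resetting to zero.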

          People

            Assignee: monty Michael Widenius
            Reporter: kolbe Kolbe Kegel (Inactive)
            Votes: 0
            Watchers: 6

