MariaDB Server: MDEV-32168

slave_error_param condition is never checked from the wait_for_slave_param.inc

Details

    Description

      This is a very old bug, dating from 2010.
      The include file wait_for_slave_param.inc contains a check for slave_error_param.
      The calling test either sets this value to the name of a SHOW SLAVE STATUS column to watch, or, if it is left empty, the include file defaults it itself:

        --let $slave_error_param= 1
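
      For context, a typical caller sets the wait parameters and, optionally, the name of a SHOW SLAVE STATUS error column to watch. The values below are illustrative only, not taken from any specific test:

```
# Hypothetical caller: wait until the SQL thread stops, and abort
# early if Last_SQL_Errno becomes non-zero in SHOW SLAVE STATUS.
--let $slave_param= Slave_SQL_Running
--let $slave_param_value= No
--let $slave_error_param= Last_SQL_Errno
--source include/wait_for_slave_param.inc
```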
      

      Later in the same file the error condition is supposed to be validated, but the guard tests for an empty value. Since slave_error_param is always non-empty at that point (either the test set it, or the default above did), the body is never entered:

        # Check if an error condition is reached.
        if (!$slave_error_param)
        {
          --let $_show_slave_status_error_value= query_get_value("SHOW SLAVE STATUS", $slave_error_param, 1)
          if ($_show_slave_status_error_value)
          {
            --echo **** ERROR: $slave_error_param = '$_show_slave_status_error_value' while waiting for slave parameter $slave_param $_slave_param_comparison $slave_param_value ****
            --source include/show_rpl_debug_info.inc
            --die Error condition reached in include/wait_for_slave_param.inc
          }
        }
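
      A minimal sketch of the presumed fix, assuming the guard was simply inverted relative to the default-to-1 logic above (the actual patch in PR 2762 may differ):

```
# Run the error check whenever the caller named a real SHOW SLAVE STATUS
# column, i.e. whenever $slave_error_param is not the dummy default (1).
if ($slave_error_param != 1)
{
  --let $_show_slave_status_error_value= query_get_value("SHOW SLAVE STATUS", $slave_error_param, 1)
  if ($_show_slave_status_error_value)
  {
    --echo **** ERROR: $slave_error_param = '$_show_slave_status_error_value' while waiting for slave parameter $slave_param $_slave_param_comparison $slave_param_value ****
    --source include/show_rpl_debug_info.inc
    --die Error condition reached in include/wait_for_slave_param.inc
  }
}
```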
      

      The task consists of:
      1. Make the error condition actually be checked.
      2. With the bug fixed, validate the tests that use this parameter; there are about 20 of them:

          binlog_encryption.rpl_parallel
          binlog_encryption.rpl_mixed_binlog_max_cache_size
          binlog_encryption.rpl_mixed_binlog_max_cache_size
          binlog_encryption.rpl_parallel
          binlog_encryption.encrypted_master_switch_to_unencrypted_coords
          binlog_encryption.encrypted_master_switch_to_unencrypted_gtid
          binlog_encryption.rpl_parallel
          binlog_encryption.rpl_parallel_ignored_errors
          binlog_encryption.rpl_parallel
          binlog_encryption.rpl_parallel
          binlog_encryption.rpl_parallel_ignored_errors
          binlog_encryption.encrypted_master_switch_to_unencrypted_coords
          binlog_encryption.rpl_parallel
          binlog_encryption.encrypted_master_switch_to_unencrypted_gtid
          multi_source.gtid_slave_pos
          multi_source.gtid_slave_pos
          rpl.rpl_incompatible_heartbeat rpl.rpl_innodb_mixed_dml rpl.rpl_innodb_mixed_ddl rpl.rpl_mixed_binlog_max_cache_size rpl.rpl_gtid_grouping
          rpl.mdev-31448_kill_ooo_finish_optimistic
          rpl.rpl_mixed_binlog_max_cache_size
          rpl.mdev-31448_kill_ooo_finish_optimistic
          rpl.rpl_domain_id_filter_io_crash
          rpl.rpl_cant_read_event_incident rpl.rpl_connection
          rpl.rpl_domain_id_filter_io_crash
          rpl.rpl_connection
          rpl.rpl_domain_id_filter_restart rpl.rpl_drop_db rpl.rpl_drop rpl.rpl_gtid_errorlog rpl.rpl_grant rpl.rpl_dump_request_retry_warning rpl.rpl_get_lock rpl.rpl_drop_view rpl.rpl_gtid_delete_domain rpl.rpl_function_defaults rpl.rpl_events
          rpl.rpl_gtid_errorlog
          rpl.rpl_domain_id_filter_io_crash
          rpl.rpl_parallel_kill
          rpl.rpl_domain_id_filter_io_crash
          rpl.rpl_foreign_key_innodb rpl.rpl_gtid_errorhandling
          rpl.rpl_parallel_kill
          rpl.rpl_gtid_strict rpl.rpl_mdev382 rpl.rpl_loaddata rpl.rpl_mdev_17614
          rpl.rpl_gtid_errorhandling
          rpl.rpl_mdev_17614
          rpl.rpl_gtid_reconnect rpl.rpl_gtid_startpos rpl.rpl_gtid_sort
          rpl.rpl_gtid_startpos
          rpl_sql_thd_start_errno_cleared
      

      • This error was reported on Zulip: https://mariadb.zulipchat.com/#narrow/stream/118759-general/topic/Replication.20questions/near/387601300

          Activity

            anel Anel Husakovic created the issue.
            anel Anel Husakovic added a comment (edited):

            Hi knielsen, I'm working on this bug in PR 2762 and I have problems with 2 test cases (as asked on Zulip).
            Can I please get your review?
            Best regards,
            Anel

            bnestere Brandon Nesterenko added a comment:

            Reassigning this to Elkin to sign off.
            Elkin Andrei Elkin added a comment:

            Looks good, thanks!
            anel Anel Husakovic added a comment:

            When the merge happens, here are the changes per version:
            https://github.com/an3l/server/tree/bb-10.5-anel-rpl-fix-assertion
            https://github.com/an3l/server/tree/bb-10.6-anel-rpl_fix_assertion
            https://github.com/an3l/server/tree/bb-10.9-anel-rpl_fix_assert
            https://github.com/an3l/server/tree/bb-10.10-anel-rpl_fix_assertion
            Assigning to Brandon to merge the specific versions. Thanks everyone.
            anel Anel Husakovic added a comment:

            Added to the upstream updated branches, applying the reviews in order to run checks for the other workers. This should be checked after merging PR 2762:
            https://buildbot.mariadb.net/buildbot/grid?category=main&category=package&branch=bb-10.5-anel-rpl-fix-assertion
            https://buildbot.mariadb.net/buildbot/grid?category=main&category=package&branch=bb-10.6-anel-rpl-fix-assertion
            https://buildbot.mariadb.net/buildbot/grid?category=main&category=package&branch=bb-10.9-anel-rpl-fix-assertion
            https://buildbot.mariadb.net/buildbot/grid?category=main&category=package&branch=bb-10.10-anel-rpl-fix-assertion

            anel Anel Husakovic added a comment:

            Pushed to 10.4 with commit a7d186a17d35b2651f.
            Added branches in the server tree for the proper merges into 10.5+.
            Thanks to the reviewers.
            anel Anel Husakovic closed the issue as Fixed.
            Fix Version/s: 10.4.33, 10.5.24, 10.6.17, 10.11.7, 11.0.5, 11.1.4, 11.2.3
