Details
Type: Bug
Status: Closed
Priority: Major
Resolution: Fixed
Affects Version/s: 10.4 (EOL)
Component/s: None
Description
This is a very old bug (it dates back to 2010).
The include file wait_for_slave_param.inc checks the variable slave_error_param. Its value is either set by the calling test to a SHOW SLAVE STATUS column name, or, if it is empty, the include file itself assigns a dummy value:
--let $slave_error_param= 1
Later in the same file the error check is guarded by a test for an empty value, which can never be true once the dummy value has been assigned, so the check is dead code:
# Check if an error condition is reached.
if (!$slave_error_param)
{
  --let $_show_slave_status_error_value= query_get_value("SHOW SLAVE STATUS", $slave_error_param, 1)
  if ($_show_slave_status_error_value)
  {
    --echo **** ERROR: $slave_error_param = '$_show_slave_status_error_value' while waiting for slave parameter $slave_param $_slave_param_comparison $slave_param_value ****
    --source include/show_rpl_debug_info.inc
    --die Error condition reached in include/wait_for_slave_param.inc
  }
}
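For illustration, a minimal sketch of the intended logic (an assumption drawn from the description above, not necessarily the committed patch): keep 1 as the "not set" sentinel and run the error check only when the caller supplied a real column name.

# Sketch only: assumes 1 is the "parameter not set" sentinel value.
if (!$slave_error_param)
{
  --let $slave_error_param= 1
}
# Check if an error condition is reached.
if ($slave_error_param != 1)
{
  --let $_show_slave_status_error_value= query_get_value("SHOW SLAVE STATUS", $slave_error_param, 1)
  if ($_show_slave_status_error_value)
  {
    --echo **** ERROR: $slave_error_param = '$_show_slave_status_error_value' ****
    --source include/show_rpl_debug_info.inc
    --die Error condition reached in include/wait_for_slave_param.inc
  }
}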
The task consists of:
1. Fix the bug so that the error condition is actually checked.
2. With the fix in place, validate the tests that use this parameter; there are about 20+ of them (a sketch of a typical caller follows the list):
binlog_encryption.rpl_parallel
binlog_encryption.rpl_mixed_binlog_max_cache_size
binlog_encryption.rpl_mixed_binlog_max_cache_size
binlog_encryption.rpl_parallel
binlog_encryption.encrypted_master_switch_to_unencrypted_coords
binlog_encryption.encrypted_master_switch_to_unencrypted_gtid
binlog_encryption.rpl_parallel
binlog_encryption.rpl_parallel_ignored_errors
binlog_encryption.rpl_parallel
binlog_encryption.rpl_parallel
binlog_encryption.rpl_parallel_ignored_errors
binlog_encryption.encrypted_master_switch_to_unencrypted_coords
binlog_encryption.rpl_parallel
binlog_encryption.encrypted_master_switch_to_unencrypted_gtid
multi_source.gtid_slave_pos
multi_source.gtid_slave_pos
rpl.rpl_incompatible_heartbeat rpl.rpl_innodb_mixed_dml rpl.rpl_innodb_mixed_ddl rpl.rpl_mixed_binlog_max_cache_size rpl.rpl_gtid_grouping
rpl.mdev-31448_kill_ooo_finish_optimistic
rpl.rpl_mixed_binlog_max_cache_size
rpl.mdev-31448_kill_ooo_finish_optimistic
rpl.rpl_domain_id_filter_io_crash
rpl.rpl_cant_read_event_incident rpl.rpl_connection
rpl.rpl_domain_id_filter_io_crash
rpl.rpl_connection
rpl.rpl_domain_id_filter_restart rpl.rpl_drop_db rpl.rpl_drop rpl.rpl_gtid_errorlog rpl.rpl_grant rpl.rpl_dump_request_retry_warning rpl.rpl_get_lock rpl.rpl_drop_view rpl.rpl_gtid_delete_domain rpl.rpl_function_defaults rpl.rpl_events
rpl.rpl_gtid_errorlog
rpl.rpl_domain_id_filter_io_crash
rpl.rpl_parallel_kill
rpl.rpl_domain_id_filter_io_crash
rpl.rpl_foreign_key_innodb rpl.rpl_gtid_errorhandling
rpl.rpl_parallel_kill
rpl.rpl_gtid_strict rpl.rpl_mdev382 rpl.rpl_loaddata rpl.rpl_mdev_17614
rpl.rpl_gtid_errorhandling
rpl.rpl_mdev_17614
rpl.rpl_gtid_reconnect rpl.rpl_gtid_startpos rpl.rpl_gtid_sort
rpl.rpl_gtid_startpos
rpl_sql_thd_start_errno_cleared
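For context, a typical caller sets the error parameter to a SHOW SLAVE STATUS column before sourcing the include file. The fragment below is hypothetical (the column and values are chosen for illustration, not taken from a specific test):

# Hypothetical caller: wait until the SQL thread is running, but abort
# early if Last_SQL_Errno becomes non-zero in SHOW SLAVE STATUS.
--let $slave_param= Slave_SQL_Running
--let $slave_param_value= Yes
--let $slave_error_param= Last_SQL_Errno
--source include/wait_for_slave_param.inc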
- This error was reported on Zulip: https://mariadb.zulipchat.com/#narrow/stream/118759-general/topic/Replication.20questions/near/387601300
Issue Links
- links to: PR 2762 (Web Link)
Activity
Field | Original Value | New Value |
---|---|---|
Description | (initial description, as above) | (Zulip link added) |
Status | Open [ 1 ] | In Progress [ 3 ] |
Description | (previous text) | (minor formatting fix) |
Fix Version/s | 10.4 [ 22408 ] | |
Fix Version/s | 10.4.32 [ 29300 ] |
Assignee | Anel Husakovic [ anel ] | Kristian Nielsen [ knielsen ] |
Status | In Progress [ 3 ] | In Review [ 10002 ] |
Remote Link | This issue links to "PR 2762 (Web Link)" [ 35958 ] |
Assignee | Kristian Nielsen [ knielsen ] | Anel Husakovic [ anel ] |
Assignee | Anel Husakovic [ anel ] | Brandon Nesterenko [ JIRAUSER48702 ] |
Assignee | Brandon Nesterenko [ JIRAUSER48702 ] | Andrei Elkin [ elkin ] |
Assignee | Andrei Elkin [ elkin ] | Anel Husakovic [ anel ] |
Status | In Review [ 10002 ] | Stalled [ 10000 ] |
Assignee | Anel Husakovic [ anel ] | Brandon Nesterenko [ JIRAUSER48702 ] |
Fix Version/s | 10.4.33 [ 29516 ] | |
Fix Version/s | 10.4 [ 22408 ] | |
Assignee | Brandon Nesterenko [ JIRAUSER48702 ] | Anel Husakovic [ anel ] |
Resolution | Fixed [ 1 ] | |
Status | Stalled [ 10000 ] | Closed [ 6 ] |
Fix Version/s | 10.5.24 [ 29517 ] | |
Fix Version/s | 10.6.17 [ 29518 ] | |
Fix Version/s | 10.11.7 [ 29519 ] | |
Fix Version/s | 11.0.5 [ 29520 ] | |
Fix Version/s | 11.1.4 [ 29024 ] | |
Fix Version/s | 11.2.3 [ 29521 ] |
Hi knielsen, I'm working on this bug in PR 2762 and I'm having problems with two test cases (as I asked about on Zulip).
Can I please get your review?
Best regards,
Anel