MariaDB Server / MDEV-11064

Restrict the speed of reading binlog from Master

Details

    Description

In some cases the speed of reading the binlog from the master is very high, especially when setting up a new replica, and this can cause heavy traffic on the master.
To solve the problem, we introduce a new variable "read_binlog_speed_limit" that limits the binlog read rate of the slave I/O thread.

{code:c++}
handle_io_slave:
  last_add_time = now()
  tokens = read_binlog_speed_limit  // the bucket starts with some initial tokens
  while (true) {
    event = read_event()
    if (read_binlog_speed_limit > 0) {
      if (tokens > TOKEN_MAX) {
        tokens = TOKEN_MAX
        last_add_time = now()
      }
      // add tokens for the time elapsed since the last refill
      tokens = tokens + (now() - last_add_time) * read_binlog_speed_limit
      last_add_time = now()
      // if there are not enough tokens, sleep until there are
      if (tokens < event.real_network_read_len)
        sleep((event.real_network_read_len - tokens) / read_binlog_speed_limit)
      // consume the tokens for this event
      tokens = tokens - event.real_network_read_len
    }
    write_event(event)
  }
{code}
      

It also works when slave_compressed_protocol is on, because the limit is applied to the number of bytes actually read from the network.
However, it may not throttle well when a single binlog event is very large, since the whole event is read before the sleep happens.
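The pseudocode above can be sketched as a small stand-alone token bucket in C++. This is only an illustration of the throttling technique, not the actual server code; the names (TokenBucket, acquire) and the microsecond clock are assumptions made for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Minimal token-bucket sketch of the throttle described above.
// rate is in bytes per microsecond; burst plays the role of TOKEN_MAX.
class TokenBucket {
public:
    TokenBucket(double rate_per_us, double burst)
        : rate_(rate_per_us), burst_(burst), tokens_(burst), last_us_(0) {}

    // Consume `bytes` tokens at time `now_us` and return how many
    // microseconds the caller should sleep before sending the event.
    uint64_t acquire(double bytes, uint64_t now_us) {
        // Refill for the time elapsed since the last refill, capped at burst.
        tokens_ = std::min(burst_, tokens_ + (now_us - last_us_) * rate_);
        last_us_ = now_us;
        uint64_t sleep_us = 0;
        if (tokens_ < bytes)
            sleep_us = static_cast<uint64_t>((bytes - tokens_) / rate_);
        // Tokens may go negative; the debt is repaid by the caller sleeping.
        tokens_ -= bytes;
        return sleep_us;
    }

private:
    double rate_;      // bytes per microsecond
    double burst_;     // bucket capacity (TOKEN_MAX in the pseudocode)
    double tokens_;    // current token count, may be negative
    uint64_t last_us_; // time of the last refill
};
```

With a rate of 1 byte/us and a burst of 100 bytes, a 50-byte event passes immediately, a following 100-byte event forces a 50 us sleep, and after 100 us the bucket has refilled enough for a small event to pass again.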


          Activity

            svoj Sergey Vojtovich created issue -
            svoj Sergey Vojtovich made changes -
            Field Original Value New Value

            knielsen Kristian Nielsen added a comment - Pushed to 10.2.3, thanks!
            knielsen Kristian Nielsen made changes -
            Fix Version/s 10.2.3 [ 22115 ]
            Fix Version/s 10.2 [ 14601 ]
            Resolution Fixed [ 1 ]
            Status Open [ 1 ] Closed [ 6 ]
            elenst Elena Stepanova made changes -
            serg Sergei Golubchik made changes -
            serg Sergei Golubchik made changes -
            serg Sergei Golubchik made changes -
            Workflow MariaDB v3 [ 77844 ] MariaDB v4 [ 132970 ]

            People

              knielsen Kristian Nielsen
              svoj Sergey Vojtovich
Votes: 0
Watchers: 2

