MDEV-6089: MySQL WL#7305 "Improve MDL scalability by using lock-free hash"

Details

    • Type: Task
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Fix Version/s: 10.1.4
    • Component/s: Locking
    • Labels: None

    Description

      revno: 7249
      committer: Dmitry Lenev <Dmitry.Lenev@oracle.com>
      branch nick: mysql-trunk-wl7305
      timestamp: Fri 2014-01-10 11:53:41 +0400
      message:
        WL#7305 "Improve MDL scalability by using lock-free hash".
       
        The main benefit of this patch is that it opens the way for
        implementing WL#7306, which brings a significant improvement to
        MDL performance/scalability in some scenarios.
       
        The basic idea behind this patch is to change the MDL_map
        implementation to use LF_HASH instead of a partitioned HASH
        container in which each partition is protected by its own mutex.
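
        For concreteness, here is a minimal sketch of what the switch
        looks like against the mysys lf.h interface (lf_hash_init and
        friends do exist in the MariaDB tree; the mdl_locks and
        mdl_locks_key names and the toy key layout below are
        illustrative, not the server's actual code):

          #include <my_global.h>
          #include <my_sys.h>
          #include <lf.h>

          /* One process-wide lock-free hash replaces the array of
             mutex-protected HASH partitions. */
          static LF_HASH mdl_locks;

          /* Same get_key convention as the classic HASH container:
             return a pointer to the key bytes inside the element and
             store their length. Here the whole element is the key. */
          static uchar *mdl_locks_key(const uchar *record, size_t *length,
                                      my_bool not_used __attribute__((unused)))
          {
            *length= sizeof(uint);
            return (uchar *) record;
          }

          static void mdl_map_init(void)
          {
            /* element size, flags, key offset/length, get_key, charset */
            lf_hash_init(&mdl_locks, sizeof(uint), LF_HASH_UNIQUE,
                         0, 0, mdl_locks_key, &my_charset_bin);
          }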
       
        Nice results of such a change:
       
        - Since LF_HASH is lock-free on systems with atomics support,
          MDL_map_partition::m_mutex and the potential concurrency
          bottleneck associated with it are gone.
        - For the same reason it doesn't make sense to partition LF_HASH,
          so we returned to the scheme with one hash for the whole
          MDL_map and removed the MDL_map_partition class and the
          mdl_locks_hash_partitions start-up parameter.
        - Thanks to the fact that LF_HASH is integrated with LF_ALLOCATOR
          and uses per-thread hazard pointers to prevent objects in the
          hash from being deleted immediately after they are looked up
          (see the sketch after this list), we were able to get rid of
          all the MDL_map/MDL_lock machinery responsible for reference
          counting (i.e. MDL_lock::m_ref_usage/m_ref_release/m_version).
        - We also no longer need MDL_map_partition::m_unused_locks_cache,
          as LF_ALLOCATOR has its own mechanism for caching objects that
          are expensive to create/destroy.
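
        As an illustration of the pin-based lifetime rule, a lookup now
        follows the cycle sketched below (assuming the mdl_locks hash
        from the earlier sketch; find_lock is an illustrative name, and
        the MY_ERRPTR out-of-memory return convention is my reading of
        the mysys lf_hash_search documentation):

          /* Pin-protected lookup replacing the old m_ref_usage/
             m_ref_release reference counting. */
          static void *find_lock(LF_HASH *hash,
                                 const uchar *key, size_t key_length)
          {
            /* per-thread hazard pointers */
            LF_PINS *pins= lf_hash_get_pins(hash);
            void *lock= lf_hash_search(hash, pins, key, (uint) key_length);
            if (lock == NULL || lock == MY_ERRPTR)
            {
              lf_hash_put_pins(pins);
              return NULL;                    /* not found, or OOM */
            }
            /* While the element stays pinned, LF_ALLOCATOR guarantees it
               is not freed under us even if another thread removes it
               from the hash; take whatever longer-term reference is
               needed here, then drop the pin. */
            lf_hash_search_unpin(pins);
            lf_hash_put_pins(pins);
            return lock;
          }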
       
        To support the above changes the following additional steps were taken:
       
        - Since it is tricky to use LF_HASH with objects of different
          types stored in the same LF_ALLOCATOR, we had to get rid of
          the MDL_object_lock/MDL_scoped_lock dichotomy. This was done
          by moving their differences out into an MDL_lock_strategy
          structure that the MDL_lock object references by pointer
          (see the sketch after this list).
        - To make it easier to use LF_HASH with non-trivially copyable
          objects (such as MDL_lock), a new callback, "initialize", was
          added to it. This callback finishes initialization of the
          object provided by LF_ALLOCATOR and sets the element key from
          the object passed as a parameter to lf_hash_insert.
          LF_HASH was also extended to support a user-provided hash
          function, such as the MurmurHash3 used in the MDL subsystem
          (see the toy example after this list).
        - LF_HASH and LF_ALLOCATOR initialization functions were extended
          to accept the callback functions they use as explicit parameters.
        - lf_alloc_direct_free() was fixed to call the destructor callback
          before doing my_free() on the memory belonging to the object
          being freed.
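
        The strategy split mentioned above follows the classic C pattern
        of a hand-rolled virtual table. A hypothetical sketch (the field
        and function names are assumptions, not the server's actual
        definitions):

          #include <stdbool.h>

          struct MDL_lock;

          /* Everything that used to differ between MDL_object_lock and
             MDL_scoped_lock is collected behind function pointers. */
          struct MDL_lock_strategy
          {
            bool (*can_grant)(const struct MDL_lock *lock, int type);
            bool (*needs_notification)(const struct MDL_lock *lock);
          };

          /* placeholder behavior, one static instance per former subclass */
          static bool object_can_grant(const struct MDL_lock *l, int t)
          { (void) l; (void) t; return true; }
          static bool object_needs_notification(const struct MDL_lock *l)
          { (void) l; return false; }

          static const struct MDL_lock_strategy object_lock_strategy=
          { object_can_grant, object_needs_notification };
          static const struct MDL_lock_strategy scoped_lock_strategy=
          { object_can_grant, object_needs_notification };

          /* One MDL_lock type for every namespace; behavior is chosen by
             pointing at the right static strategy instance, so all
             elements stored in LF_ALLOCATOR have the same size. */
          struct MDL_lock
          {
            const struct MDL_lock_strategy *m_strategy;
          };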
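        And a self-contained toy showing the shape of the two LF_HASH
        extensions: an "initialize" callback finishing construction of
        allocator-provided storage, and a caller-supplied hash function
        (FNV-1a below is only a stand-in for the MurmurHash3 actually
        used; none of these names are the mysys API):

          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>

          typedef struct { char key[32]; int payload; } elem_t;

          /* "initialize" callback: dst is raw storage handed out by the
             allocator (possibly recycled), src is the object passed to
             insert; only now does dst receive its key and contents. */
          static void elem_initialize(elem_t *dst, const elem_t *src)
          {
            memcpy(dst->key, src->key, sizeof dst->key);
            dst->payload= src->payload;
          }

          /* user-provided hash function (FNV-1a) */
          static uint32_t hash_bytes(const void *data, size_t len)
          {
            const unsigned char *p= data;
            uint32_t h= 2166136261u;
            while (len--) { h^= *p++; h*= 16777619u; }
            return h;
          }

          int main(void)
          {
            elem_t storage;               /* stands in for LF_ALLOCATOR memory */
            elem_t arg= { "db1/t1", 42 }; /* object passed to "insert" */
            elem_initialize(&storage, &arg);
            printf("bucket hash=%u\n",
                   (unsigned) hash_bytes(storage.key, strlen(storage.key)));
            return 0;
          }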
       
        One user-visible change was also made: the
        --metadata_locks_cache_size and --metadata_locks_hash_instances
        startup options and the corresponding system variables were
        declared deprecated, as they now have no effect.

People

    Assignee: Sergey Vojtovich (svoj)
    Reporter: Sergey Vojtovich (svoj)
