MariaDB ColumnStore / MCOL-5131

mcsRebuildEM - add support to calculate HWM for system cat. files.

Details

    • Type: New Feature
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: 6.2.1
    • Fix Version/s: 6.4.1
    • Component/s: None
    • Labels: None
    • Sprint: 2021-17

    Description

      System catalog files are not compressed. As a result, they lack the special header (with `lbid`, `colwidth`, and so on) that compressed files carry. We restore extents for these files from a binary blob (their initial state), and the HWM is not calculated for them because `ChunkManager` has no API to read uncompressed files block by block.
      For big databases the system catalog files grow as well, so their HWM becomes greater than in the initial state; this case should be supported.

      Activity

            dleeyh Daniel Lee (Inactive) added a comment (edited)

            Build tested (#4661)

            Executed tests using Docker containers.

            Test scenario

            create 1gb dbt database
            load database
            select count(*) from lineitem
            stop ColumnStore
            rm BRM_saves_em
            mcsRebuildEM -v
            start ColumnStore
            select count(*) from lineitem
            cpimport 1gb lineitem
            select count(*) from lineitem
            

            Dataset size: 1 GB DBT3

            1PM local storage - PASSED
            1PM S3 storage - PASSED
            3PM local storage - PASSED
            3PM S3 storage - PASSED

            Dataset size: 10 GB DBT3

            1PM local storage - FAILED *
            1PM S3 storage - FAILED *
            3PM local storage - PASSED
            3PM S3 storage - PASSED

            • Both local and S3 tests return similar error messages after running 'mcsRebuildEM -v'.
              I am not sure whether the issue is in the actual data files or in this tool.

            ...
            Setting a HWM for [OID: 3060, partition: 0, segment: 0, col width: 8, lbid:2061312, hwm: 0, isDict: 1]
            Extent is created, allocated size 8192 actual LBID 2069504
            For [OID: 3060, partition: 0, segment: 0, col width: 8, lbid:2069504, hwm: 124032, isDict: 1]
            Setting a HWM for [OID: 3060, partition: 0, segment: 0, col width: 8, lbid:2069504, hwm: 124032, isDict: 1]
            Cannot set local HWM: ExtentMap::setLocalHWM(): new HWM is past the end of the file for OID 3060; partition 0; segment 0; HWM 124032
            Completed.
            

            The two failed cases also failed in VMs, not just in Docker containers.

            dleeyh Daniel Lee (Inactive) added a comment

            Build tested (#4749)

            Tested the new build with 10 GB, 20 GB, and 50 GB datasets on both 1PM and 3PM configurations.

            Passed.


            People

              Assignee: denis0x0D Denis Khalikov (Inactive)
              Reporter: denis0x0D Denis Khalikov (Inactive)
              Votes: 0
              Watchers: 4

