Uploaded image for project: 'MariaDB ColumnStore'
  1. MariaDB ColumnStore
  2. MCOL-4218

Remove NFS/EFS/ Filestore prereq for S3 storage

Details

    • New Feature
    • Status: Stalled (View Workflow)
    • Critical
    • Resolution: Unresolved
    • 1.5.3
    • 23.10
    • None
    • 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-2

    Description

      When using storagemanager/S3 with Columnstore we want to avoid requiring additional shared storage: NFS or GlusterFS.
      There are three purposes for the shared storage:

      • store S3 meta, that is a list of object files parts of the original MCS file
      • store journal, that is a text file contains <offset, byte array> pairs that alters contents of S3 object files copies stored locally.
      • dbroot ownernership mechanism

      There is a write-up on some S3 implementation details in MCS.

      The suggested solution is to leverage an existing distributed KeyValueStorage to fulfill the purposes mentioned earlier. Distributed KVS chosen is FoundationDB.

      Attachments

        Issue Links

          Activity

            toddstoffel Todd Stoffel (Inactive) created issue -
            toddstoffel Todd Stoffel (Inactive) made changes -
            Field Original Value New Value
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Epic Link MCOL-3524 [ 79334 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 1.5.5 [ 24414 ]
            Fix Version/s 1.6 [ 24715 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Labels HA
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked lower
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked lower
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 5.5.1 [ 25030 ]
            Fix Version/s 5.5.1 [ 25030 ]
            Fix Version/s 1.5.5 [ 24414 ]
            David.Hall David Hall (Inactive) made changes -
            Fix Version/s 5.6.1 [ 25031 ]
            Fix Version/s 5.5.1 [ 25030 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Affects Version/s 1.5.3 [ 24412 ]
            Affects Version/s 1.5 [ 22800 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            I would like to create a system for syncing metadata information over the api connection.
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            I would like to create a system for syncing metadata information without the need for a third party hardware/software solution. Maybe we should consider storing the metadata in a database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            I would like to create a system for syncing metadata information without the need for a third party hardware/software solution. Maybe we should consider storing the metadata in a database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            I would like to create a system for syncing metadata information without the need for a third party hardware/software solution. Maybe we should consider storing the metadata in a database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Component/s cmapi [ 16117 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            I would like to create a system for syncing metadata information without the need for a third party hardware/software solution. Maybe we should consider storing the metadata in a database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `object` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | object |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The files are simple JSON files and we already have the JSON functionality within the server that can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The files are simple JSON files and we already have the JSON functionality within the server that can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            MariaDB JSON functions handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The files are simple JSON files and we already have the JSON functionality within the server that can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The files are simple JSON files and we already have the JSON functionality within the server that can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this write into the DB. When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this write into the DB. When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the DB. When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the DB. When written on the Primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON files and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON and we already have JSON functionality within the server we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON and we already have JSON functionality within the server that we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 6.1 [ 25201 ]
            Fix Version/s 5.6.1 [ 25031 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked lower
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 6.1.1 [ 25600 ]
            Fix Version/s 6.1 [ 25201 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Priority Major [ 3 ] Critical [ 2 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Summary Internal Metadata Sync S3 Journal Sync (aka "internal metadata synch")
            gdorman Gregory Dorman (Inactive) made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON and we already have JSON functionality within the server that we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            The subject is touched upon (without proposing a resolution) in https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit#heading=h.2bu0ywfefwgs

            Previous discussions centered around run-time synchronization of this object using plain network (like we do for Extent map). Another idea was offered recently (see below).

            An adjacent part of the effort has to be "reconstruction from data" in case of crash.

            ++++++++++++++++++++++++++++++++++ proposal from Todd +++++++++++++++++++++++++++++++++++++++++++++++++
            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON and we already have JSON functionality within the server that we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            gdorman Gregory Dorman (Inactive) made changes -
            Assignee Todd Stoffel [ toddstoffel ] Alexey Antipovsky [ JIRAUSER47594 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3 [ 498 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Rank Ranked higher
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3 [ 498 ] 2021-3, 2021-4 [ 498, 499 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Summary S3 Journal Sync (aka "internal metadata synch") S3 Journal Sync (aka "internal metadata sync")
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4 [ 498, 499 ] 2021-3, 2021-4, 2021-5 [ 498, 499, 504 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5 [ 498, 499, 504 ] 2021-3, 2021-4, 2021-5, 2021-6 [ 498, 499, 504, 509 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6 [ 498, 499, 504, 509 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7 [ 498, 499, 504, 509, 514 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7 [ 498, 499, 504, 509, 514 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8 [ 498, 499, 504, 509, 514, 521 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 6.5.1 [ 25801 ]
            Fix Version/s 6.1.1 [ 25600 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8 [ 498, 499, 504, 509, 514, 521 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9 [ 498, 499, 504, 509, 514, 521, 541 ]
            David.Hall David Hall (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9 [ 498, 499, 504, 509, 514, 521, 541 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10 [ 498, 499, 504, 509, 514, 521, 541, 549 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10 [ 498, 499, 504, 509, 514, 521, 541, 549 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11 [ 498, 499, 504, 509, 514, 521, 541, 549, 567 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11 [ 498, 499, 504, 509, 514, 521, 541, 549, 567 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-13 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 577 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-13 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 577 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Rank Ranked lower
            gdorman Gregory Dorman (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked lower
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Rank Ranked lower
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            gdorman Gregory Dorman (Inactive) made changes -
            Fix Version/s 6.3.2 [ 27303 ]
            Fix Version/s 6.3.1 [ 25801 ]
            gdorman Gregory Dorman (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 6.4.1 [ 26046 ]
            Fix Version/s 6.3.2 [ 27303 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Priority Critical [ 2 ] Blocker [ 1 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Rank Ranked higher
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 22.08 [ 26904 ]
            Fix Version/s 6.4.1 [ 26046 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 22.08.1 [ 28206 ]
            Fix Version/s 22.08 [ 26904 ]
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Fix Version/s 22.08 [ 26904 ]
            Fix Version/s 22.08.1 [ 28206 ]
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Assignee Alexey Antipovsky [ JIRAUSER47594 ] Roman [ drrtuy ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            toddstoffel Todd Stoffel (Inactive) made changes -
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Summary S3 Journal Sync (aka "internal metadata sync") Remove NFS/EFS/ FilestoreS3 prereq .
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Summary Remove NFS/EFS/ FilestoreS3 prereq . Remove NFS/EFS/ Filestore prereq for S3 storage
            alexey.vorovich alexey vorovich (Inactive) made changes -
            toddstoffel Todd Stoffel (Inactive) made changes -
            Fix Version/s 23.02 [ 28209 ]
            Fix Version/s 22.08 [ 26904 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2021-18 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672 ]
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Fix Version/s 23.08 [ 28540 ]
            Fix Version/s 23.02 [ 28209 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2022-24 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728 ]
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Assignee Roman [ drrtuy ] Denis Khalikov [ JIRAUSER48434 ]
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Status Open [ 1 ] In Progress [ 3 ]
            alexey.vorovich alexey vorovich (Inactive) made changes -
            Labels HA rm_stability
            denis0x0D Denis Khalikov (Inactive) made changes -
            denis0x0D Denis Khalikov (Inactive) made changes -
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-9 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 733 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-9 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 733 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734 ]
            toddstoffel Todd Stoffel (Inactive) made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737 ]
            AirFocus AirFocus made changes -
            Summary Remove NFS/EFS/ Filestore prereq for S3 storage Remove NFS/EFS/ Filestore prereq for S3 storage
            julien.fritsch Julien Fritsch made changes -
            Status In Progress [ 3 ] Stalled [ 10000 ]
            leonid.fedorov Leonid Fedorov made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-13 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748 ]
            julien.fritsch Julien Fritsch made changes -
            Rank Ranked higher
            julien.fritsch Julien Fritsch made changes -
            Assignee Denis Khalikov [ JIRAUSER48434 ] Max Mether [ maxmether ]
            julien.fritsch Julien Fritsch made changes -
            Reporter Todd Stoffel [ toddstoffel ] Max Mether [ maxmether ]
            julien.fritsch Julien Fritsch made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2023-13 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 755 ]
            julien.fritsch Julien Fritsch made changes -
            Labels rm_stability rm_stability triage
            julien.fritsch Julien Fritsch made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-1 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 755 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748 ]
            julien.fritsch Julien Fritsch made changes -
            Assignee Max Mether [ maxmether ]
            julien.fritsch Julien Fritsch made changes -
            Priority Blocker [ 1 ] Critical [ 2 ]
            drrtuy Roman made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            The subject is touched upon (without proposing a resolution) in https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit#heading=h.2bu0ywfefwgs

            Previous discussions centered around run-time synchronization of this object using plain network (like we do for Extent map). Another idea was offered recently (see below).

            An adjacent part of the effort has to be "reconstruction from data" in case of crash.

            ++++++++++++++++++++++++++++++++++ proposal from Todd +++++++++++++++++++++++++++++++++++++++++++++++++
            Maybe we should consider storing the metadata in a system database table. The current system uses files that are simple JSON and we already have JSON functionality within the server that we can take advantage of this. For example:

            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            The subject is touched upon (without proposing a resolution) in https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit#heading=h.2bu0ywfefwgs

            Previous discussions centered around run-time synchronization of this object using plain network (like we do for Extent map). Another idea was offered recently (see below).

            An adjacent part of the effort has to be "reconstruction from data" in case of crash.


            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            drrtuy Roman made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional third party hardware (NFS) or software (GlusterFS) for HA.

            The subject is touched upon (without proposing a resolution) in https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit#heading=h.2bu0ywfefwgs

            Previous discussions centered around run-time synchronization of this object using plain network (like we do for Extent map). Another idea was offered recently (see below).

            An adjacent part of the effort has to be "reconstruction from data" in case of crash.


            {code:java}
            CREATE TABLE `columnstore_info`.`columnstore_meta` (
              `id` int(11) NOT NULL AUTO_INCREMENT,
              `node` varchar(6) NOT NULL DEFAULT '',
              `path` varchar(128) NOT NULL DEFAULT '',
              `name` varchar(128) NOT NULL DEFAULT '',
              `metadata` longtext NOT NULL,
              PRIMARY KEY (`id`)
            ) ENGINE=Aria AUTO_INCREMENT=3 DEFAULT CHARSET=utf8mb4 PAGE_CHECKSUM=1;
            {code}

            Sample data here:

            {code:java}
            MariaDB [(none)]> select * from columnstore_info.columnstore_meta\G
            *************************** 1. row ***************************
                  id: 1
                node: data1
                path: systemFiles/dbrm
                name: tablelocks.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "4",
                        "key": "c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks"
                    }
                ]
            }
            *************************** 2. row ***************************
                  id: 2
                node: data1
                path: 000.dir/000.dir/003.dir/233.dir/000.dir
                name: FILE000.cdf.meta
            metadata: {
                "version": "1",
                "revision": "1",
                "objects":
                [
                    {
                        "offset": "0",
                        "length": "2097152",
                        "key": "d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf"
                    }
                ]
            }
            2 rows in set (0.000 sec)
            {code}

            [MariaDB JSON functions|https://mariadb.com/kb/en/json-functions/] handle all the CRUD functionality that we would need.

            {code:java}
            MariaDB [(none)]> SELECT node, path, name, json_value(metadata,'$.objects[0].key') as `key` from columnstore_info.columnstore_meta;
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | node | path | name | key |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            | data1 | systemFiles/dbrm | tablelocks.meta | c22873a7-77ef-4f95-b29e-de0a3a06145b_0_4_data1~systemFiles~dbrm~tablelocks |
            | data1 | 000.dir/000.dir/003.dir/233.dir/000.dir | FILE000.cdf.meta | d1250ef6-c4ae-4efb-b1a3-784c98f1f230_0_2097152_data1~000.dir~000.dir~003.dir~233.dir~000.dir~FILE000.cdf |
            +-------+-----------------------------------------+------------------+----------------------------------------------------------------------------------------------------------+
            2 rows in set (0.000 sec)
            {code}

            Instead of writing to files on disk, we should be sticking this right into the database. When written on the primary, this could be synced to the replicas via normal binlog traffic. (Same as our DDL)
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional shared storage: NFS or GlusterFS.
            There are three purposes for the shared storage:
            - store S3 meta, that is a list of object files parts of the original MCS file
            - store journal, that is a text file contains <offset, byte array> pairs that alters contents of S3 object files copies stored locally.
            - dbroot ownernership mechanism

            There is [a write-up|https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit] on S3 implementation in MCS.

            The suggested solution is to leverage an existing distributed KeyValueStorage to fulfill the purposes mentioned earlier. Distributed KVS chosen is FoundationDB.
            drrtuy Roman made changes -
            Description When using storagemanager/S3 with Columnstore we want to avoid requiring additional shared storage: NFS or GlusterFS.
            There are three purposes for the shared storage:
            - store S3 meta, that is a list of object files parts of the original MCS file
            - store journal, that is a text file contains <offset, byte array> pairs that alters contents of S3 object files copies stored locally.
            - dbroot ownernership mechanism

            There is [a write-up|https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit] on S3 implementation in MCS.

            The suggested solution is to leverage an existing distributed KeyValueStorage to fulfill the purposes mentioned earlier. Distributed KVS chosen is FoundationDB.
            When using storagemanager/S3 with Columnstore we want to avoid requiring additional shared storage: NFS or GlusterFS.
            There are three purposes for the shared storage:
            - store S3 meta, that is a list of object files parts of the original MCS file
            - store journal, that is a text file contains <offset, byte array> pairs that alters contents of S3 object files copies stored locally.
            - dbroot ownernership mechanism

            There is [a write-up|https://docs.google.com/document/d/1USO3iXosBIv-jFOQNd820KSXdNPkOnlGDOPcfRkw1rA/edit] on some S3 implementation details in MCS.

            The suggested solution is to leverage an existing distributed KeyValueStorage to fulfill the purposes mentioned earlier. Distributed KVS chosen is FoundationDB.
            julien.fritsch Julien Fritsch made changes -
            Assignee Leonid Fedorov [ JIRAUSER48443 ]
            leonid.fedorov Leonid Fedorov made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-2 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 764 ]
            julien.fritsch Julien Fritsch made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-2 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 764 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-2, 2024-3 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 764, 784 ]
            leonid.fedorov Leonid Fedorov made changes -
            Rank Ranked lower
            allen.herrera Allen Herrera made changes -
            julien.fritsch Julien Fritsch made changes -
            Sprint 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-2, 2025-1 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 764, 784 ] 2021-3, 2021-4, 2021-5, 2021-6, 2021-7, 2021-8, 2021-9, 2021-10, 2021-11, 2021-12, 2021-16, 2021-17, 2022-22, 2022-23, 2023-4, 2023-5, 2023-6, 2023-7, 2023-8, 2023-10, 2023-11, 2023-12, 2024-2 [ 498, 499, 504, 509, 514, 521, 541, 549, 567, 569, 598, 614, 672, 686, 698, 702, 706, 726, 728, 734, 737, 748, 764 ]
            julien.fritsch Julien Fritsch made changes -
            Labels rm_stability triage rm_stability

            People

              leonid.fedorov Leonid Fedorov
              maxmether Max Mether
              Votes:
              3 Vote for this issue
              Watchers:
              15 Start watching this issue

              Dates

                Created:
                Updated:

                Git Integration

                  Error rendering 'com.xiplink.jira.git.jira_git_plugin:git-issue-webpanel'. Please contact your Jira administrators.