For the first iteration of vector search, we will implement the HNSW algorithm.
The implementation will initially support only Euclidean distance.
Basic plan:
Graph construction will be done according to HNSW paper.
Storage-wise, we'll store the graph as part of a subtable (MDEV-33404).
The table's definition will be something along these lines:
CREATE TABLE i (
  level int unsigned not null,
  src varbinary(255) not null,
  dst varbinary(255) not null,
  index (level, src),
  index (level, dst));
For each link in the graph, there will be a corresponding entry in the table.
src and dst will store handler::position values, quick links to the actual vector blobs in the main table.
The index (level,src) will allow for quick jumping between nodes.
To go deeper in search, one just needs to decrement the level and search using the same "src" value.
If src is found on level n, then it is also found on level n - 1 and so on. Level 0 is the base level with all the nodes.
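To illustrate the descent described above, here is a minimal Python sketch (an editorial illustration, not server code; the graph contents are made up). It models the subtable as a (level, src) -> [dst] mapping, mirroring the index (level, src), and shows that decrementing the level while reusing the same src key always succeeds:

```python
def descend(graph, entry, level_max):
    """Walk from the top layer down to layer 0, reusing the same src key.

    A node present on level n is also present on every level below it,
    so decrementing the level and querying the same src always succeeds.
    """
    src = entry
    for level in range(level_max, -1, -1):
        neighbors = graph.get((level, src), [])
        # A real search would greedily move to the closest neighbor here;
        # this sketch just records that the lookup works on every layer.
        yield level, src, neighbors

# Illustrative graph: node "a" exists on levels 2, 1 and 0.
graph = {
    (2, "a"): ["b"],
    (1, "a"): ["b", "c"],
    (0, "a"): ["b", "c", "d"],   # level 0 holds all the links
}

path = list(descend(graph, "a", 2))
```

The same src value resolves on every layer, and level 0 carries the densest neighbor list.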
Performance considerations:
Storing the vector in the subtable might be required. Looking up the blob value in the base table might be too costly.
Hugo Wen
added a comment (edited) - Hi cvicentiu, serg, I'm doing some research on the DELETE algorithm for the HNSW index; I have summarized my current findings as follows.
I'll try to create a PoC for option 3, but before diving deep into the implementation, I would like to seek early feedback on the feasibility of the options and any potential concerns regarding the preferred option.
In addition, should we create a separate Jira issue for the DELETE/UPDATE task?
HNSW UPDATE/DELETE
Many HNSW implementations do not support updating or deleting vectors. When users need to update or delete vectors, they must recreate the whole index, which introduces very high costs.
The original HNSW paper does not provide any guidance regarding updates or deletions.
pgvector supports UPDATE/DELETE as summarized at the end of this comment.
High-level Options:
Option 1: Mark graph nodes as source-invalid instead of deleting or rebuilding the graph index upon DELETE/UPDATE operations. These invalid nodes can still be used during search or insertion, but they will not be included in the results or added to the neighbor lists of new nodes.
pros: easier maintenance.
cons: index size continues to grow, and query speed and recall may degrade if too many updates/deletes occur.
Option 2: Upon DELETE/UPDATE operations, traverse all nodes in the graph and delete/recreate the related connections.
pros: minimal impact on recall, and the index is always up-to-date.
cons: extremely slow process, as it requires traversing all nodes in the graph and rebuilding them for complete cleanup.
Option 3: Combine Options 1 and 2. [ preferred ]
Mark records as source-invalid and use them for search but exclude them from results. Update the index following Option 2 only when ANALYZE TABLE is called (investigate the possibility of triggering this cleanup during ANALYZE TABLE).
pros: minimizes performance impact, and users can update the index only when needed (during ANALYZE TABLE).
Option 4: Simply do not support index maintenance during DELETE/UPDATE operations. Require a complete index rebuild after UPDATE/DELETE.
Option 5: Mark graph nodes as source-invalid and simply skip all those nodes as if they do not exist during search or insertion.
this workaround will undoubtedly impact the recall.
Implementation if above option 3 selected:
MariaDB does not have an existing way to identify whether an element in the index points to a valid or invalid record. MariaDB immediately updates the index when a record is updated or deleted.
To save the invalid state:
Option 1: Save another list of invalid references. This requires maintaining an additional list and extra logic when searching.
Option 2: Add a column on the secondary table to mark the index records' state (invalid/valid).
Option 2 is preferred as it aligns with the high-level index mechanism by using the same secondary table without introducing too much complexity.
On DELETE
load the secondary table
for each layer from layer_max to 0, mark the corresponding nodes as source_invalid.
On UPDATE
changes of the graph in the secondary table are similar to INSERT + DELETE
On SEARCH/INSERT
use all (invalid and valid) nodes for search.
do not add invalid nodes to the results list
do not count invalid nodes in ef_search or ef_construct
do not add invalid nodes to the neighbor lists of new nodes
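The SEARCH/INSERT rules above can be sketched as a small beam search in Python (an editorial illustration, not MariaDB code; the graph, node ids, and distance function are made up). Invalid nodes are still expanded so the graph stays navigable, but they are excluded from the result list and do not count toward ef:

```python
import heapq

def search_layer(dist, entry, neighbors_of, invalid, ef):
    """Greedy beam search on one layer; `invalid` is the set of deleted nodes."""
    visited = {entry}
    candidates = [(dist(entry), entry)]   # min-heap ordered by distance
    results = []                          # valid nodes only
    while candidates:
        d, node = heapq.heappop(candidates)
        if node not in invalid:
            results.append((d, node))
            if len(results) >= ef:        # invalid nodes don't count toward ef
                break
        for n in neighbors_of(node):      # invalid nodes are still expanded
            if n not in visited:
                visited.add(n)
                heapq.heappush(candidates, (dist(n), n))
    return [n for _, n in sorted(results)[:ef]]
```

For example, with nodes 0..3, node 2 marked invalid, and a query nearest to node 3, the search still reaches node 3 through node 2 but never returns node 2 itself.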
One possible trigger to start cleaning up all invalid nodes from the graph could be ANALYZE TABLE.
Haven't checked much, needs more investigation.
On DELETE in pgvector
Term Explanations:
heap: In PostgreSQL, the term "heap" refers to the main storage structure used by PostgreSQL to store the actual data of a table.
TID: In PostgreSQL, TID stands for "Tuple ID". It is a low-level identifier that uniquely identifies a specific row (tuple) within a table's heap (storage).
In the pgvector HNSW index, each element stores not only the TID of the row but also a copy of the vector value.
When the DELETE query is executed:
The heap TID is marked as invalid so later index search can check if the corresponding row is still valid or not.
But at that moment, nothing has changed in the index. I did not find an index interface called during the DELETE query either.
When a row is deleted but before vacuum happens:
If a search or insert happens, it still uses those to-be-deleted elements in the index for search, but does not add them to the returned results.
It uses the copy of the vector data in the index element to compute the distance and does not read from the heap data.
It uses heaptidsLength to identify whether a record is valid and should be counted in the final results or ef counts. (I haven't figured out what triggers heaptidsLength to become 0 when the DELETE query happens.)
When vacuum executes:
It traverses all index pages and gets a list of to-be-deleted nodes.
Then it traverses all index pages again, and for each node it checks whether its neighbor list contains any to-be-deleted nodes:
if no, skip it;
if yes, empty the neighbor list and rebuild it similarly to a new insert.
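The two-pass vacuum repair described above can be paraphrased as a short Python sketch (a rough illustration of the prose, not pgvector's actual C code; `vacuum_repair` and the node layout are invented names):

```python
def vacuum_repair(nodes, rebuild_neighbors):
    """`nodes`: {node_id: {"deleted": bool, "neighbors": [node_ids]}}.

    `rebuild_neighbors(nid, dead)` stands in for the insert-like rebuild
    of a node's neighbor list once the dead nodes are known.
    """
    # Pass 1: scan all nodes, collect the set of to-be-deleted ones.
    dead = {nid for nid, n in nodes.items() if n["deleted"]}
    # Pass 2: for each surviving node, rebuild its list only if it
    # references a dead node; untouched lists are skipped.
    for nid, n in nodes.items():
        if nid in dead:
            continue
        if any(x in dead for x in n["neighbors"]):
            n["neighbors"] = rebuild_neighbors(nid, dead)
    # Finally drop the dead nodes themselves.
    for nid in dead:
        del nodes[nid]
    return nodes
```

The point of the first pass is that the rebuild in the second pass needs the complete set of dead nodes up front, so a single pass would not suffice.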
Sergei Golubchik
added a comment - Thanks, wenhug!
I suggest we do the simple approach that doesn't take much time to implement and is known to work. After that we can improve, using the existing implementation as a baseline.
So, let's start from doing 1, marking deleted rows. This could be done by adding a new column to the table, like vector BLOB. It'll be empty for rows that are present in the table, and if a row is deleted, it'll store the vector that used to be in the table. These deleted nodes should be used normally by the algorithm for searches, except that they cannot be added to the result set.
Hugo Wen
added a comment - Thanks for the suggestion, Sergei. I agree that starting with option 1 of marking deleted rows is a good way to go. It is not a one-way door; we can introduce the cleanup part of Option 3 later on.
I'll start to add a new column for marking and saving the deleted vector. Now, I'm looking into how to load the secondary table during a DELETE operation.
Hugo Wen
added a comment - Hi serg. There's one issue with using the following secondary table and the vec blob column to store deleted values and identify whether the source was deleted.
CREATE TABLE i (
  layer int not null,
  src varbinary(255) not null,        // ref of the source
  neighbors varbinary(1000) not null, // refs of the neighbors
  vec blob default null,              // vector value of the source if source deleted
  index (layer, src))
Currently, when retrieving neighbors during a search or insert operation, we get the references of all neighbors and then obtain their actual vector values by using source->file->ha_rnd_pos to directly read the source record.
The logic needs to change because the source ref may be invalid if the record was deleted. (Correct me if I'm wrong, but I don't think MariaDB knows whether the position is still valid or not.)
The logic would be as follows:
select vec from i where layer=0 and src=neigh_ref, using graph->file->ha_index_read_map in the code, and check if vec is null.
If vec is null, read the data from the primary table using ha_rnd_pos.
If vec is not null, the original data was deleted, and the stored value will be used for calculation.
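The three steps above can be sketched in Python (an illustration only: `index_read` and `rnd_pos` are hypothetical stand-ins for ha_index_read_map on the graph table and ha_rnd_pos on the base table):

```python
def resolve_vector(neigh_ref, index_read, rnd_pos):
    """Return the vector for a neighbor reference.

    index_read(layer, ref) -> vec-or-None, read from the secondary table.
    rnd_pos(ref)           -> vec, read from the base table by position.
    """
    # select vec from i where layer=0 and src=neigh_ref
    vec = index_read(0, neigh_ref)
    if vec is None:
        return rnd_pos(neigh_ref)   # source row still exists: read it directly
    return vec                      # source row was deleted: use the saved copy
```

The secondary-table lookup happens on every neighbor, which is exactly the extra cost the next paragraph worries about.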
This additional query will reduce the performance of normal search operations.
If we have to perform ha_index_read_map anyway, another option is to always save the vector value in the second table and include another column to mark the source state. For example:
CREATE TABLE i (
  layer int not null,
  src varbinary(255) not null,        // ref of the source
  neighbors varbinary(1000) not null, // refs of the neighbors
  vec blob default null,              // vector value of the source
  src_state tinyint default 0,        // 0 if valid, 1 if source deleted
  index (layer, src))
With this approach, during the search, it would only need to access the index.
It also makes it possible to perform some preprocessing, like quantization, during the index build to improve the search performance.
However, since my understanding is that ha_index_read_map could be slower than ha_rnd_pos, this approach might degrade performance compared to the current implementation, even without considering deletions.
I was thinking of trying this approach and testing the performance change. What's your opinion? Is this approach worth a try, or do you have other suggestions?
Sergei Golubchik
added a comment (edited) - It is definitely worth a try. I always thought it would be a useful tradeoff (time vs. space) to try. But first I thought we needed to establish a baseline to compare against. If you think the current code is a good baseline — sure, please, go ahead and try it.
One of the benefits of this structure is that the vector in the index doesn't have to be the same as in the table. It could be preprocessed, e.g. converted to smaller floats or given fewer dimensions.
Hugo Wen
added a comment - Thank you, serg, for the quick feedback. I don't have a great baseline, but at least I have the previous implementation in my pull request that I can compare to. I'll test how it impacts the performance.
The benchmark for bb-11.4-vec-preview, which is the source of cvicentiu's pull request https://github.com/MariaDB/server/pull/3257, is not performing as expected for some reason (the insert is as slow as 2 records per second). So, at the moment, I can't use it as a baseline.
One of the benefits of this structure is that the vector in the index doesn't have to be the same as in the table. It could be preprocessed, e.g. converted to a smaller size floats or have less dimensions.
Exactly. While adding the vector data introduces overhead and could impact performance, it has the potential to improve search speed with the normalized data.
(Not related to the delete algorithm.) I've rewritten the select_neighbours function to match Algorithm 4 from the paper; I can now get very good recall with the benchmark tool.
Sergei Golubchik
added a comment - This was a bit of duplication of work, unfortunately. I've fixed it yesterday and pushed it into the corresponding 11.6 branch.
Hugo Wen
added a comment - > the corresponding 11.6 branch
Is it https://github.com/MariaDB/server/tree/bb-11.6-MDEV-32887-vector?
Besides the logic fix, the differences between my select_neighbours implementation and the updated version in your branch are:
I intentionally did not run EXTEND_CANDIDATES. It does not significantly improve recall but impacts the speed a lot. (This is the key issue in your branch that leads to super slow inserts with the benchmarking tool.)
Another small improvement: pq_discard does not need initialization or data insertion if KEEP_PRUNED_CONNECTIONS is not set, and it does not need to be implemented as a queue since the elements are already sorted before insertion.
BJ Quinn
added a comment - My apologies, I've googled this to death but have not found the answer. Is this feature available in the 11.6 preview release? If not, I saw some comments about developer preview releases by the end of May, but I can't seem to find a link to those. Or is building https://github.com/MariaDB/server/tree/bb-11.6-MDEV-32887-vector from source the correct approach? Thanks!
Sergei Golubchik
added a comment - It is not part of the 11.6 preview. There will be a separate preview with this feature only.
At the moment you can indeed build bb-11.6-MDEV-32887-vector to see what's there. It lacks https://github.com/MariaDB/server/pull/3321 (support for updates and deletes) and MDEV-33413 (the cache exists in my private branch at the moment).
Both missing features already exist in some form; they need to be merged into the bb-11.6-MDEV-32887-vector branch, and then we'll release a preview.
BJ Quinn
added a comment - Got it, thanks! So I built bb-11.6-MDEV-32887-vector, and the build process seemed to go fine. In the log I can see at startup:
2024-07-03 15:45:53 0 [Note] Starting MariaDB 11.6.0-MariaDB source revision 77a016686ec2a2617dd6489a756b1f9f11a78d9f as process 27924
Which seems to be the latest commit on that branch as far as I can tell, so it looks like I've gotten the proper source. But when I run "ALTER TABLE data ADD COLUMN embedding VECTOR(100);", I get "SQL Error (4161): Unknown data type: 'VECTOR'". Is there something else I need to enable to test?
Sergei Golubchik
added a comment - No, nothing. The VECTOR data type is MDEV-33410, and it's open; no work has been done on it yet.
We're going to implement it, of course, but it's not the first priority — it's a convenience feature that helps to avoid mistakes, but an application does not really need it; one can store and search embeddings without a dedicated data type. We're prioritizing features that an application cannot work without. The functions VEC_FromText() and VEC_AsText() are also not a priority.
See the test file mysql-test/main/vector.test — that's how one uses it now, store in blob, insert as binary.
In Python I do it like:
c.execute('INSERT kb (emb) VALUES (?)', array.array('f', resp.data.embedding).tobytes())
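To make the binary format in that insert concrete, here is a small round-trip sketch (illustrative values only): array('f') packs a vector as consecutive 4-byte machine-order floats (little-endian on common platforms), which is what the blob column stores.

```python
import array

# Pack a vector the same way as in the INSERT above, then decode it back.
emb = [0.25, -1.5, 3.0]              # values chosen to be exact in float32
blob = array.array('f', emb).tobytes()
assert len(blob) == 4 * len(emb)     # 4 bytes per dimension

decoded = array.array('f')
decoded.frombytes(blob)
assert list(decoded) == emb          # exact round-trip for these values
```

Values not exactly representable in float32 would round-trip to their nearest float32 approximation rather than compare equal.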
Hugo Wen
added a comment (edited) - Hi serg, I've summarized some findings regarding the scalar quantization using _Float16 that we discussed during our meeting today.
Draft commit to test _Float16 (2-byte float) in HNSW index: https://github.com/HugoWenTD/server/commit/9656b6c0d
Benchmarks indicate that using _Float16 instead of floats results in a 40-60% reduction in insertion speed and a 15-20% reduction in search speed. There is also a minor decrease in recall of less than 1%.
There are two issues with this solution (more research needed):
Converting 4-byte floats to 2-byte floats results in precision loss and a reduced range. Proportional scaling is necessary, but there is no simple method to define a proportion that works for all cases. Scaling must be done during transformation, and the best approach depends on the specific dataset and range of values.
_Float16 range is -65504 ~ 65504
If the original floats and squared distances are all below this value, the direct transform from float to _Float16 will work perfectly, e.g. [1, 2, 222], [0, -1, 0].
However, if the original floats or squared distances are out of this range, then scaling must be done during transformation; otherwise the distances make no sense at all, as they are out of range, e.g. [6789, 1234], [-6789, -1234].
For the mnist-784-euclidean dataset, without scaling, the distances are bigger than FLT16_MAX and recall is 0. If the floats are divided by 1000 during transformation, recall becomes 0.978.
One possible solution could be to allow configuring a "proportion" parameter when users choose scalar quantization, which would enable them to specify the appropriate scaling factor for their specific use case.
Another possible solution might be to define a corresponding data type (half-vector) and let users do the scaling before inserting the data.
_Float16 requires CPU instruction set support; otherwise it will revert to float and not utilize SIMD, leading to performance issues. In the commit I'm using -mf16c, but it looks like it could be improved further.
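The range problem described above can be demonstrated with Python's struct module, whose 'e' format is IEEE-754 binary16 (an editorial illustration of the overflow, not the server's _Float16 code; the 6789 coordinate is the example from the comment above):

```python
import struct

def to_half(x):
    """Round-trip a Python float through IEEE-754 binary16 ('e' format)."""
    return struct.unpack('e', struct.pack('e', x))[0]

# A squared distance overflows half precision even when the coordinate
# itself fits: 6789 < 65504, but 6789**2 is ~46 million >> 65504.
try:
    struct.pack('e', float(6789 ** 2))
    overflowed = False
except OverflowError:
    overflowed = True

# Scaling by 1/1000 before conversion keeps values comfortably in range,
# as in the mnist-784-euclidean experiment described above.
scaled = to_half(6789 / 1000)   # ~6.789, well inside the ±65504 range
```

This is why the scaling has to happen on the raw coordinates before quantization: once the squared distance is computed in half precision, it is already out of range.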
People
Vicențiu Ciorbaru
Sergei Golubchik
7211b4bb-CDN/lu2cib/820016/12ta74/d176f0986478cc64f24226b3d20c140d/_/download/contextbatch/css/com.atlassian.jira.projects.sidebar.init,-_super,-project.issue.navigator,-jira.view.issue/batch.css?jira.create.linked.issue=true","startTime":722.8999998569489,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":722.8999998569489,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":1357.0999999046326,"responseStart":0,"secureConnectionStart":0},{"duration":522.7999999523163,"initiatorType":"script","name":"https://jira.mariadb.org/s/5d5e8fe91fbc506585e83ea3b62ccc4b-CDN/lu2cib/820016/12ta74/d176f0986478cc64f24226b3d20c140d/_/download/contextbatch/js/com.atlassian.jira.projects.sidebar.init,-_super,-project.issue.navigator,-jira.view.issue/batch.js?jira.create.linked.issue=true&locale=en","startTime":723.0999999046326,"connectEnd":723.0999999046326,"connectStart":723.0999999046326,"domainLookupEnd":723.0999999046326,"domainLookupStart":723.0999999046326,"fetchStart":723.0999999046326,"redirectEnd":0,"redirectStart":0,"requestStart":723.0999999046326,"responseEnd":1245.8999998569489,"responseStart":1245.8999998569489,"secureConnectionStart":723.0999999046326},{"duration":759.0999999046326,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2cib/820016/12ta74/1.0/_/download/batch/jira.webresources:bigpipe-js/jira.webresources:bigpipe-js.js","startTime":724.7000000476837,"connectEnd":724.7000000476837,"connectStart":724.7000000476837,"domainLookupEnd":724.7000000476837,"domainLookupStart":724.7000000476837,"fetchStart":724.7000000476837,"redirectEnd":0,"redirectStart":0,"requestStart":724.7000000476837,"responseEnd":1483.7999999523163,"responseStart":1483.7000000476837,"secureConnectionStart":724.7000000476837},{"duration":1479,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2cib/820016/12ta74/1.0/_/download/batch/jira.webresources:b
igpipe-init/jira.webresources:bigpipe-init.js","startTime":730.8999998569489,"connectEnd":730.8999998569489,"connectStart":730.8999998569489,"domainLookupEnd":730.8999998569489,"domainLookupStart":730.8999998569489,"fetchStart":730.8999998569489,"redirectEnd":0,"redirectStart":0,"requestStart":730.8999998569489,"responseEnd":2209.899999856949,"responseStart":2209.899999856949,"secureConnectionStart":730.8999998569489},{"duration":134.79999995231628,"initiatorType":"xmlhttprequest","name":"https://jira.mariadb.org/rest/webResources/1.0/resources","startTime":1369.0999999046326,"connectEnd":1369.0999999046326,"connectStart":1369.0999999046326,"domainLookupEnd":1369.0999999046326,"domainLookupStart":1369.0999999046326,"fetchStart":1369.0999999046326,"redirectEnd":0,"redirectStart":0,"requestStart":1369.0999999046326,"responseEnd":1503.8999998569489,"responseStart":1503.8999998569489,"secureConnectionStart":1369.0999999046326},{"duration":627.3999998569489,"initiatorType":"link","name":"https://jira.mariadb.org/s/d5715adaadd168a9002b108b2b039b50-CDN/lu2cib/820016/12ta74/be4b45e9cec53099498fa61c8b7acba4/_/download/contextbatch/css/jira.project.sidebar,-_super,-project.issue.navigator,-jira.general,-jira.browse.project,-jira.view.issue,-jira.global,-atl.general,-com.atlassian.jira.projects.sidebar.init/batch.css?agile_global_admin_condition=true&jag=true&jira.create.linked.issue=true&slack-enabled=true&whisper-enabled=true","startTime":1630.2000000476837,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":1630.2000000476837,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":2257.5999999046326,"responseStart":0,"secureConnectionStart":0},{"duration":595.4000000953674,"initiatorType":"script","name":"https://jira.mariadb.org/s/d41d8cd98f00b204e9800998ecf8427e-CDN/lu2cib/820016/12ta74/e65b778d185daf5aee24936755b43da6/_/download/contextbatch/js/browser-metrics-plugin.contrib,-_super,-project.issue.navigator,-jira.view.issue,-at
l.general/batch.js?agile_global_admin_condition=true&jag=true&jira.create.linked.issue=true&slack-enabled=true&whisper-enabled=true","startTime":1631.0999999046326,"connectEnd":1631.0999999046326,"connectStart":1631.0999999046326,"domainLookupEnd":1631.0999999046326,"domainLookupStart":1631.0999999046326,"fetchStart":1631.0999999046326,"redirectEnd":0,"redirectStart":0,"requestStart":1631.0999999046326,"responseEnd":2226.5,"responseStart":2226.5,"secureConnectionStart":1631.0999999046326},{"duration":601.5,"initiatorType":"script","name":"https://jira.mariadb.org/s/097ae97cb8fbec7d6ea4bbb1f26955b9-CDN/lu2cib/820016/12ta74/be4b45e9cec53099498fa61c8b7acba4/_/download/contextbatch/js/jira.project.sidebar,-_super,-project.issue.navigator,-jira.general,-jira.browse.project,-jira.view.issue,-jira.global,-atl.general,-com.atlassian.jira.projects.sidebar.init/batch.js?agile_global_admin_condition=true&jag=true&jira.create.linked.issue=true&locale=en&slack-enabled=true&whisper-enabled=true","startTime":1631.5999999046326,"connectEnd":1631.5999999046326,"connectStart":1631.5999999046326,"domainLookupEnd":1631.5999999046326,"domainLookupStart":1631.5999999046326,"fetchStart":1631.5999999046326,"redirectEnd":0,"redirectStart":0,"requestStart":1631.5999999046326,"responseEnd":2233.0999999046326,"responseStart":2233.0999999046326,"secureConnectionStart":1631.5999999046326},{"duration":637.7000000476837,"initiatorType":"script","name":"https://www.google-analytics.com/analytics.js","startTime":1670.8999998569489,"connectEnd":0,"connectStart":0,"domainLookupEnd":0,"domainLookupStart":0,"fetchStart":1670.8999998569489,"redirectEnd":0,"redirectStart":0,"requestStart":0,"responseEnd":2308.5999999046326,"responseStart":0,"secureConnectionStart":0}],"fetchStart":0,"domainLookupStart":0,"domainLookupEnd":0,"connectStart":0,"connectEnd":0,"requestStart":484,"responseStart":715,"responseEnd":730,"domLoading":718,"domInteractive":2314,"domContentLoadedEventStart":2314,"domContentLoadedEvent
End":2380,"domComplete":2802,"loadEventStart":2802,"loadEventEnd":2803,"userAgent":"Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; +claudebot@anthropic.com)","marks":[{"name":"bigPipe.sidebar-id.start","time":2212.2999999523163},{"name":"bigPipe.sidebar-id.end","time":2213.2999999523163},{"name":"bigPipe.activity-panel-pipe-id.start","time":2213.399999856949},{"name":"bigPipe.activity-panel-pipe-id.end","time":2220.899999856949},{"name":"activityTabFullyLoaded","time":2413.899999856949}],"measures":[],"correlationId":"16a4ec70b003d2","effectiveType":"4g","downlink":9.2,"rtt":0,"serverDuration":167,"dbReadsTimeInMs":25,"dbConnsTimeInMs":36,"applicationHash":"9d11dbea5f4be3d4cc21f03a88dd11d8c8687422","experiments":[]}}
Insert and search are now in a functioning state, although some refactoring is needed.
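For reference, the search path mentioned above follows the descent scheme described earlier: start at the top layer, greedily move to the closest neighbor, then decrement the level and repeat with the same node (since a node present on level n is also present on level n - 1). A minimal illustrative sketch of that upper-layer descent, using an in-memory adjacency map and a toy 1-D distance instead of the subtable lookups and Euclidean distance of the actual implementation (all names here are assumptions, not MariaDB code):

```python
def hnsw_descend(entry_point, query, graph, top_level, dist):
    """Greedy descent through the upper HNSW layers, per the HNSW paper.

    graph maps level -> {node: [neighbors]}; returns the node used as the
    entry point for the beam search on level 0.
    """
    curr = entry_point
    for level in range(top_level, 0, -1):
        improved = True
        while improved:
            improved = False
            for neighbor in graph[level].get(curr, []):
                # Move to any neighbor strictly closer to the query.
                if dist(neighbor, query) < dist(curr, query):
                    curr = neighbor
                    improved = True
    return curr
```

On level 0 this entry point would then seed the ef-bounded best-first search; in the subtable design, each `graph[level][curr]` lookup corresponds to an index scan on (level, src).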