One of the causes of bad query plans is inadequate cost estimation of individual operations. The cost of reading a row in one engine might be much higher than in another, but the optimizer cannot know that. It also uses hard-coded constants, assuming, for example, that evaluating a WHERE clause is 5 times cheaper than reading a row from a table (it used to be 10; now it's 5).
Obviously, some kind of calibration procedure is needed to make these cost estimates reasonably accurate. That is not easy, because the estimates depend on the actual hardware where MariaDB runs (the cost of a row read differs between an HDD and an SSD) and also, somewhat, on the application (the optimizer model isn't perfect, and cost factors vary between query types, so it is better to calibrate on the queries that the user application will actually run).
A simple and low-maintenance solution would be to use self-tuning cost coefficients: the server measures actual timings and the coefficients adjust automatically to the configuration where MariaDB is run.
Assorted thoughts:
create a tuning coefficient for every low-level cost generator in the optimizer, that is, for every function or handler method that generates a cost value, and for every hard-coded constant or macro, but not for functions that derive cost values from other cost values. For a virtual method, create one coefficient per implementation, that is, one coefficient for read_time in every handler class, but not one per set of parameters. For example, different tables in an engine might have different costs for reading a row, but there will still be one coefficient for read_time in that engine, not one per table; the engine is supposed to provide different costs internally if it needs to. The goal of these coefficients is to normalize the cost between different engines.
measure the time that the query took, split it proportionally between tables (according to the number of rows), and save the statistics per coefficient.
collect the statistics locally in the THD and add them to the global statistics on disconnect; this avoids contention on a shared resource.
the optimizer will use the global statistics, not the thread-local ones. It shouldn't matter, as the coefficients will change very slowly.
when splitting the time, use the actual number of rows, not the estimate that the optimizer used.
per coefficient, store a counter (bigint), the sum of times (double), and the sum of squared times (double); see the sketch after this list.
store the results persistently in the mysql database.
make them available via an INFORMATION_SCHEMA table, with two extra columns: the average and the standard deviation.
report these data via the feedback plugin. With them we can adjust the built-in constants or the initial values of these coefficients, and a very large deviation is a sign that an engine estimates the cost incorrectly (e.g. doesn't take the specific table into account, see above).
a user can update the table manually if she wishes. She can even freeze a coefficient by setting the count column to a very large value.
system load anomalies may introduce undesired changes in the coefficients. Is that a problem? Should we implement countermeasures?
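A minimal sketch of the per-coefficient statistics described in the list above, with hypothetical names (an illustration, not actual MariaDB code): each coefficient keeps a counter, a sum of timings and a sum of squared timings; a THD-local copy is merged into the global one on disconnect, and the average and standard deviation shown in the I_S table are derived from the three stored values.

```cpp
// Sketch of the per-coefficient statistics: counter, sum, sum of squares,
// collected per THD and merged into a global copy on disconnect.
// All names are hypothetical.
#include <cmath>
#include <cstdint>
#include <mutex>

struct CoefficientStats {
  uint64_t count = 0;    // number of measurements (bigint column)
  double   sum = 0.0;    // sum of measured times (double column)
  double   sum_sq = 0.0; // sum of squared times (double column)

  void add(double seconds) {
    ++count;
    sum += seconds;
    sum_sq += seconds * seconds;
  }

  // What the optimizer would use as the tuned coefficient.
  // Setting "count" to a huge value effectively freezes this average.
  double average() const { return count ? sum / count : 0.0; }

  // Shown in the I_S table; a very large value hints that the engine's
  // own cost function ignores something important (e.g. per-table costs).
  double stddev() const {
    if (count < 2) return 0.0;
    double avg = sum / count;
    double var = sum_sq / count - avg * avg;
    return var > 0.0 ? std::sqrt(var) : 0.0;
  }

  // Merge THD-local statistics into the global ones on disconnect,
  // so running queries never contend on the shared structure.
  void merge_into(CoefficientStats &global, std::mutex &global_lock) const {
    std::lock_guard<std::mutex> guard(global_lock);
    global.count  += count;
    global.sum    += sum;
    global.sum_sq += sum_sq;
  }
};
```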
Issue Links
is blocked by MDEV-223: Encapsulate calculations of the table access cost and make them consistent and tunable
hi sergei, is there a file where the constants are declared? I could start changing the constants to variables without problem.
roberto spadim
Part of this task is exactly that: finding and identifying these constants. Some of them are implicit and not easy to find.
Another big part is building the framework for collecting and storing time measurements and solving equations against these constants.
Sergei Golubchik
ok, in which files should I start looking? sql_select.cc, sql_select.h, opt_range.cc (I found a constant inside the NOT IN to IN() rewrite; is that the kind of constant I should look for, or not?). Should UPDATE and DELETE be covered too?
Are the constants all named in ALL_CAPS, or can they be mixed case as well?
roberto spadim
I'm afraid it's something that not even I will be able to do; one of our optimizer gurus should.
Sometimes there is no constant at all: it is implicitly assumed to be 1. For example, the optimizer compares table->file->read_time() values for different tables. Here it assumes that the results of ::read_time() for different storage engines are directly comparable. They're not; they should be multiplied by a storage-engine-specific factor. Those places are very difficult to find.
We did have a GSoC 2013 project for this task, but it was assumed that we would provide a list of constants to tune, and the student would only work on time measurements and aggregation.
Sergei Golubchik
I'll be pushing the code to my GitHub repo: https://github.com/igniting/server/tree/selfTuningOptimizer . @serg: I guess you can assign this issue to me.
Anshu Avinash
So, to summarize: there are two approaches to obtaining timing data:
counting operations, timing the query as a whole
actually timing individual operations, P_S-style
In the first approach we only count the number of index lookups (N_i), rows read in a table scan (N_s), etc., and time the query as a whole (T_q). This gives an equation:
T_i * N_i + T_s * N_s + ... = T_q
Collecting the data from many queries, we get a system of linear equations (possibly even over-determined). It can be solved to give the durations of the individual operations (see the sketch after this comment).
In the second approach we directly time individual operations, much like PERFORMANCE_SCHEMA does, preferably reusing P_S measurements rather than duplicating them.
These approaches both have their benefits:
The first one is potentially much cheaper: it does not add timing overhead to every operation (and with constants like TIME_FOR_COMPARE the individual operation is a comparison; timing it might be prohibitively expensive).
Also, it uses the complete query execution time, the only time users care about. If we see that the values of the coefficients don't converge over time, it means that there's an unaccounted-for part in the query time, and we add another coefficient.
Making the second approach use P_S when P_S is disabled is not exactly trivial.
But:
The second needs less memory: for N coefficients it needs, per THD, at most 4N doubles (value, sum, sum of squares, counter). The first one needs to store, in addition to the above, N equations, that is N² values (although for large N this might be a sparse matrix, I don't know).
The second one directly measures only what we use in the optimizer. No hidden parameters, and most coefficients should converge nicely. Still, it's meaningless from the user's point of view.
Sergei Golubchik
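To make the first (counting) approach concrete, here is a minimal, self-contained sketch with hypothetical names and fabricated numbers (not MariaDB code): per query it records how many times each operation ran and the total query time, accumulates the normal equations of the over-determined system T_i * N_i + T_s * N_s + ... = T_q, and solves them by least squares for the per-operation durations.

```cpp
// Counting approach sketch: accumulate A^T A and A^T b over many queries,
// where each row of A holds the operation counts of one query and b holds
// the total query times, then solve (A^T A) x = A^T b for the durations x.
#include <array>
#include <cmath>
#include <cstdio>
#include <utility>

constexpr int N_COEFF = 2;  // e.g. 0 = index lookup, 1 = scanned row

struct NormalEquations {
  double ata[N_COEFF][N_COEFF] = {};  // A^T A, built incrementally
  double atb[N_COEFF] = {};           // A^T b

  void add_query(const std::array<double, N_COEFF> &counts, double total_time) {
    for (int i = 0; i < N_COEFF; i++) {
      for (int j = 0; j < N_COEFF; j++)
        ata[i][j] += counts[i] * counts[j];
      atb[i] += counts[i] * total_time;
    }
  }

  // Solve the small system with naive Gauss-Jordan elimination.
  std::array<double, N_COEFF> solve() const {
    double m[N_COEFF][N_COEFF + 1];
    for (int i = 0; i < N_COEFF; i++) {
      for (int j = 0; j < N_COEFF; j++) m[i][j] = ata[i][j];
      m[i][N_COEFF] = atb[i];
    }
    for (int col = 0; col < N_COEFF; col++) {
      int pivot = col;                       // partial pivoting
      for (int r = col + 1; r < N_COEFF; r++)
        if (std::fabs(m[r][col]) > std::fabs(m[pivot][col])) pivot = r;
      for (int j = 0; j <= N_COEFF; j++) std::swap(m[col][j], m[pivot][j]);
      for (int r = 0; r < N_COEFF; r++) {
        if (r == col || m[col][col] == 0.0) continue;
        double f = m[r][col] / m[col][col];
        for (int j = col; j <= N_COEFF; j++) m[r][j] -= f * m[col][j];
      }
    }
    std::array<double, N_COEFF> x{};
    for (int i = 0; i < N_COEFF; i++)
      x[i] = m[i][i] != 0.0 ? m[i][N_COEFF] / m[i][i] : 0.0;
    return x;
  }
};

int main() {
  NormalEquations eq;
  // Fabricated example queries: {index lookups, rows scanned}, total seconds.
  eq.add_query({100, 0}, 0.010);      // index-only query
  eq.add_query({0, 100000}, 0.050);   // full scan
  eq.add_query({50, 50000}, 0.030);   // mixed
  auto t = eq.solve();
  std::printf("per-lookup: %g s, per-scanned-row: %g s\n", t[0], t[1]);
  return 0;
}
```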
More comments:
first approach
the measured time includes time spent on locks, in the optimizer, etc.
measure only query execution time, after the optimizer; this will also cut off table locks
but row locks happen during execution, and they will distort the statistics
they will show up as significantly different and can be detected statistically and ignored
second approach
P_S is disabled in MariaDB by default
the hypothesis: most of the P_S overhead comes from housekeeping, storing the data, particularly in the shared data structures, and calculating aggregations
that is, simply timing the waits is cheap
so even with P_S disabled we can enable the timing instrumentation
third - combined method
measure all waits directly, using P_S instrumentation
TIME_FOR_COMPARE and other internal constants - indirectly
this gives exact data for longer operations
but doesn't introduce timing overhead for short operations
and it reduces the number of coefficients that we put into the system of equations, thus reducing the memory footprint
Also, one index read might have a vastly different cost even for the same engine, the same table, and the same index, for example depending on whether the data is cached or not. It is the engine's job to recognize this situation and adjust read_time() accordingly. The problem is that over time we'll see different timings for the same number of index reads, so the corresponding factor won't converge. For it to converge we need to take the engine's read_time() value into account. That is, we'll have not
total_index_read_time = factor * Nrows
but
total_index_read_time = factor * Nrows * read_time() / Nrows_estimated
(where Nrows is how many rows were actually read from the index, while Nrows_estimated is how many rows the optimizer expected to read, i.e. the number of rows that read_time() got as an argument; a small sketch of this normalization follows this comment)
Sergei Golubchik
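A small sketch of the normalization above, with hypothetical names (not MariaDB code): instead of regressing the measured time against the raw row count, the engine's own estimate is folded in, so the tuned factor only has to absorb the engine-independent scale and can converge even when the per-read cost varies.

```cpp
// Update the per-engine factor for index reads using the engine's own
// cost estimate, following
//   total_index_read_time = factor * Nrows * read_time() / Nrows_estimated
#include <cstdint>

struct IndexReadFactor {
  double   sum = 0.0;   // sum of observed per-unit factors
  uint64_t count = 0;   // number of observations

  // measured_time:  wall-clock time spent reading the index in this query
  // rows_actual:    rows actually read
  // read_time_est:  what handler::read_time() returned to the optimizer
  // rows_estimated: the row count read_time() was asked about
  void observe(double measured_time, double rows_actual,
               double read_time_est, double rows_estimated) {
    if (rows_actual <= 0 || rows_estimated <= 0 || read_time_est <= 0)
      return;  // nothing usable to learn from
    // Scale the engine's estimate to the rows we actually read, then see
    // how far reality was from it; that ratio is one sample of the factor.
    double scaled_estimate = read_time_est * rows_actual / rows_estimated;
    sum += measured_time / scaled_estimate;
    ++count;
  }

  // factor == 1.0 means the engine's read_time() matches reality exactly.
  double factor() const { return count ? sum / count : 1.0; }
};
```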
I can see this going horribly wrong both when system load changes and due to hardware anomalies. For example, when a hard drive hits a bad sector it may do an I/O that takes up to 8 seconds by default, which may throw off all of your measurements for that query. If that query ends up updating the global statistics, then all future queries may be dramatically punished because of a single isolated problem.
CPU overload can also distort the timings. For example, if you have a burst of threads running, certain parts of a query may take longer as they fight for locks or CPU time.
It would be nice to run a sample workload to gather these statistics and then be able to freeze them on an instance. This would take care of the cases where the optimizer makes assumptions about disk seek time that were true on hard drives but aren't on flash hardware, without having to worry about hosts all of a sudden wildly changing their query plans due to hardware blips.
Eric Bergen
A specific value (say, "read_time") will be measured many times and the on-disk table will store the sum of all measurements and the number of them. The optimizer will use the average (sum/count). This means:
hopefully, short spikes won't disrupt the average too much
one can effectively "freeze" the average by updating the table and putting a huge "count" value.
Sergei Golubchik
Eric, yes, that's a problem, but... it is also a nice solution to many problems.
The statistics are the main part here: if the stored mean (or any other usable variable) becomes too distant from the "current" measurements, we could alert the DBA about disk problems or too many queries per second. Think of it as a useful feature to alert DBAs and developers/engineers about a possible problem =)
As for increasing the robustness of the self-tuning cost-coefficient algorithm, we can develop that with more time and use cases. For the first version, a feature that only measures (makes no changes to the cost coefficients/optimizer) is OK; with time we get the experience to write a good (robust/intelligent) control =]
roberto spadim
I would also give +1 to a global statistics approach rather than gathering fixed constants as in a benchmark.
But I would add that a per-table set of constants would be better, because in any workload some specific tables may more often cause non-deterministic overflow of the various caches:
Data Cache
Index Cache
FS Cache
Controller Cache
CPU Caches
Non-determinism is also driven by the workload itself, for example:
writing or reading within the most recent time range,
number of secondary indexes touched by a write,
storage engine,
randomness of the data access pattern,
lock time,
size of the columns and rows being fetched.
Such metrics are mostly per-table, and they can help the query planner better estimate the join cost at each depth.
VAROQUI Stephane
Hi All,
I went to the MariaDB Roadshow in London yesterday, where I suggested:
1). look at the top slow queries in the performance_schema tables
2). record the query execution plan
3). try different indexes (especially compound indexes)
4). if the new index works better, use it for the same query in the future.
Hope this makes sense
james wang
1). look at the top slow queries in the performance_schema tables
Not only slow queries but statistics about these queries; they should be grouped by a fingerprint hash, like Percona does: in "select * from table where column=? and column1>=? and column2>=?" the constants are changed to "?" and the operations (and/or/>/</=/etc.) are normalized (a fingerprinting sketch follows this comment).
3). try different indexes (especially compound indexes)
Grid-searching indexes is OK to do; it consumes resources but finds many solutions. It's important to record not only the execution time but also resources and metadata, to build a good metric function. For example: is an index on (int,int) "better" than an index on (double,double,string,double) that gives the same time, or not? A performance metric should be created to evaluate the best search path.
4). if the new index works better, use it for the same query in the future.
At some point the problem is how to recognize the "same query" (as in point 1) in the parser/optimizer without penalizing global performance.
roberto spadim
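A rough sketch of the query-fingerprinting idea mentioned above (hypothetical code, not Percona's or MariaDB's actual implementation): literals are replaced with '?' so that queries differing only in constants can be grouped and counted together.

```cpp
// Replace string and numeric literals with '?' and lowercase the rest,
// so queries that differ only in constants hash to the same fingerprint.
#include <cctype>
#include <cstdio>
#include <string>

std::string fingerprint(const std::string &query) {
  std::string out;
  for (size_t i = 0; i < query.size(); ) {
    char c = query[i];
    if (c == '\'' || c == '"') {                 // quoted string literal -> ?
      char quote = c;
      ++i;
      while (i < query.size() && query[i] != quote) ++i;
      if (i < query.size()) ++i;                 // skip closing quote
      out += '?';
    } else if (std::isdigit(static_cast<unsigned char>(c)) &&
               (out.empty() ||
                (!std::isalnum(static_cast<unsigned char>(out.back())) &&
                 out.back() != '_'))) {          // standalone number -> ?
      while (i < query.size() &&
             (std::isdigit(static_cast<unsigned char>(query[i])) ||
              query[i] == '.'))
        ++i;
      out += '?';
    } else {                                     // everything else, lowercased
      out += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
      ++i;
    }
  }
  return out;
}

int main() {
  std::printf("%s\n",
      fingerprint("SELECT * FROM t WHERE col=42 AND name='bob'").c_str());
  // prints: select * from t where col=? and name=?
  return 0;
}
```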