Status: Closed
Resolution: Not a Bug
Environment: Debian 8 amd64

```
*** 10.2.21+maria~jessie 0
        500 http://ams2.mirrors.digitalocean.com/mariadb/repo/10.2/debian/ jessie/main amd64 Packages
```
We have several MariaDB galera clusters on Debian 8. On all these clusters, we've noticed some pretty significant performance issues that appear to be related to the index cardinality being incorrect.
I have managed to reproduce this issue. To do so, I have attached `data.sql`, which contains the table schema along with generated dummy data.
Steps are as follows, starting from a cluster of 3 MariaDB 10.2 Galera nodes:
1. Log in to MySQL and `create database cardtest02;`
2. Import the attached `data.sql` file into `cardtest02`: `mysql --defaults-file=/etc/mysql/mysql-root.cnf cardtest02 < ~/data.sql`
3. Verify index cardinality:
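For reference, the cardinality can be checked like this on each node (assuming the table from `data.sql` is named `cardtest_tbl`, as used later in this report):

```sql
-- Compare the reported index cardinality across nodes
USE cardtest02;
SHOW INDEX FROM cardtest_tbl;
```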
As you can see, in this case, on node 1 the cardinality is correct (301). However, if we now check this on another node, for example node 2, we see the following:
Because of this issue, indexes aren't being used properly and performance is very bad, especially on the larger tables.
In addition, I have also noticed the `information_schema.tables` table being incorrect:
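The row-count estimate can be compared the same way on each node; a query along these lines (same table name as above) shows the mismatch:

```sql
-- TABLE_ROWS is an estimate derived from the same table statistics
SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'cardtest02' AND TABLE_NAME = 'cardtest_tbl';
```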
A temporary workaround is running `analyze table cardtest_tbl;` on one of the nodes; the result is synced across all of them automatically after that, but it is of course only temporary. Even if I add more rows later (for example with the same .sql file where I only change the IDs and skip the `drop table if exists`), the cardinality is only ever updated on the node where I added the rows, and not on nodes 2/3.
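The workaround above, as a sketch:

```sql
-- Recompute the index statistics on one node;
-- the updated statistics then show up on the other nodes as well
ANALYZE TABLE cardtest_tbl;
```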
I also tried adding new rows on node 2 instead, after the initial dump. That does increase the cardinality on node 2 by the number of rows inserted there, but it never gets anywhere near the actual row count.
Interesting to note is the following:
1. Import data.sql on node 1, which recreates an empty table and adds rows, bringing cardinality to 301 on node 1 and 0 on nodes 2/3
2. Run `analyze table` on any node, resulting in a synchronisation of the cardinality; all nodes now have 301
3. Running single queries on node 1 and/or node 2 does increase cardinality correctly by 1: after this, cardinality is 303 on all nodes
4. Import data2.sql (see attachment) through node 1, bringing cardinality to 604 on node 1, and back to 301 on node 2
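For completeness, the single queries mentioned above were plain single-row inserts; a hypothetical example (the real column names are defined in the attached data.sql and may differ):

```sql
-- Hypothetical single-row insert; actual columns are those in data.sql
INSERT INTO cardtest_tbl (id, val) VALUES (9999, 'x');
```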
I have also attached mysql_settings.txt which contains the output of `show variables`, which may be helpful.
Does anyone have any idea how to fix or prevent this? Is it a bug, or intended behaviour we're unaware of? Running `analyze table` on all tables is not something we'd like to do, especially as it locks the entire cluster and usually kills it (which is a separate issue entirely; I believe we can work around that by using the wsrep_desync and RSU options).
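For what it's worth, the desync/RSU approach we have in mind would look roughly like this (a sketch using the standard Galera variables `wsrep_desync` and `wsrep_OSU_method`; not yet tested on our clusters):

```sql
-- Take this node out of cluster flow control so the rest isn't blocked
SET GLOBAL wsrep_desync = ON;
-- Run the statement locally only (Rolling Schema Upgrade) instead of
-- replicating it cluster-wide under Total Order Isolation
SET SESSION wsrep_OSU_method = 'RSU';
ANALYZE TABLE cardtest_tbl;
-- Restore the defaults afterwards
SET SESSION wsrep_OSU_method = 'TOI';
SET GLOBAL wsrep_desync = OFF;
```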