Details
- Type: Bug
- Status: Closed
- Priority: Critical
- Resolution: Fixed
- Affects Version/s: 10.4.12
- Fix Version/s: None
- Environment: Ubuntu Server 18.04 x64
Description
I have two MySQL servers, 'G0' and 'G1', each holding a one-row test table.
G0:
mysql> select * from test.t1;
+----+--------+
| id | node   |
+----+--------+
|  1 | DB-G0  |
+----+--------+
1 row in set (0.00 sec)
G1:
mysql> select * from test.t1;
+----+--------+
| id | node   |
+----+--------+
|  2 | DB-G1  |
+----+--------+
1 row in set (0.00 sec)
I then use the Spider engine shipped with MariaDB 10.4 (mysqld Ver 10.4.12-MariaDB-1:10.4.12+maria~bionic-log for debian-linux-gnu on x86_64 (mariadb.org binary distribution)) to combine them like this:
CREATE TABLE t1 (
    id BIGINT PRIMARY KEY,
    node TEXT
) ENGINE=Spider
COMMENT='wrapper "mysql"'
PARTITION BY HASH (id) (
    PARTITION par0 COMMENT = 'srv "G0", tbl "t1"',
    PARTITION par1 COMMENT = 'srv "G1", tbl "t1"'
);
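For context: the partition comments above reference the remote servers by name (srv "G0", srv "G1"), which assumes both were registered beforehand with CREATE SERVER. A sketch of what those definitions might look like (host names, credentials, and port are placeholders, not taken from this report):

```sql
-- Hypothetical server definitions assumed by the Spider table above;
-- HOST/USER/PASSWORD values are placeholders.
CREATE SERVER G0 FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'g0.example.com', DATABASE 'test', USER 'spider', PASSWORD '...', PORT 3306);
CREATE SERVER G1 FOREIGN DATA WRAPPER mysql
OPTIONS (HOST 'g1.example.com', DATABASE 'test', USER 'spider', PASSWORD '...', PORT 3306);
```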
It works well for a simple query like this:
MariaDB [(none)]> select * from test.t1;
+-----+--------+
| id  | node   |
+-----+--------+
|   1 | DB-G0  |
|   2 | DB-G1  |
+-----+--------+
2 rows in set (0.013 sec)
But the results are doubled when I add this condition:
MariaDB [(none)]> select * from test.t1 where id != 0;
+-----+--------+
| id  | node   |
+-----+--------+
|   1 | DB-G0  |
|   2 | DB-G1  |
|   1 | DB-G0  |
|   2 | DB-G1  |
+-----+--------+
4 rows in set (0.017 sec)
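One way to narrow down where the extra rows come from (a diagnostic suggestion, not something from this report): since spider_general_log is enabled in the settings below, the queries Spider sends to the data nodes are written to the server's general query log, so you can check whether each partition is being queried twice for the id != 0 case.

```sql
-- Diagnostic sketch on the Spider node: route the general log to a table,
-- run the failing query, then inspect what Spider sent to the backends.
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = ON;
SELECT * FROM test.t1 WHERE id != 0;
SELECT event_time, argument FROM mysql.general_log
WHERE argument LIKE '%t1%' ORDER BY event_time;
```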
Here are my Spider-related settings:
#
# * Spider
#
spider_bulk_update_mode = 2
spider_bulk_update_size = 512K
spider_bulk_size = 512K
spider_conn_wait_timeout = 5
spider_connect_timeout = 3
spider_direct_dup_insert = 1
spider_direct_order_limit = 1
spider_multi_split_read = 1
spider_net_read_timeout = 5
spider_net_write_timeout = 5
spider_quick_mode = 3
spider_quick_page_size = 1000
spider_remote_trx_isolation = 1
spider_support_xa = 0

spider_remote_time_zone = +00:00
spider_remote_access_charset = utf8mb4
spider_remote_autocommit = 1
spider_remote_default_database = 0

spider_general_log = 1
spider_internal_sql_log_off = 0
spider_log_result_errors = 4
I commented out all of the above settings and tried again; the results were the same.
I also changed PARTITION BY HASH (id) to PARTITION BY KEY (id), and the result is exactly the same: the duplicated rows are still there.