[MDEV-16247] loading a large (greater than 100GB) dumpfile into spider fails with timeouts Created: 2018-05-22  Updated: 2023-09-20

Status: Open
Project: MariaDB Server
Component/s: Storage Engine - Spider
Affects Version/s: 10.3
Fix Version/s: 10.4

Type: Bug Priority: Major
Reporter: Eric Herman Assignee: Yuchen Pei
Resolution: Unresolved Votes: 0
Labels: None
Environment:

CentOS release 6.9, 2.6.32-696.16.1.el6.x86_64
MariaDB 10.3 branch
spider node with 4 data nodes



 Description   

When loading large dumpfiles into the spider node, I see errors like:

2018-05-16 16:39:33 139510542624512 [Warning] Aborted connection 69488566 to db: 'sptest' user: 'spider_test' host: 'spider-head.example.com' (Got timeout reading communication packets)

I am unsure what timeouts we need to adjust to work around this issue, and I do not see documentation to guide me.
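For reference, these are the variables I would guess are most relevant. This is an untested sketch on my part, not something I have verified against this workload, and the values are placeholders; the spider_* variables are Spider's own network timeouts and may also need raising on the data nodes.

    # Untested sketch: raise the usual network timeouts on the Spider node
    # (and possibly the data nodes) before running the load.
    mysql -u root -p -e "
      SET GLOBAL net_read_timeout         = 3600;
      SET GLOBAL net_write_timeout        = 3600;
      SET GLOBAL wait_timeout             = 28800;
      SET GLOBAL spider_net_read_timeout  = 3600;  -- Spider <-> data node reads
      SET GLOBAL spider_net_write_timeout = 3600;  -- Spider <-> data node writes
    "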

While I cannot provide a test case with my real data, I hope this is reproducible with a large table of pseudo-random data.



 Comments   
Comment by Eric Herman [ 2018-05-23 ]

We have worked around this by dumping the data with --where so that we create a separate dump file per data node, and loading each file directly on the corresponding data node, roughly as sketched below.
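The table name, column name, host names, and the modulo expression below are illustrative only; the real --where condition has to match the sharding rule in the Spider partition definition.

    # Illustrative shape of the work-around: one dump per data node,
    # selected with the same sharding condition Spider uses, then loaded
    # directly on that node.
    for i in 0 1 2 3; do
      mysqldump --where="crc32(shard_key) % 4 = $i" sptest big_table > node$i.sql
      mysql -h data-node-$i.example.com sptest < node$i.sql
    done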

This work-around may even have some advantages for speed; however, we would expect this to "just work" as it does with other storage engines.
