[MCOL-1056] Internal error: InetStreamSocket::readToMagic: Remote is closed Created: 2017-11-29 Updated: 2019-07-10 Resolved: 2019-07-10 |
|
| Status: | Closed |
| Project: | MariaDB ColumnStore |
| Component/s: | ? |
| Affects Version/s: | 1.1.2 |
| Fix Version/s: | Icebox |
| Type: | Bug | Priority: | Major |
| Reporter: | hiller1 | Assignee: | Unassigned |
| Resolution: | Incomplete | Votes: | 0 |
| Labels: | None | ||
| Environment: |
MariaDB-log Columnstore 1.1.2-1 1um,2pm |
| Attachments: |
|
| Description |
|
MariaDB [(none)]> SELECT w.withdraw_apply_no, w.state, ir.* FROM holmes_analysis.risk_assess_flow_index_result ir LEFT JOIN nirvana.withdraw_apply w on w.withdraw_risk_assess_apply_id = ir.risk_asess_flow_state_id where ir.channel_code = 'qianzhan' and ir.risk_assess_flow_id = 103 and ir.`key1` REGEXP 'TSHATFR001|TSHATFR002|TSHATFR003|TSHATFR004|TSHATFR005' and ir.create_time >= '2017-11-03';
-------------------
Selecting `ir.*` crashes the query; selecting `ir.id` alone works fine. |
| Comments |
| Comment by David Thompson (Inactive) [ 2017-11-29 ] |
|
If I were to guess, adding `ir.*` produces an intermediate result set that is too large, causing an out-of-memory error. You should be able to determine the cause of the crash from the logs in /var/log/mariadb/columnstore, or send us a columnstoreSupportReport. |
| Comment by hiller1 [ 2017-11-29 ] |
|
thanks!!! |
| Comment by Muhammad Abbas [ 2018-04-18 ] |
|
The same error is appearing on my end. From what I have seen, all the data is being pulled into RAM. I have 64 GB of RAM on my VM; watching `free -h` while the query runs, I can see free memory drop from 62 GB to about 200 MB, and then it crashes. Can you please provide a way to spill results to disk instead of holding everything in RAM? I have already made the following changes:
/usr/local/mariadb/columnstore/bin/configxml.sh setconfig SystemConfig SwapAction none
/usr/local/mariadb/columnstore/bin/configxml.sh setconfig SystemConfig AllowDiskBasedJoin Y
set global infinidb_um_mem_limit=2500; |
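For reference, the changes quoted in this comment can be sketched as a single script. This is only a sketch based on the commands reported in the issue: the configuration section and parameter names are reproduced as written here and may differ between ColumnStore versions (check your Columnstore.xml), and the restart step is an assumption, since configuration changes generally do not take effect on a running system.

```shell
#!/bin/sh
# Sketch of the workaround quoted in this comment (section/parameter names
# as reported by the user; verify against your Columnstore.xml).
CS_BIN=/usr/local/mariadb/columnstore/bin

# Disable the swap-triggered kill action and allow joins to spill to disk
# instead of failing when UM memory is exhausted.
"$CS_BIN/configxml.sh" setconfig SystemConfig SwapAction none
"$CS_BIN/configxml.sh" setconfig SystemConfig AllowDiskBasedJoin Y

# Cap per-query UM memory (value is in MB) from the MariaDB client.
mysql -e "SET GLOBAL infinidb_um_mem_limit = 2500;"

# Assumption: a ColumnStore restart is needed for the config change to apply.
"$CS_BIN/mcsadmin" restartSystem y
```

Note that `AllowDiskBasedJoin` only lets hash joins spill; it does not make every operator disk-backed, so very wide projections such as `ir.*` can still exhaust memory.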
| Comment by Andrew Hutchings (Inactive) [ 2019-07-10 ] |
|
This has probably been fixed in the intervening releases; closing as incomplete. |