[MCOL-803] Internal error: IDB-2003: Aggregation/Distinct memory limit is exceeded. Created: 2017-07-05 Updated: 2017-11-27 Resolved: 2017-11-27 |
|
| Status: | Closed |
| Project: | MariaDB ColumnStore |
| Component/s: | MariaDB Server |
| Affects Version/s: | 1.0.7 |
| Fix Version/s: | Icebox |
| Type: | Bug | Priority: | Major |
| Reporter: | Emannuel Roque | Assignee: | Unassigned |
| Resolution: | Duplicate | Votes: | 0 |
| Labels: | None | ||
| Environment: |
AWS, Ubuntu Server 16.04.2 LTS |
||
| Issue Links: |
|
||||||||
| Description |
|
I'm using Pentaho Mondrian Saiku (an OLAP solution) and I cannot get results with a multi-dimension set. I get the error "Internal error: IDB-2003: Aggregation/Distinct memory limit is exceeded." I know that disk-based aggregation is not supported yet, so as a workaround I tried to increase the memory and use swap (swap is now about 100 GB). I can see swap usage increasing, but I still get that message. Is there any other variable that should be edited? Thanks |
| Comments |
| Comment by David Hill (Inactive) [ 2017-07-06 ] |
|
The Columnstore.xml setting for the Aggregation/Distinct memory limit is TotalUmMemory. But I'm curious why you are using up all the local memory and getting into swap in the first place. If this is a single-node system, you can increase TotalUmMemory, but you would need to decrease NumBlocksPct so that the combination of the two is not over 75%. On a single server, you can set or increase TotalUmMemory to 75%. But as stated, I'm curious why you are using 100% of the memory to begin with; you really don't want to be hitting swap space. |
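For reference, these parameters can be read and changed with ColumnStore's configxml.sh helper. The install path and the section names used below (HashJoin for TotalUmMemory, DBBC for NumBlocksPct) are assumptions based on a default 1.0.x install; verify them against your own Columnstore.xml before applying. A minimal sketch:

```shell
# Sketch only: path and section names assumed for a default
# ColumnStore 1.0.x install -- confirm against your Columnstore.xml.
CSBIN=/usr/local/mariadb/columnstore/bin

# Read the current values (TotalUmMemory assumed under HashJoin,
# NumBlocksPct assumed under DBBC).
$CSBIN/configxml.sh getconfig HashJoin TotalUmMemory
$CSBIN/configxml.sh getconfig DBBC NumBlocksPct

# Raise the aggregation/distinct limit and shrink the block cache so
# the two together stay under ~75% of RAM, then restart the system
# for the change to take effect.
$CSBIN/configxml.sh setconfig HashJoin TotalUmMemory 50%
$CSBIN/configxml.sh setconfig DBBC NumBlocksPct 25
mcsadmin restartSystem y
```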
| Comment by Emannuel Roque [ 2017-07-06 ] |
|
I'm using swap because this specific aggregation did not fit into memory (even after increasing TotalUmMemory). This query does not execute frequently, so it does not justify upgrading a server that is idle most of the time. |
| Comment by David Thompson (Inactive) [ 2017-07-12 ] |
|
If the aggregation is being executed at the PM level, which it should be normally, then you may need to decrease NumBlocksPct to reduce the size of the block cache and free up more memory for the hash buckets used by the GROUP BY. Also look into disabling the auto process restart: |
| Comment by David Thompson (Inactive) [ 2017-11-27 ] |
|
Closing as a duplicate of the related enhancement to support disk-based joins, which is the only solution if your aggregate buckets exceed available memory. |