We've recently run into a couple of cases where the disk join was triggered, but the data was so skewed (so many rows sharing a single value in the join column) that a partition overflowed during the loading stage. We currently reject the query when that happens, but it should be easy to handle as a special case.
For reference, the disk join algorithm is called 'GRACE'; you can google 'grace join algorithm' to get more understanding of what it's doing.
To find the place in the code where we detect the data-distribution problem, grep for 'ERR_DBJ_DATA_DISTRIBUTION' in joinpartition.cpp.
My initial thoughts:
1) We can let partitions with a very small range of join values grow beyond the normal per-partition limit, up to the total disk usage limit.
2) Set a flag on such partitions to indicate they shouldn't be loaded into a hash table when the join phase starts.
3) Instead, it should be possible to stream those rows from disk whenever a row in the 'other' table matches, effectively doing a nested-loop join instead of a hash join for those partitions.
It'll take some understanding of the partitioning structures & behavior; whoever gets this one, feel free to ask me.