Is there any way to control realtime segment sizes beyond just rolling at smaller increments of time? Our segments are currently 10-250MB on disk, rolling every 15 minutes (query granularity is also 15 minutes). Occasionally, a groupBy with four or five dimensions over a single segment's interval will cause large heap usage or run into memory issues, GC pauses, etc. How does one determine the best segment size to use? Would upgrading help here, given the improvements to groupBy queries?
We are running Druid 0.6.171, with historical nodes on r3.xlarge instances and JVM heaps set to 28GB.
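For reference, the segment-size-related knobs in our realtime spec look roughly like this. This is an abbreviated sketch in the old 0.6-style realtime spec format; the dataSource name and the numeric values are illustrative placeholders, not our exact production settings:

```json
[{
  "schema": {
    "dataSource": "our_datasource",
    "indexGranularity": "fifteen_minute"
  },
  "config": {
    "maxRowsInMemory": 500000,
    "intermediatePersistPeriod": "PT10m"
  },
  "plumber": {
    "type": "realtime",
    "windowPeriod": "PT10m",
    "segmentGranularity": "fifteen_minute"
  }
}]
```

As far as I can tell, segmentGranularity is the only lever that directly bounds segment size, which is why I'm asking whether there is anything beyond shrinking the rollover interval.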