Justifying Broker nodes' high memory requirements

The sample ‘production config’ in the Druid docs shows a broker node with an enormous amount of memory.
However, unless you are using the node's ‘caching’ feature, is all that memory really necessary?

If all the broker does is merge results from the historical/realtime nodes, it probably doesn't need more than a few megabytes per query, since you can only show so many data points on a chart.
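As a rough back-of-the-envelope illustration (the numbers here are made up, not measured): a query feeding a chart with about 10,000 data points at roughly 100 bytes per aggregated row works out to 10,000 × 100 B ≈ 1 MB of result data for the broker to hold and merge.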

Am I correct that, if caching is not used on the broker, one can get by with a much smaller broker instance, e.g. 4 GB of RAM?

If not, then what is all that memory used for?

The sample production configs are overkill on broker memory. You can get away with much less; how much you actually need depends on the size of the result sets you have to merge and on the number of concurrent queries you serve.
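
For what it's worth, here is a rough sketch of what a small, cache-less broker might look like. The property names are standard broker settings, but every value below is just an assumption for illustration and should be tuned against your own result sizes and query concurrency:

  # jvm.config (one JVM flag per line)
  -server
  -Xms4g
  -Xmx4g
  -XX:MaxDirectMemorySize=2g

  # runtime.properties
  druid.service=druid/broker
  # HTTP threads serving client queries
  druid.server.http.numThreads=25
  # connections kept per historical/realtime node
  druid.broker.http.numConnections=5
  # brokers don't scan segments, so a couple of merge threads is plenty
  druid.processing.numThreads=2
  # 256 MB per processing buffer
  druid.processing.buffer.sizeBytes=268435456
  # caching disabled on the broker, as in your scenario
  druid.broker.cache.useCache=false
  druid.broker.cache.populateCache=false

Roughly speaking, the heap (-Xmx) has to hold the in-flight result sets being merged, so it scales with result size times concurrency, while direct memory has to cover the processing buffers (on the order of druid.processing.buffer.sizeBytes × (druid.processing.numThreads + 1); check the exact formula in the docs for your version). If you later turn on broker caching with the local cache type, that cache lives on the broker's heap, so you would need to budget extra memory for it on top of this.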