Shrinking indexing task memory usage

Hi all, we have a staging environment with a Druid setup that runs a few Kafka indexing tasks. Since it's staging, the data flow is very minimal (about 5MB in one datasource over the past five months). Having each indexing task use 300MB is entirely overkill, but I haven't been able to find the right memory settings to shrink it below that.

There is only one merge buffer and one processing thread, I believe.

druid.processing.buffer.sizeBytes is 25MB, and the JVM heap is capped at 100MB (druid.indexer.runner.javaOpts=-server -Xms100m -Xmx100m).
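
For reference, here's how I believe those settings translate into the task's runtime properties (property names per the Druid docs; numThreads/numMergeBuffers reflect my reading of our config):

druid.processing.numThreads=1
druid.processing.numMergeBuffers=1
druid.processing.buffer.sizeBytes=25000000
druid.indexer.runner.javaOpts=-server -Xms100m -Xmx100m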

Any tips here? I feel like I’m missing something in the config.

Thanks!

If you have one merge buffer, one processing thread, and a processing buffer size of 25MB, direct buffers would use ~50-75MB (Druid sizes direct memory at roughly sizeBytes × (numThreads + numMergeBuffers + 1), i.e. up to 25MB × 3 = 75MB), so with the 100MB initial/max heap, that accounts for ~150-175MB of fixed memory allocations before metaspace and other JVM overhead.
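
Given that math, the main lever is the buffer size plus an explicit direct memory cap. A sketch of the direction I'd try, assuming you set peon properties from the middleManager via the druid.indexer.fork.property. prefix; the exact sizes are guesses to validate against your (tiny) workload, not recommendations:

# 10MB processing buffer: direct memory need ≈ 10MB × (1 + 1 + 1) = 30MB
druid.indexer.fork.property.druid.processing.numThreads=1
druid.indexer.fork.property.druid.processing.numMergeBuffers=1
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=10000000

# Cap heap and direct memory; MaxDirectMemorySize must stay above the ~30MB need.
# The heap may need to be larger depending on maxRowsInMemory for your tasks.
druid.indexer.runner.javaOpts=-server -Xms64m -Xmx64m -XX:MaxDirectMemorySize=40m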

How big is the Java metaspace (PermGen if you're running a Druid version that uses Java 7) for your indexing task process?
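
If you're not sure, one way to check (assuming JDK tools are available on the box; <peon-pid> stands in for the task's process ID):

# MC/MU columns show metaspace capacity/used in KB
jstat -gc <peon-pid>

# If metaspace is eating the difference, cap it in the task opts, e.g.:
druid.indexer.runner.javaOpts=-server -Xms100m -Xmx100m -XX:MaxMetaspaceSize=64m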