Hi all, we have a staging environment with a Druid setup that runs a few Kafka indexing tasks. As it's staging, the data flow is very minimal (about 5MB in one datasource over the past five months), so having each indexing task use 300MB is entirely overkill. However, I haven't been able to find the right memory settings to shrink it below that.
I believe there is only one merge buffer and one processing thread; druid.processing.buffer.sizeBytes is 25MB, and the JVM heap is capped at 100m (druid.indexer.runner.javaOpts=-server -Xms100m -Xmx100m).
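For completeness, here are the relevant bits of our peon config as I believe they stand, with the direct-memory math as I understand it from the Druid docs (direct memory has to cover sizeBytes * (numThreads + numMergeBuffers + 1)):

```properties
# Peon processing resources -- these drive direct (off-heap) memory use.
druid.processing.numThreads=1
druid.processing.numMergeBuffers=1
druid.processing.buffer.sizeBytes=25000000

# Direct memory needs at least sizeBytes * (numThreads + numMergeBuffers + 1)
# = 25MB * 3 = 75MB, on top of the 100MB heap.
druid.indexer.runner.javaOpts=-server -Xms100m -Xmx100m -XX:MaxDirectMemorySize=75m
```

If that math is right, heap plus direct memory alone accounts for roughly 175MB before any other JVM overhead. We also haven't touched maxRowsInMemory / maxBytesInMemory in the supervisor's tuningConfig, which I gather influence the task's heap footprint too.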
Any tips here? I feel like I’m missing something in the config.