Understanding Broker Memory Management

Hi,
Could someone explain how memory is managed in the Broker JVM (heap and off-heap)?

The docs don't go into much detail on this for Brokers compared to Historicals.

We are running into Broker crashes under higher query load, and we suspect the memory configuration. We would like to understand how memory is distributed on the Broker node and which attributes govern it.

Following is our configuration:

-Xmx22g -Xms22g -XX:NewSize=7g -XX:MaxNewSize=7g -XX:MaxDirectMemorySize=20g

druid.server.http.numThreads=200

druid.broker.http.numConnections=100

druid.processing.numThreads=31

druid.processing.buffer.sizeBytes=1073741823 (~1 GB)

druid.broker.http.readTimeout=PT5M

druid.broker.balancer.type=connectionCount

druid.broker.cache.populateCache=false

druid.broker.cache.useCache=false

# set to 2 GB - 1, because setting the value to exactly 2 GB causes a runtime exception

druid.cache.sizeInBytes=2147483647

Output of free -h:

              total        used        free      shared  buff/cache   available
Mem:            29G         12G         15G        1.5G        1.8G         15G
Swap:            0B          0B          0B

What is wrong here?

NOTE: If we reduce druid.processing.buffer.sizeBytes to 100 MB, the same set of queries seems to work.
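For what it's worth, here is a back-of-the-envelope check, assuming the Historical direct-memory sizing rule also applies to the Broker and that druid.processing.numMergeBuffers is at its default of max(2, numThreads / 4) = 7 (please correct me if either assumption is wrong):

druid.processing.buffer.sizeBytes * (numThreads + numMergeBuffers + 1)
= 1073741823 * (31 + 7 + 1)
≈ 39 GB of direct memory needed, vs. -XX:MaxDirectMemorySize=20g

If that rule holds here, it might explain why the queries pass once the buffer is shrunk.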

Thanks,

Pravesh Gupta

Adobe

Hi Pravesh,

For the Broker, the sizing is the same as for Historicals, plus the additional heap needed for segment metadata. What I usually do, since this depends on the number of segments in the cluster, is wait until the JVM's used memory has stabilized, and then decide whether the Broker heap is sized correctly. The jvm/mem/used and jvm/mem/max metrics will help you there.
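If you are not already collecting those metrics, a minimal sketch of the relevant runtime.properties (assuming a recent Apache Druid release; older versions used com.metamx.metrics.JvmMonitor instead) would be:

druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.monitoring.emissionPeriod=PT1M
# emit to the service log; swap in whatever emitter you actually use (http, statsd, graphite, ...)
druid.emitter=logging
druid.emitter.logging.logLevel=info

Once jvm/mem/used has flattened out under your normal query load, compare it against jvm/mem/max and adjust -Xmx from there.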