OutOfMemory in tasks

Hi Guys,

I am seeing an OutOfMemoryError in my task logs:

log.txt (7.36 MB)

Hey Ankur,

Counter-intuitively, you may want to increase maxRowsInMemory. An OutOfMemoryError during the segment packaging / publishing stage can happen if there are too many spills created in the initial ingestion phase, which happens when maxRowsInMemory is too low. Another thing you could do is get Druid to make somewhat smaller segments, by increasing taskCount (w/ Kafka indexing) or task.partitions (w/ Tranquility).
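
For reference, those knobs live in the supervisor spec roughly like this (just a fragment, not a full spec - you'd still have the dataSchema, consumerProperties, and so on, and the values here are only placeholders):

{
  "type": "kafka",
  "ioConfig": {
    "topic": "your_topic",
    "taskCount": 25,
    "taskDuration": "PT1H"
  },
  "tuningConfig": {
    "type": "kafka",
    "maxRowsInMemory": 500000
  }
}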

Gian

Hey Gian,

Thanks for the reply. Really appreciate it.

I tried multiple things, but none of them seems to work. Here is what I tried:

  1. I set maxRowsInMemory really high, first to 200k and then to 500k, but it still results in an OOM. I set the number of partitions to 25.

I see ‘Terminating due to java.lang.OutOfMemoryError: Java heap space’ as the last line of the log. I do not see any other error in the log. The log is attached.

  2. I moved maxRowsInMemory to the opposite side of the spectrum, down to 25k. Still the same problem.

The stream produces about 100M records per hour and is ~15-20 GB in size. I thought 25 partitions should be enough.

I also checked the task directory: each split is about 120 MB, and there are 7 splits in total. This is with maxRowsInMemory set to 500k.
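
If I'm doing the math right (assuming the 25 partitions map to 25 reading tasks and the load is spread evenly):

  100M rows/hour / 25 tasks ≈ 4M rows per task per hour
  4M rows / 500k maxRowsInMemory ≈ 8 spills per hour, which roughly matches the 7 splits I see
  7 splits x ~120 MB ≈ 840 MB of spilled data per task to merge at publish time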

Thanks

Ankur

log.txt (547 KB)

Hi Ankur,

What was the JVM max memory for the peon?

Also, maxRowsInMemory looks too low. I think you can increase it to 1M or so.

Jihoon

Hi Jihoon,

It’s 3 GB.

Here is the peon JVM command: java -server -Xmx3g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+ExitOnOutOfMemoryError

Thanks

Ankur

Thanks Ankur,

Per your description, each segment can be around 1 GB at the peak (7 spills x ~120 MB is already over 800 MB). If that is the physical size on disk, it can be bigger in memory. If that is the case, 3 GB looks too low to me.

Can you try 4-6 GB?
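
If your peon flags come from the MiddleManager's runtime.properties (the usual setup - adjust if you configure them elsewhere), the change would be something like this for a 6 GB heap:

druid.indexer.runner.javaOptsArray=["-server","-Xmx6g","-XX:+UseG1GC","-XX:MaxGCPauseMillis=100","-XX:+PrintGCDetails","-XX:+PrintGCTimeStamps","-XX:+ExitOnOutOfMemoryError"]

(Older versions only have the druid.indexer.runner.javaOpts string form; either way, the part to bump is -Xmx.)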

Jihoon

Thanks, Jihoon. I will try it out.
I have other tasks (for a different datasource) in the same cluster which have a segment size of 1 GB, and they do work with the same config.

Thanks