Long GC pauses

Hi,

We’re seeing long GC pauses on the historical nodes, which cause ZooKeeper disconnection (Client session timed out, have not heard from server) and coordinator/historical disconnection.

2019-02-14T00:55:43.309+0000: 3032761.969: [Full GC (Allocation Failure) 31G->26G(32G), 54.4008452 secs]

[Eden: 0.0B(2592.0M)->0.0B(2592.0M) Survivors: 0.0B->0.0B Heap: 31.9G(32.0G)->26.3G(32.0G)], [Metaspace: 95580K->95535K(98304K)]

[Times: user=87.03 sys=0.00, real=54.39 secs]
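
(For context, a 54-second stop-the-world pause is well past the ZooKeeper session timeout, so the disconnects themselves are expected. If I remember correctly the timeout is controlled in the common runtime properties by something like

druid.zk.service.sessionTimeoutMs=30000

with 30 seconds being the default.)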

What’s strange is that this happens every day at the same time (around 1AM UTC).

Operationally, it correlates with reducing a datasource’s retention rule by a full year, but we cannot prove this is the root cause.
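
(If it’s useful for comparison, the retention rules currently in effect for the datasource can be pulled from the coordinator API, e.g. GET /druid/coordinator/v1/rules/<datasource>, assuming that endpoint matches our Druid version.)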

Can anyone offer a bit of advice on how to further debug this issue?

Thanks!

Eyal.

Hi Eyal:

Is anything else scheduled to run on your Druid cluster at 1AM, like compaction, etc.?

Thanks

What are your historical server sizes (number of CPUs and RAM)?

32 GB of heap is very large for a historical node. The general recommendation is around 250-500MB of heap per CPU core.
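For example, on a 32-core machine that rule of thumb works out to roughly 8-16GB of heap.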

Thanks,

Ben

Those are pretty large machines. We have two tiers, one with 768GB of RAM, the other with 256GB.
We’ve experienced GC pauses on both tiers.

Machines in both tiers have 24 cores (48 threads).

According to the GC log it looks like the heap is always full, so I wouldn’t expect reducing its size to improve performance.

I’m still investigating with a profiler (Flight Recorder) to see what keeps the heap full.
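
In case it helps anyone else reading, the kind of commands this boils down to (assuming Oracle JDK 8, where Flight Recorder still requires unlocking commercial features; pid and filename are placeholders) is roughly:

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder
jcmd <historical-pid> JFR.start duration=120s filename=historical.jfr

plus an occasional class histogram as a quick sanity check (note that -histo:live itself forces a full GC):

jmap -histo:live <historical-pid>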

How about enabling GC logging and looking at the big picture: allocation rate, possible spikes, memory leaks, etc.?
Something like -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution -XX:+UseGCLogFileRotation
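
For the rotation to actually kick in you also need a log file and the rotation knobs, along the lines of (path and sizes are just placeholders):

-Xloggc:/var/log/druid/historical-gc.log -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=50M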

You can use tools like https://gceasy.io/ or jClarity to analyze the full GC log.

JMC’s TLAB Allocations view shows a large lookup cache (NamespaceExtractionCacheManager, ~6GB), a smaller kafka-consumer allocation, and many other 50-100MB objects which are probably intermediate processing and buffer merges (groupBy-XXX, processing-XXX).
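
If that cache is coming from the cached-global lookups extension, one thing we might try is moving it off-heap; if I’m reading the docs right that’s something like

druid.lookup.namespace.cache.type=offHeap

which should take that ~6GB out of the Java heap entirely.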

But actually, I’m not sure if we should be confined to just 32GB, as these machines have much more RAM than that.

We will try 48GB or more to see the effect.
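
One caveat I’m aware of: as far as I know, above roughly 32GB the JVM can no longer use compressed oops, so object references double in size and a 48GB heap holds less than the raw number suggests. We’ll probably pin -Xms and -Xmx to the same value (e.g. -Xms48g -Xmx48g) and compare the GC logs before and after.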

I wonder if anyone uses such large JVM heaps on historicals?

Anyways, I’ll try that and report back!