Druid cluster issues

Hi,

We have successfully set up a 2-node cluster. We are using Druid 0.8.1-rc2 and Tranquility to submit data.

We are struggling with the following issues.

  1. Event loss: only 94%–96% of events are recorded, and we have observed loss in almost every iteration. We concluded this by comparing the input count with the number reported by the count aggregator.

  2. Each task takes an additional 5–8 minutes to complete after the window period is over. How can we reduce this interval? See the attached log, and please advise if you see anything odd with the task spec.

  3. Even after setting -XX:MaxDirectMemorySize=5g “everywhere”, peon tasks error out with:

Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3,491,758,080], memoryNeeded[4,900,000,000] = druid.processing.buffer.sizeBytes[700,000,000] * ( druid.processing.numThreads[6] + 1 )

The command that gets fired is attached.

  4. Some of the tasks end successfully but show the following exception near the end of the attached log:

  java.lang.IllegalStateException: instance must be started before calling this method
    at com.google.common.base.Preconditions.checkState(Preconditions.java:176) ~[guava-16.0.1.jar:?]
    at org.apache.curator.framework.imps.CuratorFrameworkImpl.delete(CuratorFrameworkImpl.java:347) ~[curator-framework-2.8.0.jar:?]
    at org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.internalUnregisterService(ServiceDiscoveryImpl.java:505) ~[curator-x-discovery-2.8.0.jar:?]
    at org.apache.curator.x.discovery.details.ServiceDiscoveryImpl.close(ServiceDiscoveryImpl.java:155) [curator-x-discovery-2.8.0.jar:?]
    at io.druid.curator.discovery.DiscoveryModule$5.stop(DiscoveryModule.java:222) [druid-server-0.8.1-rc2.jar:0.8.1-rc2]
    at com.metamx.common.lifecycle.Lifecycle.stop(Lifecycle.java:267) [java-util-0.27.0.jar:?]
    at io.druid.cli.CliPeon$2.run(CliPeon.java:220) [druid-services-0.8.1-rc2.jar:0.8.1-rc2]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_60]
  5. At one point, we observed the timeseries query reporting a higher count, which then decreased and stabilized at a lower count than the original as the other tasks finished. We noticed this because we were continuously submitting the timeseries query to get a running count.

Please advise.

Thanks,

Dave

task.log (36.5 KB)

peon.txt (6.14 KB)

Anyone?

Hey Dave,

For #1, first make sure you’re counting events properly. Druid does partial aggregation (“rollup”) at ingestion time, and so it can store somewhat fewer rows than your number of input messages. See “Counting the number of ingested events” on: http://druid.io/docs/latest/ingestion/schema-design.html. If you’re counting events properly, then your loss may be due to late data older than your “windowPeriod”.
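As a concrete sketch (assuming your ingestion spec defines an ingestion-time count metric named "count"; adjust the names to match your schema), the ingested-event count should come from a longSum over that metric at query time, not from a query-time count aggregator:

Ingestion-time metric, in metricsSpec:
{ "type" : "count", "name" : "count" }

Query-time aggregator that counts ingested events rather than Druid rows:
{ "type" : "longSum", "name" : "ingestedEvents", "fieldName" : "count" }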

For #2, it’s normal for tasks to need a few minutes for cleanup. After the windowPeriod ends, tasks must build the final version of the segment, upload it to deep storage, and wait for a historical node to load it.

For #3, on your middleManagers you’ll also need to add that flag to druid.indexer.runner.javaOpts. These are the JVM args that peons get launched with. For example:

druid.indexer.runner.javaOpts=-server -Xmx3g -XX:MaxDirectMemorySize=5g
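For reference, the requirement in your error message works out to 700,000,000 bytes * (6 + 1) = 4,900,000,000 bytes (about 4.6 GiB), so -XX:MaxDirectMemorySize=5g is enough; the failing peon only had roughly 3.5 GB because the flag you set “everywhere” wasn’t reaching the forked peon JVMs. If you’d rather shrink the requirement instead, a sketch of the alternative (the values here are placeholders, size them for your hardware) is to pass smaller processing settings down to the peons from the middleManager:

druid.indexer.fork.property.druid.processing.buffer.sizeBytes=300000000
druid.indexer.fork.property.druid.processing.numThreads=2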

For #5, this is probably related to #1. The “count” aggregator is actually counting the number of Druid rows. It’s normal for this to shrink after the final segment build, since the realtime task compacts partial segments into a final segment.
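If it helps, here’s a sketch of a timeseries query that reports both numbers side by side (the dataSource, interval, and output names are placeholders, and it assumes the ingestion-time count metric is named "count"):

{
  "queryType" : "timeseries",
  "dataSource" : "your_datasource",
  "granularity" : "all",
  "intervals" : [ "2015-09-01/2015-09-02" ],
  "aggregations" : [
    { "type" : "count", "name" : "druidRows" },
    { "type" : "longSum", "name" : "ingestedEvents", "fieldName" : "count" }
  ]
}

"druidRows" can shrink as partial segments are merged and handed off; "ingestedEvents" should stay stable.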