Tranquility fails to create a new realtime index task every other segmentGranularity period


I have set up a single server for a PoC to test Druid, using the default configurations from the quickstart. Tranquility Kafka is set up: it consumes from a Kafka topic and writes to Druid. segmentGranularity is "hour". It works for one hour, then fails to create the realtime index task for the next hour, then succeeds in creating the realtime index task again. In summary: it works for one hour, then fails to create the index task and stops, then works again in the next hour.

What am I missing in the settings?

Below is the Tranquility Kafka log from the creation of the new realtime index task. (The number of records per hour is ~1.2M.)

"exceptionMessage":"Tasks are all gone: index_realtime_igwkafka_2016-08-27T09:00:00.000Z_0_0"}}]

2016-08-27 09:02:36,193 [Hashed wheel timer #1] WARN c.m.tranquility.druid.TaskClient - Emitting alert: [anomaly] Loss of Druid redundancy: igwkafka


"dataSource" : "igwkafka",

"task" : "index_realtime_igwkafka_2016-08-27T09:00:00.000Z_0_0",

"status" : "TaskFailed"


2016-08-27 09:02:36,194 [Hashed wheel timer #1] INFO c.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2016-08-27T09:02:36.194Z","service":"tranquility","host":"localhost","severity":"anomaly","description":"Loss of Druid redundancy: igwkafka","data":{"dataSource":"igwkafka","task":"index_realtime_igwkafka_2016-08-27T09:00:00.000Z_0_0","status":"TaskFailed"}}]

2016-08-27 09:02:36,196 [Hashed wheel timer #1] WARN c.m.tranquility.beam.ClusteredBeam - Emitting alert: [anomaly] Beam defunct: druid:overlord/igwkafka


"eventCount" : 1,

"timestamp" : "2016-08-27T09:00:00.000Z",

"beam" : "MergingPartitioningBeam(DruidBeam(interval = 2016-08-27T09:00:00.000Z/2016-08-27T10:00:00.000Z, partition = 0, tasks = [index_realtime_igwkafka_2016-08-27T09:00:00.000Z_0_0/igwkafka-009-0000-0000]))"


com.metamx.tranquility.beam.DefunctBeamException: Tasks are all gone: index_realtime_igwkafka_2016-08-27T09:00:00.000Z_0_0

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6$$anonfun$apply$6.apply(DruidBeam.scala:115) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6$$anonfun$apply$6.apply(DruidBeam.scala:115) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at scala.Option.getOrElse(Option.scala:121) ~[org.scala-lang.scala-library-2.11.7.jar:na]

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6.apply(DruidBeam.scala:112) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6.apply(DruidBeam.scala:109) ~[io.dru

First, make sure you have enough capacity on your MiddleManagers to run the tasks for consecutive hours in parallel,
i.e. number of MiddleManagers * druid.worker.capacity > 2 * (partitions + replication factor).

Also, when you see DefunctBeamExceptions, check the Overlord console for any tasks that FAILED; the task logs will have more info on the root cause of the failure.
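The rule of thumb above can be written as a quick capacity check. This is a sketch of that arithmetic only (the function names and example numbers are illustrative, not from the original setup):

```python
def required_slots(partitions, replication):
    """Worker slots needed per the rule of thumb above: during a
    segmentGranularity rollover, tasks for two consecutive periods
    run in parallel, hence the factor of 2."""
    return 2 * (partitions + replication)


def has_capacity(num_middlemanagers, worker_capacity, partitions, replication):
    """True if the cluster can run the overlapping realtime tasks."""
    return num_middlemanagers * worker_capacity > required_slots(partitions, replication)


# Hypothetical single-MiddleManager setup: worker capacity 3,
# one partition, one replica -> 3 > 2 * (1 + 1) is False,
# so the rollover task has nowhere to run.
print(has_capacity(1, 3, 1, 1))
```

If this check fails, either raise druid.worker.capacity or add MiddleManagers.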



I saw an out-of-memory error in the MiddleManager peons when the realtime index task was being created.
I increased the memory settings and reduced segmentGranularity to 15 minutes, but it still works for one segment and fails for the others.
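For reference, peon heap is controlled from the MiddleManager's runtime.properties via druid.indexer.runner.javaOpts. A sketch of the relevant lines (the values are examples, not the settings actually used in this thread):

```properties
# MiddleManager runtime.properties (example values)
druid.worker.capacity=4
# JVM options passed to every peon it spawns; raise -Xmx if peons hit OOM
druid.indexer.runner.javaOpts=-server -Xmx2g -XX:MaxDirectMemorySize=4g
```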

In the end, I gave up on Tranquility Kafka.

I used the Kafka indexing service instead, which works without problems using the default settings, which means the MiddleManager settings were correct.
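For anyone landing here: the Kafka indexing service is driven by a supervisor spec POSTed to the Overlord. A minimal sketch is below; the datasource and topic names are reused from this thread, but the broker address, schema, and tuning values are placeholder assumptions, not a tested spec:

```json
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "igwkafka",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": { "column": "timestamp", "format": "auto" },
        "dimensionsSpec": { "dimensions": [] }
      }
    },
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "HOUR",
      "queryGranularity": "NONE"
    },
    "metricsSpec": [ { "type": "count", "name": "count" } ]
  },
  "tuningConfig": { "type": "kafka" },
  "ioConfig": {
    "topic": "igwkafka",
    "consumerProperties": { "bootstrap.servers": "localhost:9092" },
    "taskCount": 1,
    "replicas": 1,
    "taskDuration": "PT1H"
  }
}
```

Unlike Tranquility, the supervisor manages task handoff across segment boundaries itself, which is why the hourly rollover failure does not occur.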