Beam defunct and all messages dropped

I am getting these kinds of errors in Tranquility, and it appears that all messages are being dropped. The coordinator console shows that both the 'index_realtime_usersync_2017-02-08T23:00:00.000Z_0_0' and 'index_realtime_usersync_2017-02-09T00:00:00.000Z_0_0' tasks have failed. What could be wrong here? I appreciate your help.

c.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2017-02-09T00:26:11.434Z","service":"tranquility","host":"localhost","severity":"anomaly","description":"Beam defunct: druid:overlord/usersync","data":{"exceptionType":"com.metamx.tranquility.beam.DefunctBeamException","exceptionStackTrace":"com.metamx.tranquility.beam.DefunctBeamException: Tasks are all gone: index_realtime_usersync_2017-02-08T23:00:00.000Z_0_0\n\tat com.metamx.tra
at com.twitter.util.Promise$Transformer.apply(Promise.scala:122) ~[com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Promise$Transformer.apply(Promise.scala:103) ~[com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Promise$$anon$1.run(Promise.scala:366) ~[com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Promise.runq(Promise.scala:350) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Promise.updateIfEmpty(Promise.scala:726) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Promise.link(Promise.scala:793) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Promise.become(Promise.scala:658) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.metamx.tranquility.finagle.FutureRetry$$anonfun$onErrors$1$$anonfun$applyOrElse$2$$anonfun$apply$1.apply$mcV$sp(FutureRetry.scala:62) [io.druid.tranquility-core-0.8.3-SNAPSHOT.jar:0.8.3-SNAPSHOT]
at com.twitter.util.Monitor$$anonfun$apply$1.apply$mcV$sp(Monitor.scala:38) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Monitor$$anonfun$apply$1.apply(Monitor.scala:38) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Monitor$$anonfun$apply$1.apply(Monitor.scala:38) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Monitor$$anonfun$using$1.apply(Monitor.scala:110) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Monitor$.restoring(Monitor.scala:117) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Monitor$.using(Monitor.scala:108) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Monitor$class.apply(Monitor.scala:37) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.NullMonitor$.apply(Monitor.scala:167) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Timer$$anonfun$schedule$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(Timer.scala:33) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Timer$$anonfun$schedule$1$$anonfun$apply$mcV$sp$1.apply(Timer.scala:33) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Timer$$anonfun$schedule$1$$anonfun$apply$mcV$sp$1.apply(Timer.scala:33) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Local$.let(Local.scala:71) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.util.Timer$$anonfun$schedule$1.apply$mcV$sp(Timer.scala:33) [com.twitter.util-core_2.10-6.30.0.jar:6.30.0]
at com.twitter.finagle.util.HashedWheelTimer$$anon$3.run(HashedWheelTimer.scala:16) [com.twitter.finagle-core_2.10-6.31.0.jar:6.31.0]
at org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:556) [io.netty.netty-3.10.5.Final.jar:na]
at org.jboss.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:632) [io.netty.netty-3.10.5.Final.jar:na]
at org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:369) [io.netty.netty-3.10.5.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [io.netty.netty-3.10.5.Final.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.7.0_121]
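
For context: a DefunctBeamException with "Tasks are all gone" generally means Tranquility discovered that every indexing task backing its beam has exited, so it marks the beam defunct and, with reportDropsAsExceptions set to false (as in the config below), silently drops messages until new tasks come up. The real question is why the tasks themselves failed, which the task logs should answer. A minimal sketch for pulling a failed task's status and log from the Overlord HTTP API, assuming an Overlord at localhost:8090 and one of the failed task ids (adjust both for your cluster):

import urllib.parse
import urllib.request

OVERLORD = "http://localhost:8090"  # assumption: your Overlord host:port
TASK_ID = "index_realtime_usersync_2017-02-08T23:00:00.000Z_0_0"

def fetch(path):
    # Plain GET against the Overlord HTTP API, returning the body as text.
    with urllib.request.urlopen(OVERLORD + path) as resp:
        return resp.read().decode("utf-8")

task = urllib.parse.quote(TASK_ID)
# Task status: RUNNING / SUCCESS / FAILED plus some metadata.
print(fetch("/druid/indexer/v1/task/" + task + "/status"))
# Full task log; the exception near the end usually says why the task died.
print(fetch("/druid/indexer/v1/task/" + task + "/log"))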

Tranquility Kafka config (JSON):

{
  "dataSources" : {
    "usersync" : {
      "spec" : {
        "dataSchema" : {
          "dataSource" : "usersync",
          "parser" : {
            "type" : "smile",
            "parseSpec" : {
              "timestampSpec" : {
                "column" : "originTimestamp",
                "format" : "auto"
              },
              "dimensionsSpec" : {
                "dimensions" : ["messageType", …]
              },
              "format" : "json"
            }
          },
          "granularitySpec" : {
            "type" : "uniform",
            "segmentGranularity" : "hour",
            "queryGranularity" : "none"
          },
          "metricsSpec" : [
            {
              "type" : "count",
              "name" : "count"
            }
          ]
        },
        "ioConfig" : {
          "type" : "realtime"
        },
        "tuningConfig" : {
          "type" : "realtime",
          "maxRowsInMemory" : "200000",
          "intermediatePersistPeriod" : "PT10M",
          "windowPeriod" : "PT30M",
          "rejectionPolicy" : {
            "type" : "serverTime"
          }
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1",
        "topicPattern" : "topic"
      }
    }
  },
  "properties" : {
    "zookeeper.connect" : "zookeeperip:2181",
    "druid.discovery.curator.path" : "/druid/discovery",
    "druid.selectors.indexing.serviceName" : "druid/overlord",
    "commit.periodMillis" : "15000",
    "consumer.numThreads" : "4",
    "kafka.zookeeper.connect" : "ip:2181",
    "kafka.group.id" : "tranquility-usersync",
    "reportDropsAsExceptions" : "false"
  }
}
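
One thing worth double-checking in this config: with windowPeriod PT30M and the serverTime rejection policy, events are only accepted when their originTimestamp is close enough to the current server time; anything outside the window is dropped, and with reportDropsAsExceptions false the drops are silent. Replaying an old Kafka backlog, a lagging consumer, or a skewed clock can therefore make every message disappear. A rough sketch of that acceptance check, derived from the config values above rather than from Tranquility's actual source:

from datetime import datetime, timedelta, timezone

WINDOW_PERIOD = timedelta(minutes=30)  # "windowPeriod" : "PT30M" above

def would_accept(event_ts, now):
    # Approximation of serverTime rejection: accept only if the event
    # timestamp is within windowPeriod of the server clock, either side.
    return abs(now - event_ts) <= WINDOW_PERIOD

now = datetime.now(timezone.utc)
print(would_accept(now, now))                       # True: fresh event
print(would_accept(now - timedelta(hours=2), now))  # False: silently dropped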

Never mind, it was a problem creating the indexing tasks.
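
For anyone who hits the same symptom: when the Overlord cannot create the indexing tasks, a frequent cause is that no middleManagers are registered or no worker capacity is free, so new realtime tasks never start. A quick capacity check against the Overlord API (again assuming localhost:8090; adjust for your setup):

import json
import urllib.request

OVERLORD = "http://localhost:8090"  # assumption: your Overlord host:port

# Workers (middleManagers) registered with the Overlord, with capacity info.
with urllib.request.urlopen(OVERLORD + "/druid/indexer/v1/workers") as resp:
    workers = json.load(resp)

if not workers:
    print("No workers registered: realtime tasks cannot be created.")
for w in workers:
    print(w["worker"]["host"],
          "capacity:", w["worker"]["capacity"],
          "used:", w["currCapacityUsed"])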

Hello,

I am facing a similar issue with no resolution yet. Were you able to resolve this?