NoBrokersAvailableException ("No hosts are available") in Tranquility Kafka log

After posting the data to Kafka, Druid throws the exception below while consuming it.

2016-09-26 21:18:14,933 [Hashed wheel timer #1] INFO c.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2016-09-26T21:18:14.932Z","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:overlord/test-prachi","data":{"exceptionType":"com.twitter.finagle.NoBrokersAvailableException","exceptionStackTrace":"com.twitter.finagle.NoBrokersAvailableException: No hosts are available for disco!firehose:druid:overlord:test-prachi-020-0000-0001, Dtab.base=, Dtab.local=\n\tat com.twitter.finagle.NoStacktrace(Unknown Source)\n","timestamp":"2016-09-26T20:00:00.000Z","beams":"MergingPartitioningBeam(DruidBeam(interval = 2016-09-26T20:00:00.000Z/2016-09-26T21:00:00.000Z, partition = 0, tasks = [index_realtime_test-prachi_2016-09-26T20:00:00.000Z_0_0/test-prachi-020-0000-0000; index_realtime_test-prachi_2016-09-26T20:00:00.000Z_0_1/test-prachi-020-0000-0001]), DruidBeam(interval = 2016-09-26T20:00:00.000Z/2016-09-26T21:00:00.000Z, partition = 1, tasks = [index_realtime_test-prachi_2016-09-26T20:00:00.000Z_1_0/test-prachi-020-0001-0000; index_realtime_test-prachi_2016-09-26T20:00:00.000Z_1_1/test-prachi-020-0001-0001]))","eventCount":2000,"exceptionMessage":"No hosts are available for disco!firehose:druid:overlord:test-prachi-020-0000-0001, Dtab.base=, Dtab.local="}}]

2016-09-26 21:18:22,631 [Hashed wheel timer #1] WARN c.m.tranquility.beam.ClusteredBeam - Emitting alert: [anomaly] Failed to propagate events: druid:overlord/test-prachi

{

"eventCount" : 2000,

"timestamp" : "2016-09-26T20:00:00.000Z",

"beams" : "MergingPartitioningBeam(DruidBeam(interval = 2016-09-26T20:00:00.000Z/2016-09-26T21:00:00.000Z, partition = 0, tasks = [index_realtime_test-prachi_2016-09-26T20:00:00.000Z_0_0/test-prachi-020-0000-0000; index_realtime_test-prachi_2016-09-26T20:00:00.000Z_0_1/test-prachi-020-0000-0001]), DruidBeam(interval = 2016-09-26T20:00:00.000Z/2016-09-26T21:00:00.000Z, partition = 1, tasks = [index_realtime_test-prachi_2016-09-26T20:00:00.000Z_1_0/test-prachi-020-0001-0000; index_realtime_test-prachi_2016-09-26T20:00:00.000Z_1_1/test-prachi-020-0001-0001]))"

}

com.twitter.finagle.NoBrokersAvailableException: No hosts are available for disco!firehose:druid:overlord:test-prachi-020-0000-0001, Dtab.base=, Dtab.local=

at com.twitter.finagle.NoStacktrace(Unknown Source) ~[na:na]

This exception is generally thrown when the indexing task could not start on time.
I have observed it only at segment-granularity boundaries (i.e., if the segment granularity is HOUR, then at the start of each hour).
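One way to soften the boundary effect is to have Tranquility create the indexing tasks a little before the segment boundary. A minimal sketch of the `properties` section of a Tranquility Kafka config (the `PT5M` value here is an illustrative assumption, not a recommendation):

```json
{
  "properties": {
    "kafka.zookeeper.connect": "localhost:2181",
    "zookeeper.connect": "localhost:2181",
    "task.warmingPeriod": "PT5M"
  }
}
```

With a warming period set, Tranquility asks the Overlord to start tasks ahead of the interval they will handle, so the firehose is more likely to be discoverable when the first events for the new segment arrive.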

Can you verify whether that is the case for you as well?

Also, does this exception eventually go away?

Another cause of this exception could be insufficient capacity on the MiddleManagers to run all the indexing tasks.
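You can check this against your MiddleManager's worker capacity. Each Tranquility partition × replicant needs one task slot; your log shows 2 partitions × 2 replicas, i.e., at least 4 concurrent realtime tasks per hour (more around boundaries, when old and new tasks overlap). A sketch of the relevant setting, with illustrative values:

```properties
# middleManager/runtime.properties
# Total task slots on this MiddleManager. With 2 partitions x 2 replicants,
# fewer than 4 free slots cluster-wide means some tasks queue instead of starting.
druid.worker.capacity=4
```

You can also compare pending vs. running tasks in the Overlord console to confirm tasks are waiting for slots.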

PS: In my case, I have not observed any data loss due to this exception.