Tranquility stuck with Spark Streaming.

Hi,

I am using the Druid quick start for my development purposes, running all Druid services locally. I am using Spark Streaming and Tranquility to send realtime events to Druid. On a fresh start I am able to send data to Druid, but when I restart the Spark Streaming job and try to send an event to Druid, it gets stuck with the following log:

16/07/11 12:21:34 TRACE Tranquilizer: Sending message: CandidateMovementAggregated(2016-07-11T06:51:32.000Z,6,2016-07-11,P_14eDCF4pe9,14eDCDXVmJ,P_14eOnNdNc8,INT,ACT,OFF,ACT,HT,P_6fDich9NJ,10)

16/07/11 12:21:34 DEBUG Tranquilizer: Swapping out buffer with 1 messages, 1 batches now pending.

16/07/11 12:21:34 DEBUG Tranquilizer: Sending buffer with 1 messages.

16/07/11 12:21:34 DEBUG Tranquilizer: Flushing 1 batches.

16/07/11 12:21:34 INFO ClusteredBeam: Merged beam already created for identifier[druid:overlord/candidateMovement] timestamp[2016-07-11T00:00:00.000Z], with sufficient partitions (target = 1, actual = 1)

16/07/11 12:21:34 INFO ClusteredBeam: Adding beams for identifier[druid:overlord/candidateMovement] timestamp[2016-07-11T00:00:00.000Z]: List(Map(interval -> 2016-07-11T00:00:00.000Z/2016-07-12T00:00:00.000Z, partition -> 0, tasks -> List(Map(id -> index_realtime_candidateMovement_2016-07-11T00:00:00.000Z_0_0, firehoseId -> candidateMovement-011-0000-0000)), timestamp -> 2016-07-11T00:00:00.000Z))

I have attached my code for creating the Tranquility beam.

Please recommend what the problem could be.

Beam.scala (2.16 KB)
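For readers without the attachment: a Tranquility beam for Spark is typically built through a `BeamFactory`, roughly as in the sketch below. This follows the pattern from the Tranquility Spark documentation for approximately these versions (Tranquility 0.8.x, Druid 0.9.x); the event class, dimension, aggregator, and ZooKeeper address are placeholders, not taken from the attached Beam.scala, which may differ.

```scala
// Hedged sketch of a Tranquility BeamFactory for Spark Streaming.
// All names marked "placeholder" are illustrative assumptions.
import com.metamx.common.Granularity
import com.metamx.tranquility.beam.{Beam, ClusteredBeamTuning}
import com.metamx.tranquility.druid.{DruidBeams, DruidLocation, DruidRollup, SpecificDruidDimensions}
import com.metamx.tranquility.spark.BeamFactory
import io.druid.granularity.QueryGranularities // QueryGranularity in some older Druid versions
import io.druid.query.aggregation.LongSumAggregatorFactory
import org.apache.curator.framework.CuratorFrameworkFactory
import org.apache.curator.retry.BoundedExponentialBackoffRetry
import org.joda.time.{DateTime, Period}

// Placeholder event type; the real one carries the fields seen in the TRACE log.
case class CandidateMovementAggregated(timestamp: DateTime)

class CandidateMovementBeamFactory extends BeamFactory[CandidateMovementAggregated] {
  // makeBeam is invoked on each executor; the beam should be a long-lived
  // singleton so that restarts reuse existing tasks instead of creating new ones.
  def makeBeam: Beam[CandidateMovementAggregated] = CandidateMovementBeamFactory.BeamInstance
}

object CandidateMovementBeamFactory {
  lazy val BeamInstance: Beam[CandidateMovementAggregated] = {
    // Tranquility coordinates beams and indexing tasks through ZooKeeper (via Curator).
    val curator = CuratorFrameworkFactory.newClient(
      "localhost:2181", // placeholder ZooKeeper connect string
      new BoundedExponentialBackoffRetry(100, 3000, 5)
    )
    curator.start()

    DruidBeams
      .builder((event: CandidateMovementAggregated) => event.timestamp)
      .curator(curator)
      .discoveryPath("/druid/discovery") // overlord's druid.discovery.curator.path
      .location(DruidLocation.create("druid:overlord", "candidateMovement")) // matches the log's identifier
      .rollup(DruidRollup(
        SpecificDruidDimensions(IndexedSeq("someDimension")),  // placeholder dimension
        Seq(new LongSumAggregatorFactory("count", "count")),   // placeholder aggregator
        QueryGranularities.MINUTE
      ))
      .tuning(ClusteredBeamTuning(
        segmentGranularity = Granularity.DAY, // matches the daily interval in the log
        windowPeriod = new Period("PT10M"),
        partitions = 1,
        replicants = 1
      ))
      .buildBeam()
  }
}
```

The DStream would then be wired up with something like `rdd.propagate(new CandidateMovementBeamFactory)` inside `foreachRDD`, after `import com.metamx.tranquility.spark.BeamRDD._`. This sketch is not runnable standalone: it needs a live ZooKeeper and Druid overlord plus the Tranquility, Druid, and Curator jars on the classpath.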

Hey Anshul,

Do you see any additional errors after waiting a few minutes for retries to happen?

Hi Gian,

No, there is no other error; it just remains stuck. There is also a realtime task for the same segment showing as running on the Druid console, and that task never completes. And if I submit any other tasks, they stay in the waiting-for-lock state.

I am using Druid 0.9.0, Tranquility 0.8.1, and Spark 1.6.0 with Scala 2.10.6.

Hi Anshul, can you attach the task log as well?

Hi Fangjin,

PFA the task logs. I have only seen this issue when the merged beam has already been created for the segment. On a fresh start everything works fine, but on restart, i.e. when the beam is already created, my Spark Streaming job gets stuck every time.

log (129 KB)