Druid 0.9.2-rc1 - exception in tasks

Hi all,

I’ve been using druid-0.9.2-rc1 to ingest data from Kafka using the Kafka indexing service.

Although data is being ingested by Druid, I don’t see any segments being stored in deep storage (I’m using HDFS as deep storage).

I found the exceptions pasted below in the task logs.

Questions:

(a) Are these exceptions the reason segments are not being stored in HDFS? (My guess is yes - I wanted to confirm it, though.)

(b) Are these due to known issues in druid-0.9.2-rc1, or have I missed something in my configuration?

Exception:

2016-11-07T23:08:13,534 WARN [appenderator_merge_0] io.druid.segment.realtime.appenderator.AppenderatorImpl - Failed to push merged index for segment[conn_recs_2016-11-07T23:00:00.000Z_2016-11-07T23:15:00.000Z_2016-11-07T23:04:31.605Z_1].
java.io.IOException: Failed to rename temp directory[hdfs://172.31.8.224:54310/druid/segments/4567316111594cf09901b42ea8b22b40/index.zip] and segment directory[hdfs://172.31.8.224:54310/druid/segments/conn_recs/20161107T230000.000Z_20161107T231500.000Z/2016-11-07T23_04_31.605Z/1] is not present.
at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:125) ~[?:?]
at io.druid.segment.realtime.appenderator.AppenderatorImpl.mergeAndPush(AppenderatorImpl.java:571) [druid-server-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.segment.realtime.appenderator.AppenderatorImpl.access$600(AppenderatorImpl.java:93) [druid-server-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.segment.realtime.appenderator.AppenderatorImpl$3.apply(AppenderatorImpl.java:467) [druid-server-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.segment.realtime.appenderator.AppenderatorImpl$3.apply(AppenderatorImpl.java:455) [druid-server-0.9.2-rc1.jar:0.9.2-rc1]
at com.google.common.util.concurrent.Futures$1.apply(Futures.java:713) [guava-16.0.1.jar:?]
at com.google.common.util.concurrent.Futures$ChainingListenableFuture.run(Futures.java:861) [guava-16.0.1.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
2016-11-07T23:08:13,536 WARN [task-runner-0-priority-0] io.druid.segment.realtime.appenderator.FiniteAppenderatorDriver - Failed publishAll (try 6), retrying in 83,020ms.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.io.IOException: Failed to rename temp directory[hdfs://172.31.8.224:54310/druid/segments/4567316111594cf09901b42ea8b22b40/index.zip] and segment directory[hdfs://172.31.8.224:54310/druid/segments/conn_recs/20161107T230000.000Z_20161107T231500.000Z/2016-11-07T23_04_31.605Z/1] is not present.
at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:299) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:286) ~[guava-16.0.1.jar:?]
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116) ~[guava-16.0.1.jar:?]
at io.druid.segment.realtime.appenderator.FiniteAppenderatorDriver.publishAll(FiniteAppenderatorDriver.java:417) [druid-server-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.segment.realtime.appenderator.FiniteAppenderatorDriver.finish(FiniteAppenderatorDriver.java:256) [druid-server-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.indexing.kafka.KafkaIndexTask.run(KafkaIndexTask.java:503) [druid-kafka-indexing-service-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2-rc1.jar:0.9.2-rc1]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2-rc1.jar:0.9.2-rc1]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_101]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_101]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]

Thanks,

Jithin

There was a fix to the HDFS pusher in 0.9.2-rc2 - are you using that, or the older 0.9.2-rc1? If this happens on 0.9.2-rc2, could you please attach more context on the failure (fuller task logs, including any other exceptions that occurred)?

Hi Gian,

I’m using the 0.9.2-rc1 version of Druid - I found druid-hdfs-storage-0.9.2-rc1.jar in the druid-hdfs-storage extension folder, hence the conclusion.

I noticed that rc2 has been released. Is 0.9.2-rc2 expected to fix this issue?

Thanks,

Jithin

I’m not sure whether rc2 fixes this issue in particular, but it did include patches to the temp-file handling in the HDFS pusher.
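For context, the "Failed to rename temp directory" error is consistent with filesystem rename semantics: a rename fails if the destination's parent directory does not exist yet. Below is a minimal sketch of that pattern using the local filesystem as a stand-in for HDFS; the helper push_segment and all paths are hypothetical illustrations, not Druid's actual code.

```python
import os
import tempfile

def push_segment(src, dest):
    """Move src to dest, creating dest's parent directories first.

    A plain rename fails when the destination's parent directory is
    missing, so create the parents up front before renaming.
    (Hypothetical helper for illustration only.)
    """
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    os.rename(src, dest)

# Demonstrate with a temp file standing in for the merged index.zip:
base = tempfile.mkdtemp()
src = os.path.join(base, "tmp_index.zip")
open(src, "w").close()

# Destination under parent directories that do not exist yet:
dest = os.path.join(base, "conn_recs", "segment_1", "index.zip")
try:
    os.rename(src, dest)       # parent dirs missing -> rename fails
    moved_directly = True
except FileNotFoundError:
    moved_directly = False

push_segment(src, dest)        # succeeds once parents are created
```

The same shape of failure can show up with HDFS's FileSystem.rename, which is why creating the destination's parent directory before the rename is the usual defensive pattern.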

Upgrading to 0.9.2-rc2 fixed the issue!

Thanks for the help,

Jithin