Could not allocate segment for row with timestamp[xxxxxxx]

I found that one Kafka indexing task stopped with a "Could not allocate segment for row with timestamp[xxxxxxx]" exception, and the task stayed failed until the next task started.

2017-08-08T19:17:51,600 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[KafkaIndexTask{id=index_kafka_datainfra_hadoop_hdfs_editlog_572744800ba8968_fpkhcgpi, type=index_kafka, dataSource=datainfra_hadoop_hdfs_editlog}] Could not allocate segment for row with timestamp[2017-08-08T10:59:53.000Z]
	at ... ~[?:?]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$... [druid-indexing-service-0.10.0-101.jar:0.10.0-101]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$... [druid-indexing-service-0.10.0-101.jar:0.10.0-101]
	at ... [?:1.8.0_77]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(...) [?:1.8.0_77]
	at java.util.concurrent.ThreadPoolExecutor$... [?:1.8.0_77]
	at ... [?:1.8.0_77]
2017-08-08T19:17:51,605 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_datainfra_hadoop_hdfs_editlog_572744800ba8968_fpkhcgpi] status changed to [FAILED].
2017-08-08T19:17:51,607 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_datainfra_hadoop_hdfs_editlog_572744800ba8968_fpkhcgpi",
  "status" : "FAILED",
  "duration" : 304476

Does anybody know what the problem is?

There is a bug in 0.10.0 that could potentially cause this, which is fixed in 0.10.1. You can find a release candidate here: and expect a final release soon.

Hi, Gian

This bug only happens occasionally, and the next task then succeeds. Under what conditions is the bug triggered?

On Wednesday, August 9, 2017 at 2:29:06 AM UTC+8, Gian Merlino wrote: