Error: Could not allocate segment

Hello, I’m having this error when I try to start a Kafka indexing service task:

ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[KafkaIndexTask{id=index_kafka_sep-druid_e7dd9410a20390d_fknochmf, type=index_kafka, dataSource=sep-druid}]
com.metamx.common.ISE: Could not allocate segment for row with timestamp[2016-12-26T11:00:10.000-03:00]
	at ~[?:?]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ [druid-indexing-service-0.9.2.jar:0.9.2]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ [druid-indexing-service-0.9.2.jar:0.9.2]
	at [?:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_111]
	at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_111]
	at [?:1.8.0_111]
2016-12-26T17:36:09,055 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_sep-druid_e7dd9410a20390d_fknochmf] status changed to [FAILED].
2016-12-26T17:36:09,057 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_kafka_sep-druid_e7dd9410a20390d_fknochmf",
  "status" : "FAILED",
  "duration" : 5530
}

The task was working fine until I restarted all the services (coordinator, broker, middleManager, overlord, and historical).

This is my spec:

Druid version: 0.9.2


Joaquín Silva

Hi Joaquin,

This has happened to me after running a Hadoop indexing job on the same data source. Unless you are on 0.9.2 and have forceExtendableShardSpecs set in the tuningConfig of your spec, the segments produced by Hadoop jobs cannot be extended, and any data arriving for that time period will cause the Kafka indexing task to fail. If you can’t use that option, you can configure lateMessageRejectionPeriod in the ioConfig of your supervisor spec to drop messages older than ‘x’. That way you can always be sure it is safe to re-index data older than ‘x’ without breaking your real-time ingestion.
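For reference, here is a sketch of where those two settings live in a Kafka supervisor spec. The field names follow the Druid Kafka indexing service docs; the dataSource, topic, and period values below are placeholder assumptions, not from the original post:

```json
{
  "type": "kafka",
  "dataSource": "sep-druid",
  "tuningConfig": {
    "type": "kafka",
    "forceExtendableShardSpecs": true
  },
  "ioConfig": {
    "topic": "sep-druid",
    "lateMessageRejectionPeriod": "PT1H"
  }
}
```

With lateMessageRejectionPeriod set to PT1H, any event whose timestamp is more than one hour older than the task start is dropped instead of triggering a segment allocation into an already-finalized interval.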



Yes, reading through the source code:

if (existingChunks
    .stream()
    .flatMap(holder -> StreamSupport.stream(holder.getObject().spliterator(), false))
    .anyMatch(chunk -> !chunk.getObject().getShardSpec().isCompatible(shardSpecFactory.getShardSpecClass()))) {
  // All existing segments should have a compatible shardSpec with shardSpecFactory.
  return null;
}

This forces new segments to use a shardSpec compatible with the existing ones.
I’ve noticed that in our case we used a different shardSpec during reindexing,
which caused the problem.

Make sure you use the same shardSpec for your Kafka ingestion and your reindexing jobs;
otherwise ingestion will fail, and it’s hard to fix.
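The compatibility rule above can be illustrated with a small, self-contained sketch. The classes below are simplified stand-ins I made up for this example (they are not the real io.druid ShardSpec API); the point is only the shape of the check: allocation fails as soon as any existing segment’s shardSpec class differs from the one the new task wants to use.

```java
import java.util.Arrays;
import java.util.List;

public class ShardSpecCheck {
    // Hypothetical marker classes standing in for two different shardSpec types
    // (e.g. one produced by Kafka ingestion, another by a reindexing job).
    static class NumberedSpec {}
    static class HashedSpec {}

    // Mirrors the anyMatch(...) test in the quoted source: allocation is only
    // possible if every existing spec is an instance of the wanted class.
    static boolean canAllocate(List<Object> existingSpecs, Class<?> wanted) {
        return existingSpecs.stream().allMatch(wanted::isInstance);
    }

    public static void main(String[] args) {
        List<Object> fromKafka = Arrays.asList(new NumberedSpec(), new NumberedSpec());
        List<Object> afterReindex = Arrays.asList(new HashedSpec());

        // Same spec type everywhere: allocation succeeds.
        System.out.println(canAllocate(fromKafka, NumberedSpec.class));
        // Reindexing wrote a different spec type: allocation fails,
        // which surfaces as "Could not allocate segment".
        System.out.println(canAllocate(afterReindex, NumberedSpec.class));
    }
}
```

Running it prints `true` then `false`, which is exactly the failure mode in this thread: once the interval contains segments with a different shardSpec class, the real-time task can no longer allocate into it.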