Kafka Indexing Service error occurs: "Cannot use existing pending segment"

We have been running the Kafka Indexing Service on Druid for nearly a month, and it has worked well for the most part. However, a few days ago all of the tasks went down.

Our ingestion spec is in the attached file “ingest.json”.

The overlord logs are as follows:

2016-08-17T15:14:58,564 WARN [qtp622043416-164] io.druid.metadata.IndexerSQLMetadataStorageCoordinator - Cannot use existing pending segment [test3_mobileDictClient.android_2016-08-16T21:00:00.000+08:00_2016-08-16T22:00:00.000+08:00_2016-08-16T21:00:00.459+08:00_65] for sequence [index_kafka_test3_mobileDictClient.android_af5c785c81ca0d4_5] (previous = [test3_mobileDictClient.android_2016-08-16T20:00:00.000+08:00_2016-08-16T21:00:00.000+08:00_2016-08-16T20:00:00.322+08:00_56]) in DB, does not match requested interval
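For reference, the segment identifier in that warning packs several fields into one underscore-delimited string: dataSource, interval start, interval end, version, and partition number. A minimal sketch of pulling those fields apart (splitting from the right, since the dataSource itself may contain underscores; this parser is only illustrative, not Druid's own utility):

```java
public class SegmentIdParser {
    // Splits "<dataSource>_<start>_<end>_<version>_<partitionNum>" from the right,
    // because the dataSource (e.g. "test3_mobileDictClient.android") may itself
    // contain underscores while the timestamp and partition fields do not.
    static String[] parse(String id) {
        String[] parts = new String[5];
        String rest = id;
        for (int i = 4; i >= 1; i--) {
            int idx = rest.lastIndexOf('_');
            parts[i] = rest.substring(idx + 1);
            rest = rest.substring(0, idx);
        }
        parts[0] = rest; // whatever remains on the left is the dataSource
        return parts;
    }

    public static void main(String[] args) {
        String id = "test3_mobileDictClient.android_2016-08-16T21:00:00.000+08:00"
                + "_2016-08-16T22:00:00.000+08:00_2016-08-16T21:00:00.459+08:00_65";
        String[] p = parse(id);
        System.out.println("dataSource = " + p[0]);  // test3_mobileDictClient.android
        System.out.println("partition  = " + p[4]);  // 65
    }
}
```

Applied to the id in the warning above, this yields the interval 21:00–22:00, while the "previous" id covers 20:00–21:00.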


The peon logs are as follows:

2016-08-17T07:12:05,987 INFO [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Performing action for task[index_kafka_test3_mobileDictClient.android_ee40ef7fb2836ce_pmlheiab]: SegmentAllocateAction{dataSource=‘test3_mobileDictClient.android’, timestamp=2016-08-15T22:08:36.000+08:00, queryGranularity=DurationGranularity{length=3600000, origin=0}, preferredSegmentGranularity=HOUR, sequenceName=‘index_kafka_test3_mobileDictClient.android_ee40ef7fb2836ce_1’, previousSegmentId=‘test3_mobileDictClient.android_2016-08-15T22:00:00.000+08:00_2016-08-15T23:00:00.000+08:00_2016-08-15T23:38:55.238+08:00_160’}

2016-08-17T07:12:05,987 INFO [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Submitting action for task[index_kafka_test3_mobileDictClient.android_ee40ef7fb2836ce_pmlheiab] to overlord[http://hd020.corp.yodao.com:8195/druid/indexer/v1/action]: SegmentAllocateAction{dataSource=‘test3_mobileDictClient.android’, timestamp=2016-08-15T22:08:36.000+08:00, queryGranularity=DurationGranularity{length=3600000, origin=0}, preferredSegmentGranularity=HOUR, sequenceName=‘index_kafka_test3_mobileDictClient.android_ee40ef7fb2836ce_1’, previousSegmentId=‘test3_mobileDictClient.android_2016-08-15T22:00:00.000+08:00_2016-08-15T23:00:00.000+08:00_2016-08-15T23:38:55.238+08:00_160’}

2016-08-17T07:12:05,997 WARN [task-runner-0-priority-0] io.druid.segment.realtime.appenderator.FiniteAppenderatorDriver - Cannot allocate segment for timestamp[2016-08-15T22:08:36.000+08:00]

  • Please note that the peon log and the overlord log are not from the same run; the timestamps show they come from different runs. However, I believe they both point to the same problem.

From our debugging, we suspect that the problem originates at line 401 of IndexerSQLMetadataStorageCoordinator.java, which causes the warning logged at line 413.

The relevant part of IndexerSQLMetadataStorageCoordinator.java is as follows (lines 359-423):

public SegmentIdentifier allocatePendingSegment(
    final String dataSource,
    final String sequenceName,
    final String previousSegmentId,
    final Interval interval,
    final String maxVersion
) throws IOException
{
  Preconditions.checkNotNull(dataSource, "dataSource");
  Preconditions.checkNotNull(sequenceName, "sequenceName");
  Preconditions.checkNotNull(interval, "interval");
  Preconditions.checkNotNull(maxVersion, "maxVersion");

  final String previousSegmentIdNotNull = previousSegmentId == null ? "" : previousSegmentId;

  return connector.retryTransaction(
      new TransactionCallback<SegmentIdentifier>()
      {
        @Override
        public SegmentIdentifier inTransaction(Handle handle, TransactionStatus transactionStatus) throws Exception
        {
          final List<byte[]> existingBytes = handle
              .createQuery(
                  String.format(
                      "SELECT payload FROM %s WHERE "
                      + "dataSource = :dataSource AND "
                      + "sequence_name = :sequence_name AND "
                      + "sequence_prev_id = :sequence_prev_id",
                      dbTables.getPendingSegmentsTable()
                  )
              ).bind("dataSource", dataSource)
              .bind("sequence_name", sequenceName)
              .bind("sequence_prev_id", previousSegmentIdNotNull)
              .map(ByteArrayMapper.FIRST)
              .list();

          if (!existingBytes.isEmpty()) {
            final SegmentIdentifier existingIdentifier = jsonMapper.readValue(
                Iterables.getOnlyElement(existingBytes),
                SegmentIdentifier.class
            );

            if (existingIdentifier.getInterval().getStartMillis() == interval.getStartMillis()
                && existingIdentifier.getInterval().getEndMillis() == interval.getEndMillis()) {
              log.info(
                  "Found existing pending segment [%s] for sequence[%s] (previous = [%s]) in DB",
                  existingIdentifier.getIdentifierAsString(),
                  sequenceName,
                  previousSegmentIdNotNull
              );

              return existingIdentifier;
            } else {
              log.warn(
                  "Cannot use existing pending segment [%s] for sequence[%s] (previous = [%s]) in DB, "
                  + "does not match requested interval[%s]",
                  existingIdentifier.getIdentifierAsString(),
                  sequenceName,
                  previousSegmentIdNotNull,
                  interval
              );

              return null;
            }
          }
          // ... (remainder of the method truncated)
We found that the interval of the existing pending segment does not match the requested interval. Could this be a logic bug?

We ran this query against our SQL database:

SELECT * FROM druid_pendingSegments WHERE dataSource = 'test3_mobileDictClient.android' AND sequence_name = 'index_kafka_test3_mobileDictClient.android_af5c785c81ca0d4_5'
The results are in the attached file “sql_query.htm”.

By the way, our SQL table druid_pendingSegments is very large: it has 48,100 rows for a single data source, and it still stores pending segments from a month ago. Is this normal, or a sign of misconfiguration?

sql_query.htm (71.7 KB)

ingest.json (1.63 KB)

Thank you for the detailed report! Are you seeing this failure all the time now or only sometimes? Can you post the full peon log from startup until the error occurs so we can try to reproduce the issue?

In our latest run of the Kafka Indexing Service (on 8-17), all the tasks failed. We are not sure whether there were any failures before 8-17, but we have been running this for a month and it seemed to go well until 8-16 and 8-17.

And here is the peon log.

On Thursday, August 18, 2016 at 7:29:24 AM UTC+8, David Lim wrote:

log.txt (1.56 MB)

I work on the same team as daiso…, and I am also working on this issue.

After investigating the code, I found that the comment below may point to the cause of the problem:

 * Return a segment usable for "timestamp". May return null if no segment can be allocated.

Hey guys,

I’ve been looking into this issue for a bit but still haven’t determined the root cause. To answer your question about late events: late events should definitely be handled by the Kafka indexing task. What is supposed to happen is that if a message comes in and there isn’t a segment covering the time interval it belongs to (possibly because it’s late and the previous segment covering that range has already been removed from the active segments), allocatePendingSegment() will generate a new segment identifier for a segment with a time interval covering the late event and this event will be put into that segment (while all the other current events will be put into a different segment for the current time interval).
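As a toy sketch of that late-event path (hypothetical names, with plain hour truncation standing in for Druid's granularity handling): if no open segment covers the event's hour, a new one is allocated for that hour, while on-time events keep going to the current hour's segment.

```java
import java.util.HashMap;
import java.util.Map;

public class LateEvents {
    static final long HOUR = 3_600_000L;

    // Map from hour-bucket start millis to a segment name; this stands in for
    // the task's set of active segments. Illustrative only, not Druid's code.
    final Map<Long, String> openSegments = new HashMap<>();

    String segmentFor(long eventMillis) {
        long hourStart = Math.floorDiv(eventMillis, HOUR) * HOUR;
        // No covering segment for this hour (e.g. a late event whose segment
        // was already handed off)? Allocate a new one for that hour.
        return openSegments.computeIfAbsent(hourStart, h -> "segment@" + h);
    }
}
```

A late event and a current event thus end up in different segments, each covering its own hour.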

What seems to be happening in your case is that when the task goes to generate a new segment identifier for the late event, it fails because another segment identifier has already been created for the same sequenceName/previousSegmentId (the unique key used to generate a segment identifier, which must be deterministic so that replica tasks create segments with the same ID), but the identifier that already exists was created for a different time interval, which would mean a loss of determinism in how segments are created.

This could happen, for example, if the same Kafka messages were being read by two tasks in different orders, but Kafka messages are ordered within a partition, so hopefully that is not what is happening.
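A toy model of that allocation rule (illustrative names, not Druid's actual classes): pending segments are keyed on (sequenceName, previousSegmentId), so a replica replaying the same sequence gets the same identifier back, while a request under the same key for a different interval fails and returns null, which is exactly the warning in the overlord log.

```java
import java.util.HashMap;
import java.util.Map;

public class PendingSegmentTable {
    // Interval and id of a pending segment; illustrative only.
    static final class Pending {
        final String interval, id;
        Pending(String interval, String id) { this.interval = interval; this.id = id; }
    }

    private final Map<String, Pending> table = new HashMap<>();

    // Returns the (deterministic) segment id for this sequence position, or null
    // if a pending segment already exists for the key but with another interval.
    String allocate(String sequenceName, String previousSegmentId, String interval) {
        String key = sequenceName + "\u0000" + (previousSegmentId == null ? "" : previousSegmentId);
        Pending existing = table.get(key);
        if (existing != null) {
            // Same key, same interval: replicas converge on the same id.
            // Same key, different interval: determinism is broken -> null.
            return existing.interval.equals(interval) ? existing.id : null;
        }
        Pending fresh = new Pending(interval, interval + "_version_" + table.size());
        table.put(key, fresh);
        return fresh.id;
    }
}
```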

Question for you: were you running this task with more than one replica? I ask because the ingest.json you posted shows only a single task (no replication), but this block of code would normally be hit by replica tasks. If so, could you also post the peon logs of all the other replica tasks that were running when this happened? If you're able to provide the full overlord logs, that would be helpful too. If you weren't actually running replica tasks, let us know, because that will also help narrow down what's going on.

The expert at this section of code is currently on holidays so we might need to wait until he gets back if we still can’t figure out what’s going on after looking at the rest of the logs.

Hi David,
Thank you very much for the detailed answer to our question.

I agree that the failure may be related to concurrently running tasks (perhaps caused by an accidentally killed middle manager or some other reason).

I dug into more of the failed logs and found that we had misconfigured “druid.selectors.coordinator.serviceName” for the middle manager. As a result, it could not find the coordinator; this was reported in some peons' logs and may have triggered our problem.

After we set “druid.selectors.coordinator.serviceName” correctly, no real errors seem to occur.

BTW: we may have found a bug in the Kafka ingest task's status reporting: https://github.com/druid-io/druid/issues/3374

Thanks again for your kind reply.


On Friday, August 19, 2016 at 3:00:52 PM UTC+8, David Lim wrote:

Thank you for the update and for filing the issue. Let us know how things go and if this behavior comes up again.

Hi David,
If one task processes multiple partitions, can the Kafka indexing task still ensure determinism?

The ordering across different partitions is not deterministic, so could this be an issue?

On Tuesday, August 23, 2016 at 1:49:49 AM UTC+8, David Lim wrote:

Yes, ingestion will be deterministic even if there are multiple partitions being ingested by a task. This is handled by each task writing events from different partitions into different segments so that each segment is deterministic (since as you mentioned, Kafka events are ordered within a partition but are not ordered across partitions).

The tradeoff to this is that if you have a large number of Kafka partitions, you will get a large number of segments which may be sub-optimal for query performance and may need to be merged together to get optimally-sized segments.
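The scheme described above can be sketched as bucketing events by Kafka partition (illustrative types, not the task's actual code): however a consumer interleaves partitions, each bucket's contents and order depend only on the within-partition order, so the resulting segments come out the same on every replay.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class PartitionBuckets {
    // An event tagged with the Kafka partition it came from.
    static final class Event {
        final int partition;
        final String payload;
        Event(int partition, String payload) { this.partition = partition; this.payload = payload; }
    }

    // Bucket events per partition, preserving within-partition order. Two consumers
    // that interleave partitions differently still produce identical buckets.
    static Map<Integer, List<String>> bucket(List<Event> events) {
        Map<Integer, List<String>> out = new TreeMap<>();
        for (Event e : events) {
            out.computeIfAbsent(e.partition, p -> new ArrayList<>()).add(e.payload);
        }
        return out;
    }
}
```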

Hi David, thanks for the explanation!
We’ll look deeper into it if we see this problem again.

On Thursday, August 25, 2016 at 1:26:43 AM UTC+8, David Lim wrote: