Segment optimization issue: Can't append data to compacted segments

Hello,

I have time intervals with DAY granularity, each containing 1 segment (shard) of approximately 100 MB. I periodically run a compaction task to compact 5 days into a single time interval with 1 shard. Then I tried to append data to the compacted segment, but the task fails. See the task log below:

```
2019-02-01T01:08:40,272 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[AbstractTask{id='index_DataProduction_2019-02-01T01:08:34.796Z', groupId='index_append_DataProduction', taskResource=TaskResource{availabilityGroup='index_DataProduction_2019-02-01T01:08:34.796Z', requiredCapacity=1}, dataSource='DataProduction', context={priority=50}}]
io.druid.java.util.common.ISE: Failed to add a row with timestamp[2019-01-25T06:40:01.000Z]
        at io.druid.indexing.common.task.IndexTask.generateAndPublishSegments(IndexTask.java:700) ~[druid-indexing-service-0.12.3.jar:0.12.3]
        at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:264) ~[druid-indexing-service-0.12.3.jar:0.12.3]
        at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444) [druid-indexing-service-0.12.3.jar:0.12.3]
        at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416) [druid-indexing-service-0.12.3.jar:0.12.3]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_201]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_201]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_201]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
2019-02-01T01:08:40,283 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_DataProduction_2019-02-01T01:08:34.796Z] status changed to [FAILED].
2019-02-01T01:08:40,288 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_DataProduction_2019-02-01T01:08:34.796Z",
  "status" : "FAILED",
  "duration" : 432
}
```

The original DAY segments were ingested with:

```json
"forceExtendableShardSpecs": true
```

and I compacted the old DAY segments with this task spec:

```json
{
  "type": "compact",
  "dataSource": "DataProduction",
  "keepSegmentGranularity": false,
  "interval": "2019-01-25/2019-01-30",
  "tuningConfig": {
    "type": "index",
    "numShards": 1,
    "forceExtendableShardSpecs": true
  }
}
```
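For context, the failing append job is a native `index` task with `appendToExisting` enabled. The spec below is an illustrative sketch, not my exact task (the firehose and schema details are placeholders):

```json
{
  "type": "index",
  "spec": {
    "dataSchema": {
      "dataSource": "DataProduction",
      "granularitySpec": {
        "segmentGranularity": "DAY",
        "intervals": ["2019-01-25/2019-01-26"]
      }
    },
    "ioConfig": {
      "type": "index",
      "firehose": { "type": "local", "baseDir": "...", "filter": "..." },
      "appendToExisting": true
    }
  }
}
```

The row that fails (`2019-01-25T06:40:01.000Z`) falls inside the compacted 5-day interval, which is where the append breaks.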

Any help is very much appreciated!

Thank you!

Sergio