Segments not overwritten with Minor Compaction

We are looking to run Minor Compaction in parallel mode. We are currently running version 0.16, where this is not yet supported, so I have applied the PR [Auto compaction based on parallel indexing by jihoonson · Pull Request #8570 · apache/druid · GitHub] on top of version 0.16.
When running the compaction I can see that multiple sub-tasks are started to work on it. The new segments are correctly created with the numbered_overwrite shard spec; however, the old segments still exist as well and are still marked used=1 in the MySQL metadata database.
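For anyone debugging the same symptom, the used flags can be inspected directly in the metadata store. A hedged sketch, assuming Druid's default table name druid_segments (adjust if your deployment uses a table prefix):

```sql
-- List the segments covering the compacted interval and their used flag.
-- After a successful overwrite, the pre-compaction segments should flip to used=0.
SELECT id, version, used
FROM druid_segments
WHERE dataSource = 'event_TEST'
  AND `start` >= '2021-11-05T02:00:00.000Z'
  AND `end` <= '2021-11-05T03:00:00.000Z';
```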

In ParallelIndexSupervisorTask.publishSegments(), no oldSegments are known, so nothing is overwritten. I have checked SinglePhaseSubTask.runTask(), which calls VersionedIntervalTimeline.findFullyOvershadowed() to find overshadowed segments and report them back; however, findFullyOvershadowed() does not return anything.
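For context, my understanding of what findFullyOvershadowed() should conclude in this case can be sketched as follows. This is a deliberately simplified model, not Druid's actual implementation (the real logic in VersionedIntervalTimeline also tracks partition IDs and atomic update groups); the Segment type and find_fully_overshadowed function here are hypothetical:

```python
from collections import namedtuple

# Simplified segment model: interval string, major (time-chunk) version,
# and minor version (bumped by numbered_overwrite segments).
Segment = namedtuple("Segment", ["interval", "major_version", "minor_version"])

def find_fully_overshadowed(segments):
    """Return segments covered by another segment with the same interval
    and a strictly higher (major, minor) version pair."""
    overshadowed = []
    for s in segments:
        for other in segments:
            if (other is not s
                    and other.interval == s.interval
                    and (other.major_version, other.minor_version)
                        > (s.major_version, s.minor_version)):
                overshadowed.append(s)
                break
    return overshadowed

old = Segment("2021-11-05T02/03", "v1", 0)  # pre-compaction, root generation
new = Segment("2021-11-05T02/03", "v1", 1)  # minor-compaction output
print(find_fully_overshadowed([old, new]))  # expect: the old segment
```

So with the new numbered_overwrite segments present, I would expect the old root-generation segments to be reported as fully overshadowed, yet the real call returns an empty set.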

I have compared the code between version 0.16 and the PR to see if I am missing anything in between that could cause this, but have not found anything. This is the spec used for minor compaction:

{
  "type": "compact",
  "dataSource": "event_TEST",
  "interval": "2021-11-05T02:00:00.000Z/2021-11-05T03:00:00.000Z",
  "resource": {
    "availabilityGroup": "mirror",
    "requiredCapacity": 2
  },
  "dimensionsSpec": {
    "dimensions": [
      ...
    ]
  },
  "metricsSpec": [
    { "type": "count", "name": "ct" }
  ],
  "tuningConfig": {
    "type": "index_parallel",
    "appendToExisting": true,
    "maxNumConcurrentSubTasks": 4
  },
  "keepSegmentGranularity": true,
  "context": {
    "forceTimeChunkLock": false
  }
}
Any suggestions would be greatly appreciated!

Hey @isabel!

I think you may need to send this query on to the dev list.