Deleting orphan realtime segments

Hello everyone!

I was working on a dev cluster, experimenting with a Kafka topic and supervisor specs. I created two distinct datasources (with different datasource names) that both consumed the same Kafka topic.

Once I was done with my testing, I deleted the first supervisor and then dropped its segments to delete the datasource, and then I did the same for the second datasource. The first datasource was removed as expected, but the second datasource's segments have still not been dropped. It turns out those segments were never published: they are still in realtime mode, and the kill task has been stuck in WAITING status ever since.
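For anyone who wants to check: the state of the segments can be confirmed from the `sys.segments` system table in the Druid SQL console (the datasource name below is a placeholder). Unpublished realtime segments show `is_published = 0` and `is_realtime = 1`:

```sql
-- 'my_datasource' is a placeholder for the affected datasource name
SELECT segment_id, is_published, is_available, is_realtime
FROM sys.segments
WHERE datasource = 'my_datasource';
```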

I tried recreating the supervisor of the second datasource in order to send some new dummy messages and then terminate the supervisor, expecting that this time it would publish all of the segments, new and old. The outcome was that only the new segment was published; the old ones are still in realtime mode and cannot be dropped.
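For context, the drop-and-delete sequence I used for each datasource was roughly the following (host, port, datasource name, and interval are placeholders for my cluster's values):

```
# Mark all segments of the datasource as unused (Coordinator API)
curl -X DELETE http://COORDINATOR:8081/druid/coordinator/v1/datasources/my_datasource

# Submit a kill task to permanently delete the unused segments (Overlord API)
curl -X POST -H 'Content-Type: application/json' \
  -d '{"type": "kill", "dataSource": "my_datasource", "interval": "1000-01-01/3000-01-01"}' \
  http://OVERLORD:8090/druid/indexer/v1/task
```

It is the kill task submitted in that second step that stays in WAITING status forever for the second datasource.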

Please, can someone explain why this happened, and suggest what I can do to get rid of these orphan realtime segments?

Relates to Apache Druid 0.22.1

I'd like to try to reproduce this. How did you go about stopping the supervisor? Did you let its tasks complete after stopping the supervisor?

I terminated the supervisor through the Router UI, which normally triggers task stopping before termination.
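For reference, as far as I understand, that UI action is equivalent to calling the supervisor termination endpoint on the Overlord (host and supervisor id below are placeholders):

```
# Terminate the supervisor; its tasks should stop, publish their segments, and exit
curl -X POST http://OVERLORD:8090/druid/indexer/v1/supervisor/my_supervisor/terminate
```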