I’m noticing that obsolete segments, such as:

- segments created by the real-time pipeline that were later replaced by the batch indexing process
- segments created by the batch indexer that were later replaced by a re-indexing process

are still present in deep storage (S3 in our case).
Is there any process that removes them?
Sorry for flooding this forum with questions, but I think Druid is really cool, and I’m eager to learn how to use it properly.