Can still query after dropping a segment by creating a drop-interval rule

Hi everyone,

I'm wondering why, after I dropped an interval and the process completed, I can still query data from that interval.
I am using the Kafka indexing service, and after I ingest replacement data for these intervals, the counts keep increasing as if the previous segments were still there.

Any help would be appreciated.



Hi Chanh,

I’m not exactly sure what behavior you’re seeing and what you’re expecting. When you drop a segment, that makes it unqueryable, but the segment still exists and still has an entry in the metadata table. If you receive another event in that time interval through the Kafka indexing service, it will allocate a new partition that is a continuation of the latest set of partitions, and will not ‘overwrite’ the segments for that interval. A user could at some point in the future change the load rules to make that time interval queryable again, and the results would include the data previously received plus the new data.
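To illustrate the distinction, here is a minimal sketch of what a rule set with a drop-interval rule looks like. The datasource name and interval are hypothetical placeholders; rules are evaluated top to bottom, so the drop rule must precede the catch-all load rule:

```python
import json

# Hypothetical datasource name and interval; substitute your own.
# A dropByInterval rule makes segments in the interval unqueryable,
# but the segments (and their metadata entries) still exist.
rules = [
    {"type": "dropByInterval", "interval": "2016-01-01/2016-02-01"},
    {"type": "loadForever", "tieredReplicants": {"_default_tier": 2}},
]

# This rule set would be POSTed to the coordinator's rules endpoint
# for the datasource, e.g.:
#   POST http://<coordinator>/druid/coordinator/v1/rules/<datasource>
print(json.dumps(rules, indent=2))
```

Because the drop rule only affects loading, removing it (or replacing it with a load rule covering the same interval) makes the old segments, and anything appended to them since, queryable again.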

If you want to replace the data in a set of segments with new data, you can try issuing kill tasks to delete those segments entirely instead of just making them unqueryable. Then you can ingest the new set of data and a new set of segments should be created instead of extending the existing set.
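A kill task payload is small; this is a sketch with a hypothetical datasource and interval. Note that a kill task permanently deletes the segments from deep storage and the metadata store, so it cannot be undone by changing load rules afterwards:

```python
import json

# Hypothetical datasource and interval; a kill task deletes the
# dropped (unused) segments in the interval entirely, rather than
# just making them unqueryable.
kill_task = {
    "type": "kill",
    "dataSource": "my_datasource",
    "interval": "2016-01-01/2016-02-01",
}

# The task would be submitted to the overlord's task endpoint, e.g.:
#   POST http://<overlord>/druid/indexer/v1/task
print(json.dumps(kill_task, indent=2))
```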

I did notice a bug here, though: if a particular time interval is not being loaded because of the load rules and the KafkaIndexTask creates a partition in that interval, the task will remain running, awaiting handoff, until it is killed by a timeout. I filed an issue for this here: