In our current setup, we ingest files from S3 on an hourly basis and create segments from them.
We also use Tranquility to ingest from Kafka and create separate segments, call them "realtime" for this conversation.
Everything is working per design. S3 is used as deep storage.
These realtime segments are useless to us beyond a few minutes. I am OK keeping an hour's worth around if necessary, since the segment granularity is an hour.
What is the ideal way to tell the historicals to drop the older segments locally and also delete them from S3 (preferably never put them in S3 in the first place)?
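For context, the direction I've been looking at is the coordinator's retention rules: something like a short load-by-period rule followed by a drop-everything rule on that datasource. A sketch of what I mean (tier name and replicant count are just placeholders for my setup):

```json
[
  { "type": "loadByPeriod", "period": "PT1H",
    "tieredReplicants": { "_default_tier": 1 } },
  { "type": "dropForever" }
]
```

My understanding is that drop rules only unload segments from historicals, though; they don't remove anything from deep storage, so I'm not sure this alone gets me the S3 cleanup (or avoids the S3 write entirely).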
Any help is appreciated.