We assumed that 1 minute segment granularity with 1 minute query granularity would give us better query performance than 1 hour segment granularity with 1 minute query granularity. Is that true? We would like to know more about the query performance differences between those two options (1 minute segments vs. 1 hour segments, both with 1 minute query granularity).
It creates confusion when, programmatically, segment granularity can be equal to query granularity. It looks like an obvious choice for use cases like ours. I am wondering why Tranquility allows creating 1 minute segments when data can still be rolled up at the 1 minute level.
I think query granularity should always be finer than segment granularity. In our experience, 1 minute segments create more operational problems for end users.
b. Due to issue #1, we set rules on datasources to purge segments after x days, and we keep reducing x as new datasources are added.
c. For each task, one folder is created. Since we used ext3, we hit the filesystem limit on the maximum number of subdirectories allowed under a single directory. This applies to many configurable directories, such as task.baseTaskDir, logs.directory, java.io.tmpdir, and more.
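To illustrate the ext3 issue, here is a minimal sketch of a headroom check one could run against a task directory before it fills up. The helper name and the 31998 constant are assumptions for illustration (ext3 caps a directory at 32000 hard links, which bounds the subdirectory count), not anything from Druid or Tranquility itself:

```python
import os
import tempfile

# ext3 limits a directory to 32000 hard links, leaving room for roughly
# 31998 subdirectories. This constant is an assumption; verify it for
# your own kernel and filesystem before relying on it.
EXT3_MAX_SUBDIRS = 31998

def subdir_headroom(path):
    """Return how many more subdirectories could still be created under path."""
    count = sum(1 for entry in os.scandir(path)
                if entry.is_dir(follow_symlinks=False))
    return EXT3_MAX_SUBDIRS - count

# Demo on a scratch directory with three task-style subfolders.
with tempfile.TemporaryDirectory() as base:
    for i in range(3):
        os.mkdir(os.path.join(base, f"task_{i}"))
    print(subdir_headroom(base))  # 31995
```

A cron job running a check like this could alert before the directory hits the hard limit, rather than failing at task-creation time.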
d. We had serious doubts about scaling the system, since we wish to add more datasources.
e. Too many workers are needed, and you need more middle managers because of the reduced single-node capacity.
Here is a formula we often use to scale the system:
NUM_WORKERS = num_minutes_for_one_task * num_partitions * num_replicants_per_datasource * num_datasources (we use 1 replicant per datasource)
** num_minutes_for_one_task = 7 to 8 minutes (regardless of segment or query granularity, memory capacity, or CPU cores)
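The formula above can be sketched as a small helper. The example values below (2 partitions, 5 datasources) are hypothetical; only the 7 to 8 minute task duration comes from our measurements. The intuition, as we understand it, is that with 1 minute segments a new task starts every minute and each task runs for num_minutes_for_one_task minutes, so that many tasks overlap per partition, replicant, and datasource:

```python
def num_workers(num_minutes_for_one_task, num_partitions,
                num_replicants_per_datasource, num_datasources):
    """Worker slots needed so every concurrently running task has a slot."""
    return (num_minutes_for_one_task
            * num_partitions
            * num_replicants_per_datasource
            * num_datasources)

# Hypothetical example: 8 minute tasks, 2 partitions,
# 1 replicant per datasource, 5 datasources.
print(num_workers(8, 2, 1, 5))  # 80
```

So even a modest five-datasource setup needs on the order of 80 worker slots, which is why the worker and middle-manager counts grow so quickly for us.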
What do you think about suggestion #2, and how can I submit it for consideration? Let me know if you have any questions.