A few questions about ingesting data

Hi all,

We are evaluating Druid for our company's products.

I have some questions about ingesting data into Druid:

    1. Can the rollup parameter queryGranularity be set to a custom value such as 10s or 30s instead of “minute”, and if so, how?

    2. We use the Kafka indexing service to ingest data. How should we choose an appropriate value for segmentGranularity?

        (1) The business requires data latency of less than 30 seconds, so it seems we must set this attribute to something smaller than 30s, right?

        (2) If we set segmentGranularity to 30s, I worry that each chunk, and therefore each segment, will be too small, making queries inefficient (and bloating the metadata store), right?

    3. Our application, which sits upstream of Druid, can detect schema changes (such as a new dimension being added) automatically, without any manual operations. But it seems we must update the Druid ingestion spec manually whenever the schema changes, because Druid cannot detect the change automatically, right? Is there any good solution to improve that?
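For reference, the two granularity settings in question live in the granularitySpec section of the ingestion spec. A minimal sketch of what we have in mind (field names as in the Druid ingestion spec; the concrete values here are just illustrative assumptions, not a recommendation):

```json
"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": "HOUR",
  "queryGranularity": { "type": "period", "period": "PT30S" },
  "rollup": true
}
```

Note that segmentGranularity and queryGranularity are independent here: the former controls how segments are chunked, the latter how finely rows are rolled up inside them.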

Please see inline answers below.

Rommel Garcia

Thanks, Rommel Garcia. Your answers are really helpful for us.

On the question of customizing queryGranularity: we want to control the roll-up granularity at 30s instead of 1s, so that we can aggregate data at ingestion time rather than at query time, which should be a significant efficiency improvement. Is it possible to set queryGranularity to 10s or 30s instead of 1s or 1 minute?
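If I read the Druid granularity documentation correctly, queryGranularity is not limited to the named values ("second", "minute", ...); it also accepts duration and period objects, so a 30-second rollup should be expressible like this (a hedged sketch, assuming the duration/period granularity types apply to ingestion specs as well as queries):

```json
"queryGranularity": { "type": "duration", "duration": 30000 }
```

or equivalently, using an ISO-8601 period:

```json
"queryGranularity": { "type": "period", "period": "PT30S" }
```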