Ad-hoc queries are great, but we also need to serve predefined queries in milliseconds, and for those pre-aggregations are king (e.g. GROUP BYs over certain dimension combinations computed in advance).
Does Druid also support predefined queries by means of pre-aggregation during ingestion (i.e. realtime pre-aggregation on realtime nodes and batch pre-aggregation during a batch import)?
I am thinking of a workflow where we would use an HTTP endpoint to which we send requests like "create a new pre-aggregate/index with this JSON definition" or "delete this pre-aggregate".
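To make that concrete, here is a rough sketch of the kind of JSON definition I have in mind. The shape loosely follows Druid's ingestion-spec conventions (dimensionsSpec/metricsSpec/granularitySpec with rollup), but the datasource, dimension, and metric names are just hypothetical examples:

```json
{
  "dataSchema": {
    "dataSource": "page_events",
    "dimensionsSpec": {
      "dimensions": ["country", "page"]
    },
    "metricsSpec": [
      { "type": "count", "name": "events" },
      { "type": "longSum", "name": "bytes_total", "fieldName": "bytes" }
    ],
    "granularitySpec": {
      "segmentGranularity": "hour",
      "queryGranularity": "minute",
      "rollup": true
    }
  }
}
```

The idea would be that posting such a definition creates a pre-aggregated index (rows rolled up per minute by country and page), and a corresponding DELETE removes it.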
If this is possible, is it also possible to compute some pre-aggregations from the input stream and then drop the granular events (the input stream itself)?
This would allow us to skip Spark Streaming and just use one system, one set of metric definitions/implementations, and so on.