Partition by dimension during re-indexing


I am aware that Druid cannot partition by a specific dimension during real-time ingestion (correct me if my understanding is wrong).

I am also aware that batch ingestion is needed to use partitionsSpec and hence dimension-based sharding.

Is there any way to do this using the ingestSegment firehose? I tried using partitionsSpec with ingestSegment, but the Druid task won't honor that field.

How do I modify the spec below to get dimension sharding, if it's possible using ingestSegment?

Hi Ankur,

Currently that feature is only supported in Hadoop-based indexing; see "partitionsSpec" here:

You are currently using native, non-Hadoop indexing, which only supports hash-based partitioning.
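For reference, a dimension-partitioned tuningConfig in a Hadoop-based indexing task looks roughly like the sketch below. This is not a complete spec; the dimension name `country` and the row target are placeholders, and the exact field names vary by Druid version (older releases use `"type": "dimension"` with `targetPartitionSize` instead of `"type": "single_dim"` with `targetRowsPerSegment`), so check the docs for the version you run.

```json
{
  "type": "index_hadoop",
  "spec": {
    "tuningConfig": {
      "type": "hadoop",
      "partitionsSpec": {
        "type": "single_dim",
        "partitionDimension": "country",
        "targetRowsPerSegment": 5000000
      }
    }
  }
}
```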

Hi Gian,

Thanks for the reply.

Is there any plan to support dimension-based partitioning in the near future for native indexing or real-time indexing?