How to deal with a high-cardinality dimension

In some use cases a high-cardinality dimension seems unavoidable, like a DB table with a primary key, or tracing events with a trace ID. It's very common to query these datasources with a PK or ID filter, and in theory the performance should be great. But indexing and merging are quite slow: in a merge of 500M rows, the cardinality of "id" reaches 490M, and merging this dimension takes one hour.

io.druid.segment.IndexMerger - Starting dimension[id] with cardinality[4,888,029]

io.druid.segment.IndexMerger - Completed dimension[id] in 3,790,755 millis.

I think maybe it is because of the bitmap index; its time complexity is almost O(N*N).
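(For illustration, here is a minimal Python sketch of the kind of work a dictionary-encoded merge has to do; the function and names are hypothetical, not Druid's actual IndexMerger code. With a near-unique dimension, every row contributes a new entry to the merged dictionary, and every old dictionary id must be remapped before the bitmaps can be rewritten.)

```python
import heapq

def merge_dictionaries(segment_dicts):
    """Merge per-segment sorted value dictionaries into one global
    dictionary, producing old-id -> new-id conversion tables per
    segment. (Illustrative sketch, not Druid's actual code.)"""
    merged = []                                  # global sorted, de-duplicated values
    conversions = [dict() for _ in segment_dicts]
    # k-way merge of the sorted per-segment dictionaries
    for value, seg, old_id in heapq.merge(
            *[[(v, s, i) for i, v in enumerate(d)]
              for s, d in enumerate(segment_dicts)]):
        if not merged or merged[-1] != value:
            merged.append(value)
        conversions[seg][old_id] = len(merged) - 1
    return merged, conversions

# With a near-unique dimension, almost every row adds a dictionary
# entry, so the merged dictionary (and the remapping work for every
# bitmap) grows with the total row count.
merged, conv = merge_dictionaries([["a", "c"], ["b", "c"]])
# merged == ["a", "b", "c"]; conv == [{0: 0, 1: 2}, {0: 1, 1: 2}]
```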

We have tried some ways to reduce the row count, like reducing the segment granularity to 10 minutes and using a linear shard spec with 4 partitions. But the incoming messages arrive so fast that this problem seems unavoidable.

Is there anything we can do to fix this? Even a simple inverted index (instead of a bitmap) would be a great help.


Hi Kurt, is this slowness from real-time or batch ingestion? Since you are using a linear shard spec, it seems you are doing real-time ingestion. Typically, if your indexing time is starting to take too long, we recommend sharding more aggressively. We recommend Druid segments have ~5M rows, so you should have 10 partitions.

Let me know if this helps.

Hi Fangjin, this is from real-time ingestion. Sorry for the mistake in my original post: it should be 5M rows instead of 500M (4.9M cardinality instead of 490M). We already tried a linear shard spec with 4 partitions and a 10-minute segment granularity. The segments are ~5M rows as you suggested, but we observed there is a "merge" stage before Druid uploads the segment to deep storage. During the merge stage there were 100+ small segments, and the merge is quite slow; it took almost one hour to finish. Should we increase the partition count or decrease the segment granularity? (By the way, is there any difference between these two approaches? They don't seem to differ much.)

Hi Kurt,

100+ segments to merge means that you might have set maxRowsInMemory too small or intermediatePersistPeriod much too short.

Can you try setting maxRowsInMemory to 500,000?

500,000 is the default maxRowsInMemory in Druid; with this you should have around 10 segments to merge at the end.
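(As a quick sanity check on the "around 10 segments" figure, some back-of-the-envelope arithmetic, assuming one intermediate persist each time maxRowsInMemory fills and ignoring intermediatePersistPeriod:)

```python
# A ~5M-row segment persisted in 500,000-row chunks yields about
# 10 intermediate chunks to merge at hand-off.
rows_per_segment = 5_000_000
max_rows_in_memory = 500_000
chunks = -(-rows_per_segment // max_rows_in_memory)  # ceiling division
print(chunks)  # 10
```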

Hi Nishant, I will try this first. Thank you.

In general, when merging takes too long because too many intermediate chunks need to be merged, we recommend creating more partitions or decreasing the segment granularity. Both should help minimize the number of chunks that have to be merged. That said, I am surprised that merging 5M rows would take 1 hour; looking into how things are configured is another way to see what is going on.
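(To see why the two knobs behave similarly, a rough model; it assumes a fixed ingest rate spread evenly across partitions and one intermediate persist per maxRowsInMemory fill. The function and the 120M rows/hour figure are illustrative, back-derived from this thread's numbers of ~5M rows per 10-minute segment across 4 partitions; this is not a Druid API.)

```python
import math

def chunks_per_segment(rows_per_hour, granularity_minutes, partitions,
                       max_rows_in_memory=500_000):
    """Rough count of intermediate chunks merged per realtime segment."""
    rows_in_segment = rows_per_hour * granularity_minutes / 60 / partitions
    return max(1, math.ceil(rows_in_segment / max_rows_in_memory))

# Doubling partitions or halving granularity cuts the chunk count
# by the same factor:
base = chunks_per_segment(120_000_000, 10, 4)        # ~5M-row segments
more_parts = chunks_per_segment(120_000_000, 10, 8)  # 8 partitions
finer_gran = chunks_per_segment(120_000_000, 5, 4)   # 5-minute segments
assert more_parts == finer_gran == base // 2
```

Either way the total number of chunks per hour stays the same; what shrinks is the amount of merge work concentrated in any single segment hand-off.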

The main reason merging 5M rows takes 1 hour is the high cardinality of one dimension; its values are nearly unique. We have also found some code hotspots; by fixing these, performance can improve 10x-100x, depending on the cardinality and the number of small segments during the merge. We hope to file a PR soon.

Hi Kurt, that’s awesome, looking forward to seeing the PR.

That's awesome. High-cardinality dimension indexing has been a pain for a while. It is possible to get stuck doing UTF-8 conversions for string compares far too often. I've considered writing a UTF-8 comparator for ByteBuffer, but that is more complex than it sounds because UTF-8 has some interesting corner cases.
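(A small Python illustration of one such corner case, as I understand it: UTF-8 byte order agrees with Unicode code point order, but Java's String.compareTo orders by UTF-16 code units, which disagrees for supplementary characters. So a raw-byte comparator would be fast but would not exactly reproduce a String-based ordering.)

```python
# U+FFFF vs U+10000 (first supplementary character)
a, b = "\uffff", "\U00010000"

# Code point order and UTF-8 byte order agree: U+FFFF sorts first.
assert a < b
assert a.encode("utf-8") < b.encode("utf-8")

# UTF-16 code unit order disagrees: U+10000 is the surrogate pair
# D800 DC00, and 0xD800 < 0xFFFF, so it sorts *before* U+FFFF.
assert b.encode("utf-16-be") < a.encode("utf-16-be")
```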

I'm really looking forward to seeing what you came up with.

Hi all, the PR is

On Sunday, November 8, 2015 at 2:07:00 AM UTC+8, Fangjin Yang wrote: