Re: [druid-user] Help Optimizing Kafka Indexing

I’m not sure about the Druid configuration, but my first thought was to ask whether you’ve tried increasing the number of Kafka partitions (if possible), so you can get more parallelism. I think each partition gets assigned to a single reader/ingestion task (off the top of my head, could be wrong), so the partition count effectively caps how many tasks can read in parallel.

But that’s without really analyzing the Druid config, and without knowing exactly where the bottleneck is. Just a guess you might try. If not that, it’s back to the Druid configuration and setup.

I forgot to add: if you do try increasing the number of partitions, then with your core count you might well try, e.g., 16 partitions per topic, if that is possible. (Not sure of your partitioning scheme, and whether it’s open to change.)
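If repartitioning is on the table, the check-and-alter is straightforward with the stock Kafka CLI. A sketch only: the topic name (`events`) and broker address are placeholders for yours, and keep in mind partitions can only be increased, never decreased.

```shell
# Show the current partition count for the topic (placeholder names throughout)
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic events

# Raise it to 16 -- this is one-way; Kafka cannot reduce partitions later,
# and keyed messages will hash to different partitions afterwards
kafka-topics.sh --bootstrap-server localhost:9092 --alter --topic events --partitions 16
```

If your producers use message keys, note the second command changes which partition each key lands on, so per-key ordering is only guaranteed for messages produced after the change.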

I don’t think it’s possible, but I could be wrong.

Flying blind here, but given the size of your RAM, you might try, e.g.,

druid.indexer.fork.property.druid.processing.buffer.sizeBytes=1000000000

(e.g., set it to 1 GB as a test) and see whether that makes a difference. It’s a shot in the dark, though. If it doesn’t help, revert it. If it does, start thinking about how many concurrent tasks and queries you want to plan for, and whether RAM might get exhausted; maybe dial it back down from there to find a sweet spot.
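For context, that property usually sits in the MiddleManager’s runtime.properties next to the other per-task processing settings. The values below are illustrative, not a recommendation; the point is that each task allocates one such buffer per processing thread plus merge buffers, so the per-task direct-memory footprint is roughly sizeBytes * (numThreads + numMergeBuffers + 1), worth checking before raising sizeBytes across many concurrent tasks:

```properties
# MiddleManager runtime.properties (illustrative values, not a recommendation)

# Per-buffer size passed down to each peon task (the 1 GB test value above)
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=1000000000

# Buffers are allocated per processing thread plus merge buffers, so each
# task needs roughly sizeBytes * (numThreads + numMergeBuffers + 1) of
# direct memory -- with these numbers, about 5 GB per task
druid.indexer.fork.property.druid.processing.numThreads=2
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
```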

Oh, our notes crossed paths - glad to hear that increasing partitions worked! That’s really the best approach to start with.

Late to the party again… but just noting it’s good to have a clean ratio between partitions and workers, i.e., a partition count that divides evenly by taskCount. For example, with 8 partitions and a taskCount of 4, each worker reads two partitions. :stuck_out_tongue:
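In the supervisor spec, that’s the taskCount field in the ioConfig. A minimal sketch, where the topic name, broker address, and counts are all placeholders for your own setup (8 partitions with taskCount 4 giving two partitions per task):

```json
{
  "type": "kafka",
  "spec": {
    "ioConfig": {
      "type": "kafka",
      "topic": "events",
      "taskCount": 4,
      "replicas": 1,
      "taskDuration": "PT1H",
      "consumerProperties": {
        "bootstrap.servers": "localhost:9092"
      }
    }
  }
}
```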

Oh, and as a rule of thumb it can also help to think of one worker doing roughly 10,000 events per second. The only way to really see throughput, though, is to look at the ingestion metrics that get emitted; then you can work backwards to figure out how many workers you need running to keep up with the velocity of the data.
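The working-backwards step is just ceiling division. The incoming rate below is a made-up example, and the 10,000/sec figure is only the rule of thumb above; swap in the per-task throughput your metrics actually show:

```shell
# Made-up incoming rate; replace with what your metrics actually show
INCOMING_EPS=45000
# Rule-of-thumb per-task throughput (see above)
PER_WORKER_EPS=10000

# Ceiling division: workers needed to keep up with the stream
WORKERS=$(( (INCOMING_EPS + PER_WORKER_EPS - 1) / PER_WORKER_EPS ))
echo "$WORKERS"   # -> 5
```

Then round up to a count that divides your partition count evenly, per the ratio note above.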