Issues in scaling Druid for high data rates

We are facing an issue with data ingestion into the Druid cluster using the Tranquility API. Below are the details of the scenario.

  1. In our test setup, our ingestion rate into the cluster is around 130 million rows per hour. In production, our rates would be much higher than this.

  2. After constructing the list of events (the payload for Tranquility), we use the Tranquility API (druidService.apply()) to send data to Druid. We measure the time it takes for the call to return. On average, the API call takes around 36 milliseconds to push the data into Druid.

  3. We initially suspected network latency, but when we benchmarked each stage of processing, we found that the step that ingests records into the Druid cluster takes most of the time.

  4. Our Druid cluster consists of high-end machines (around 32 r3.8xlarge instances).

  5. We have tried this with both 3 and 10 partitions.

  6. We tried both batching the records and sending them individually, but saw no significant difference in ingestion rate.
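A quick back-of-envelope calculation (a sketch using the figures above; the variable names are ours, not from any API) shows why the per-call latency dominates at these rates:

```java
public class IngestMath {
    public static void main(String[] args) {
        long rowsPerHour = 130_000_000L;
        double rowsPerSecond = rowsPerHour / 3600.0;          // ~36,111 rows/s needed
        double callLatencySec = 0.036;                        // 36 ms per druidService.apply()
        double callsPerSecPerThread = 1 / callLatencySec;     // ~27.8 synchronous calls/s

        // With one row per synchronous call, a single thread sustains only ~28 rows/s.
        // To reach the target rate, batching (rows per call) and pipelining
        // (calls in flight) must together satisfy:
        //   batchSize * callsInFlight / latency >= rowsPerSecond
        double batchTimesInFlightNeeded = rowsPerSecond * callLatencySec;  // ~1300

        System.out.printf("rows/s needed: %.0f%n", rowsPerSecond);
        System.out.printf("sync calls/s per thread: %.1f%n", callsPerSecPerThread);
        System.out.printf("batchSize * callsInFlight needed: %.0f%n", batchTimesInFlightNeeded);
    }
}
```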

My question is: are we missing anything in configuring the Tranquility API? Is there anything else in our setup that could be limiting ingestion at these rates?



Hey Narayan,

The first thing I would look at is whether you are batching and pipelining your data. You can take a look at the BeamPacketizer class that is part of Tranquility; it does batching and pipelining internally, so you could use that class directly or use it as a model for your own code.
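To illustrate the batch-and-pipeline idea (this is only a sketch of the pattern, not Tranquility's actual BeamPacketizer; the class and method names here are hypothetical, and the sender callback stands in for a call like druidService.apply()):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Collect rows into batches and keep several batches in flight at once,
// rather than issuing one blocking call per row.
public class SimplePacketizer<T> {
    private final int batchSize;
    private final ExecutorService inFlight;      // bounds concurrent sends
    private final Consumer<List<T>> sender;      // e.g. wraps druidService.apply()
    private List<T> current = new ArrayList<>();

    public SimplePacketizer(int batchSize, int maxInFlight, Consumer<List<T>> sender) {
        this.batchSize = batchSize;
        this.inFlight = Executors.newFixedThreadPool(maxInFlight);
        this.sender = sender;
    }

    public synchronized void send(T row) {
        current.add(row);
        if (current.size() >= batchSize) {
            flush();
        }
    }

    public synchronized void flush() {
        if (current.isEmpty()) return;
        List<T> batch = current;
        current = new ArrayList<>();
        inFlight.submit(() -> sender.accept(batch));  // pipelined: don't wait here
    }

    public void close() {
        flush();
        inFlight.shutdown();
        try {
            inFlight.awaitTermination(1, TimeUnit.MINUTES);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

With a batch size in the hundreds and a handful of batches in flight, a single producer can cover the per-call latency instead of paying it once per row.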

The next is whether your connection pools are set up appropriately. Tranquility Beams are thread-safe and by default keep 2 connections open to each Druid task. If you have a large number of threads using the same underlying Beam, you may want to increase the size of the connection pool. (NB: BeamPacketizers are not thread-safe, but you could write a thread-safe version modeled on it if you wanted.)
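One simple way to work around a non-thread-safe batcher is to give each producer thread its own batch while sharing the thread-safe sink underneath. A sketch of that pattern (the names here are illustrative, not Tranquility's; the sink callback stands in for the shared Beam):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// One private batch per producer thread, all feeding the same thread-safe sink.
// No locking is needed on the hot path because each thread owns its own batch.
public class PerThreadBatchers<T> {
    private final ThreadLocal<List<T>> batch = ThreadLocal.withInitial(ArrayList::new);
    private final int batchSize;
    private final Consumer<List<T>> beamSend;  // thread-safe sink, e.g. the shared Beam

    public PerThreadBatchers(int batchSize, Consumer<List<T>> beamSend) {
        this.batchSize = batchSize;
        this.beamSend = beamSend;
    }

    public void send(T row) {
        List<T> b = batch.get();
        b.add(row);
        if (b.size() >= batchSize) {
            beamSend.accept(new ArrayList<>(b));  // hand off a copy
            b.clear();
        }
    }
}
```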

Hi Gian,

Thank you for the quick response. We will try that and update the thread with the results. We are also familiarizing ourselves with the Druid source.

I recently saw a deck that you co-presented at Strata + Hadoop World 2015 that had some data points on message ingestion rates. When we hit production, we would be at around the same scale. Is there anything else we should watch out for to sustain ingestion at these rates?



Hey Narayan,

On the Tranquility side, batching and pipelining everything, and having appropriately sized connection pools, are usually the two big things to look out for. On the Druid task side, the most important tunings are probably the number of partitions, heap size, number of query processing threads, maxRowsInMemory, intermediatePersistPeriod, and segmentGranularity.
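As a rough illustration of where those task-side knobs live, here is a fragment of a Druid realtime task spec (the values are illustrative, not recommendations; appropriate settings depend on your row size, rollup, and heap):

```json
{
  "tuningConfig": {
    "type": "realtime",
    "maxRowsInMemory": 500000,
    "intermediatePersistPeriod": "PT10M",
    "windowPeriod": "PT10M"
  },
  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "HOUR",
    "queryGranularity": "NONE"
  }
}
```

When ingesting through Tranquility, the equivalent settings (segment granularity, window period, partition and replicant counts) are supplied programmatically through its tuning/builder classes rather than in a task spec file.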

It’s also helpful to partition rows by grouping key, which you can do with a custom beamMergeFn.
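The routing idea behind such partitioning is simple: rows that share a grouping key should land on the same task/partition so they roll up into the same segment. A minimal sketch of the key-to-partition mapping (class and method names are ours, not Tranquility's):

```java
import java.util.List;

// Route each row to a partition chosen by a stable hash of its grouping key,
// so rows with identical dimension values always reach the same partition.
public class KeyPartitioner {
    public static int partitionFor(String groupingKey, int numPartitions) {
        // Mask off the sign bit so hashCode()'s possible negative values
        // still map into [0, numPartitions).
        return (groupingKey.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }

    public static <T> void route(String key, T row, List<List<T>> partitions) {
        partitions.get(partitionFor(key, partitions.size())).add(row);
    }
}
```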