Segment cache deep storage load performance

Hello everyone,

Our datasources don’t yet receive full traffic, so for the moment most of our segments are extremely small (1-10 MB, well below the recommended size) and we can squeeze a lot of them onto each historical node (around 20K).

When spinning up a new tier of servers (with an empty segment cache), loading data from deep storage is very slow (2-3 segments per second per historical node), even though the Coordinator shows several thousand segments queued to load.

We played around with increasing druid.segmentCache.numLoadingThreads (trying 1, 4, 8, 16, and 32) and maxSegmentsToMove (to 250), as well as setting druid.coordinator.period=PT5S, but haven’t managed to push the load rate beyond 2-3 segments per second. The historical nodes aren’t overloaded; CPU and network bandwidth are only lightly utilized.
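
For reference, this is roughly where those settings live (a sketch with other properties omitted; note that maxSegmentsToMove is a coordinator dynamic config rather than a runtime property):

    # historical runtime.properties
    druid.segmentCache.numLoadingThreads=16

    # coordinator runtime.properties
    druid.coordinator.period=PT5S

    # coordinator dynamic config (set via the coordinator console or API)
    # maxSegmentsToMove=250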

Would anything else be holding back segment loading?

Thanks,
Adrian

Hi Adrian,

With regard to numLoadingThreads: there is a bug in Druid that prevents this parallelization from taking effect.

The maxSegmentsToMove property only regulates to what extent Druid is allowed to rebalance segments, i.e. how many segments the coordinator’s balancer may move from one historical node to another.

One property that might speed things up is replicationThrottleLimit. It regulates how quickly Druid is allowed to load replicated segments. Usually you would have in-tier or cross-tier replication enabled. Note that it only regulates the loading speed of the replicas, not of the “original” segments, which always load at maximum speed.

Whenever I want segment loading to go faster, I increase these two dynamic coordinator properties, replicationThrottleLimit and maxSegmentsToMove.
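
As a rough sketch (the host, port, and values here are just placeholders, not recommendations), both can be changed at runtime by POSTing the coordinator dynamic configuration:

    curl -X POST -H 'Content-Type: application/json' \
      http://<coordinator-host>:<port>/druid/coordinator/v1/config \
      -d '{
        "maxSegmentsToMove": 250,
        "replicationThrottleLimit": 100
      }'

Depending on your Druid version, posting a partial object may reset the other dynamic settings to their defaults, so it is safer to GET the current config from the same endpoint first and resubmit it with only these values changed.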

That’s about how far my knowledge goes.
Besides that, I found it strange to read that your segments are smaller than your eventual target simply because the cluster isn’t receiving full traffic yet. I don’t know which ingestion method you use, but the methods suited to big data let you choose between specifying roughly how many records you want per segment and specifying how many segments you want per segment-granularity period. It sounds like you may have chosen the latter. I would recommend letting Druid figure out how many segments (or, more accurately, shards) to produce for a given segment-granularity period; that way, all your segments end up with roughly the same number of records.
This is documented at http://druid.io/docs/latest/ingestion/tasks.html in the TuningConfig section (use targetPartitionSize instead of numShards). Since one usually has preprocessing jobs anyway, you can also let them count how many records each batch contains and use that number to compute an appropriate numShards value within the preprocessing job. That way you avoid losing time in the indexing process, which otherwise makes a separate pass over the data to determine numShards from targetPartitionSize.
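
As a minimal sketch of the two alternatives in the index task’s tuningConfig (the numbers are just examples, and only one of the two fields should be set):

    "tuningConfig": {
      "type": "index",
      "targetPartitionSize": 5000000
    }

versus

    "tuningConfig": {
      "type": "index",
      "numShards": 4
    }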

best
Sascha

Hello Sascha,

Thank you for your response, this clears things up quite a bit.

Regarding our current segment size, it’s a byproduct of our current data model.

We have one datasource per client at hourly granularity, but we are currently sending only about 5% production traffic to each datasource, resulting in tiny segments. Once we enable everything, each datasource should receive enough traffic to reach the recommended segment size.

Thanks,

Adrian