Is there an optimization option for the cost model?

Hi guys,
In my team, we use the Kafka indexing service to do data ingestion. The cluster contains 5 historical nodes with MiddleManager (MM) services, and multiple peons take the tasks each hour.

So by the end of each day, more than 10,000 shards had been generated, and we then started a task to merge these shards.

But an issue emerged: the historical nodes took too long to load the merged segments. In particular, when the number of segments/shards in a tier exceeded 10,000, the procedure could take more than 1 hour.

We also tested an extreme case: with more than 300,000 shards in the cluster, the historical nodes could not even start up.

We investigated the code and found the root cause: the cost model used in the coordinator took too long to do its calculation. It fetched all segment definitions from the metadata database, and the algorithm's time complexity was O(n^m).
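To make the scaling concrete, here is a toy sketch in Python (not Druid's actual code; the cost function, names, and half-life value are invented for illustration) of how a cost-based balancer scores a placement. Every candidate server is scored by summing a pairwise cost against every segment it already holds, so a full pass over all segments grows quadratically with segment count:

    import math

    HALF_LIFE_HOURS = 24.0                  # assumed decay half-life, for illustration
    DECAY = math.log(2) / HALF_LIFE_HOURS   # exponential decay rate

    def pairwise_cost(seg_a, seg_b):
        """Toy stand-in for a joint segment cost: segments whose time
        intervals are close together are expensive to co-locate."""
        (start_a, end_a), (start_b, end_b) = seg_a, seg_b
        gap = max(start_b - end_a, start_a - end_b, 0.0)
        return math.exp(-DECAY * gap)

    def choose_server(segment, servers):
        """Score every server by summing the pairwise cost against every
        segment it already holds; this inner loop over all existing
        segments is what makes a full balancing pass roughly quadratic."""
        best_server, best_cost = None, float("inf")
        for server, held in servers.items():
            total = sum(pairwise_cost(segment, s) for s in held)
            if total < best_cost:
                best_server, best_cost = server, total
        return best_server

    # With 10,000 segments, placing each one means scoring it against
    # every segment already assigned, i.e. on the order of 10^8 cost
    # evaluations per pass.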

Once we switched to the deprecated "random model", all historical nodes started up right away.
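For reference, in Druid versions where the balancer strategy is still exposed, the switch can be made through the coordinator dynamic config API (the /druid/coordinator/v1/config endpoint and the balancerStrategy field are in the Druid docs; whether your version still accepts "random" is an assumption, and the coordinator address below is a placeholder). A minimal sketch:

    import json
    import urllib.request

    # Placeholder coordinator address; adjust for your cluster.
    COORDINATOR = "http://localhost:8081"
    CONFIG_URL = COORDINATOR + "/druid/coordinator/v1/config"

    # Fetch the current dynamic config first, then flip only the
    # strategy, so the rest of the config is posted back unchanged.
    with urllib.request.urlopen(CONFIG_URL) as resp:
        config = json.load(resp)

    config["balancerStrategy"] = "random"   # skip the cost calculation entirely

    req = urllib.request.Request(
        CONFIG_URL,
        data=json.dumps(config).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("coordinator responded:", resp.status)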

So the questions are:

  1. Is the cost model critical for Druid?

  2. Is there any optimization option that can speed up the cost model calculation?

  3. Why did Druid drop the "random model" rather than keep it as an option? How much worse are the results this model produces?

Duplicate of https://groups.google.com/forum/#!topic/druid-development/4aGtCxnBOcI