[druid-user] Segment Load Scaling issue

Hi guys,

I have the below setup for my Druid cluster.
2 Coordinator/Overlord servers (m5.4xlarge)
3 Query servers
25 Data servers (i3en.24xlarge)

Coordinator config:

druid.service=druid/coordinator
druid.plaintextPort=8081

druid.coordinator.startDelay=PT10S
druid.coordinator.period=PT5S

#Run the overlord service in the coordinator process
druid.coordinator.asOverlord.enabled=false
#druid.coordinator.asOverlord.overlordService=druid/overlord

druid.indexer.queue.startDelay=PT5S

druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
druid.indexer.storage.recentlyFinishedThreshold=PT15M
druid.indexer.runner.pendingTasksRunnerNumThreads=50
druid.coordinator.loadqueuepeon.http.batchSize=10
druid.manager.config.pollDuration=PT10M
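
As a side note, the per-coordinator-run load limits (maxSegmentsInNodeLoadingQueue, maxSegmentsToMove, replicationThrottleLimit) live in the coordinator dynamic config rather than in runtime.properties. Below is a minimal Python sketch of how I pull them to check; the coordinator URL is a placeholder for my environment:

# Minimal sketch: fetch the coordinator dynamic config to see the
# current load-queue and replication throttle settings.
import json
import urllib.request

COORDINATOR = "http://coordinator-host:8081"  # placeholder

with urllib.request.urlopen(f"{COORDINATOR}/druid/coordinator/v1/config") as resp:
    dynamic_config = json.loads(resp.read())

# Settings most relevant to how quickly segments get assigned to historicals
for key in ("maxSegmentsInNodeLoadingQueue",
            "maxSegmentsToMove",
            "replicationThrottleLimit"):
    print(key, "=", dynamic_config.get(key))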

Historical config:

druid.service=druid/historical
druid.plaintextPort=8083

#HTTP server threads
druid.server.http.numThreads=150

#Processing threads and buffers
druid.processing.buffer.sizeBytes=500000000
druid.processing.numMergeBuffers=4
druid.processing.numThreads=15
druid.processing.tmpDir=/mnt/disk2/var/druid/processing

#Segment storage
druid.segmentCache.locations=[{"path":"/mnt/disk2/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk3/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk4/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk5/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk6/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk7/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk8/var/druid/druidSegments", "maxSize": 6500000000000},{"path":"/mnt/disk9/var/druid/druidSegments", "maxSize": 6500000000000}]
druid.server.maxSize=50000000000000

#Query cache
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=caffeine
druid.cache.sizeInBytes=256000000
druid.segmentCache.numLoadingThreads=50
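
To see whether the historicals or the coordinator assignment is the bottleneck, I can poll the coordinator's load queue per historical. A minimal sketch (the coordinator URL is a placeholder):

# Minimal sketch: dump the simple load-queue view, one entry per server.
import json
import urllib.request

COORDINATOR = "http://coordinator-host:8081"  # placeholder

with urllib.request.urlopen(f"{COORDINATOR}/druid/coordinator/v1/loadqueue?simple") as resp:
    load_queue = json.loads(resp.read())

# segmentsToLoad shows how many segments are queued on each historical
for server, status in load_queue.items():
    print(server, "segmentsToLoad =", status.get("segmentsToLoad"))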

My ingestion jobs generate around 7k segments every 15 minutes. I was able to scale the ingestion jobs themselves, but the segments take a long time to become available for queries. I am using S3 as deep storage.
Currently only about 320 segments are being loaded per minute.
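
(For reference, the load rate can be sampled like this through the Druid SQL API; this is just a sketch, and the query-server URL is a placeholder, not necessarily how I got the number above.)

# Minimal sketch: sample the count of available segments twice,
# one minute apart, to estimate the load rate.
import json
import time
import urllib.request

QUERY_SERVER = "http://query-host:8082"  # placeholder (broker/router)
SQL = {"query": "SELECT COUNT(*) AS cnt FROM sys.segments WHERE is_available = 1"}

def available_segments():
    req = urllib.request.Request(
        f"{QUERY_SERVER}/druid/v2/sql",
        data=json.dumps(SQL).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())[0]["cnt"]

before = available_segments()
time.sleep(60)
after = available_segments()
print("segments loaded in the last minute:", after - before)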

What can be done to speed up the segment loading? Any ideas?
