Queries to historical data become really slow after increasing the disk size

We increased the disk size of each historical node from 700GB to 4TB, and queries became really slow. We use r3.4xlarge EC2 instances, 8 boxes in total.

This is the configuration.

druid.service=druid/historical

General Configuration

druid.server.maxSize=3900000000000
druid.server.tier=_default_tier
druid.server.priority=0

Storing Segments

druid.segmentCache.locations=[{"path":"/etc/druid/historical/druid_cache","maxSize":3900000000000}]
druid.segmentCache.deleteOnRemove=true
druid.segmentCache.dropSegmentDelayMillis=30000
druid.segmentCache.announceIntervalMillis=5000
druid.segmentCache.numLoadingThreads=6

Query Configs

druid.server.http.numThreads=50
druid.server.http.maxIdleTime=PT5m

Processing

druid.processing.buffer.sizeBytes=1073741824
druid.processing.formatString=processing-%s
druid.processing.numThreads=7
druid.processing.columnCache.sizeBytes=0
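As a sanity check on the processing settings above: Druid allocates one off-heap buffer per processing thread plus one for merging, so the direct memory requirement is roughly `(numThreads + 1) * buffer.sizeBytes` for this Druid generation. A quick sketch with the values from this config:

```shell
# Rough direct-memory check, using the values from the config above.
# Druid needs about (numThreads + 1) * buffer.sizeBytes of direct memory.
BUFFER_BYTES=1073741824   # druid.processing.buffer.sizeBytes (1 GiB)
NUM_THREADS=7             # druid.processing.numThreads
NEEDED=$(( (NUM_THREADS + 1) * BUFFER_BYTES ))
echo "direct memory needed: $NEEDED bytes"
```

That comes to 8 GiB, which fits under the 9g `-XX:MaxDirectMemorySize` in the JVM flags below, so direct memory is unlikely to be the problem here.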

General Query Configuration

druid.query.groupBy.maxIntermediateRows=50000
druid.query.groupBy.maxResults=500000

Search Query Config

druid.query.search.maxSearchLimit=1000

Caching

druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.historical.cache.unCacheable=["groupBy", "select"]


This is the JVM command we use to start the node:

java -server -Xmx12g -Xms12g -XX:NewSize=6g -XX:MaxNewSize=6g -XX:MaxDirectMemorySize=9g -XX:+UseConcMarkSweepGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.io.tmpdir=/tmp -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Dcom.sun.management.jmxremote.port=17071 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -classpath .:/usr/local/lib/druid/lib/* io.druid.cli.Main server historical


We have hourly granularity, and each segment is around 400-600MB depending on peak traffic.

This looks more like a disk read/write speed issue. The new disks may simply be slower; going from SSDs to spinning disks, for example, can cost you a factor of 10.
Keep in mind that your segments are memory-mapped files handled by OS paging, so whenever pages have to be faulted in from disk, read speed matters a lot.
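There is also a second effect worth checking: with 5-6x more data per node, a much smaller fraction of the segments fits in the OS page cache, so more queries hit the disk. A back-of-the-envelope sketch (an r3.4xlarge has roughly 122 GiB of RAM; the heap and direct memory figures come from the JVM flags above):

```shell
# Rough page-cache arithmetic. RAM figure is the r3.4xlarge spec;
# heap and direct memory come from the JVM flags above.
TOTAL_GB=122
HEAP_GB=12
DIRECT_GB=9
CACHE_GB=$(( TOTAL_GB - HEAP_GB - DIRECT_GB ))   # left for the OS page cache
OLD_SEGMENTS_GB=700
NEW_SEGMENTS_GB=3900
echo "cacheable before: $(( 100 * CACHE_GB / OLD_SEGMENTS_GB ))%"
echo "cacheable after:  $(( 100 * CACHE_GB / NEW_SEGMENTS_GB ))%"
```

That drops the cacheable fraction from roughly 14% to roughly 2% of the local segments, so even with identical disks you would expect many more page faults to reach the disk than before.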

Again, this is a guess, but looking at the Druid metrics logs should confirm or bust it.
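To check the raw-disk-speed theory directly, you can run a quick sequential-read test against the segment cache volume. A sketch (the path is the segment cache location from the config above; the scratch file name is made up, and dropping the page cache needs root):

```shell
# Rough sequential-read check on the segment cache volume.
# Writes and removes a 1 GiB scratch file; name is hypothetical.
TESTFILE=/etc/druid/historical/druid_cache/ddtest.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=1024 conv=fsync
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop the page cache so the read hits the disk
dd if="$TESTFILE" of=/dev/null bs=1M                 # the reported rate approximates raw read throughput
rm -f "$TESTFILE"
```

Running this on one of the old 700GB volumes (if any are still around) and on a new 4TB volume would tell you immediately whether the new disks are the slower part.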