Historical doesn't download segments from S3

Hi,
I have some indexes set up in S3 and would now like to query on top of them. I have set up a simple one-node historical and one-node coordinator. I am unsure why the segments are not being downloaded from S3 to the historical node, and I see no exceptions in the logs of either the coordinator or the historical. How can I ensure that the historical node is communicating with S3 and starts downloading segments?
Here are the configurations:
common.runtime.properties

# Extensions (no deep storage module is listed - using local fs for deep storage - not recommended for production)
# Also, for production, to use mysql add "io.druid.extensions:mysql-metadata-storage"
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions"]
druid.extensions.localRepository=extensions-repo

# Zookeeper
druid.zk.service.host=localhost

# Metadata Storage (use something like mysql in production by uncommenting properties below)
# by default druid will use derby
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

# Deep storage (local filesystem for examples - don't use this in production)
#druid.storage.type=local
#druid.storage.storageDirectory=/tmp/druid/localStorage

# Deep storage
druid.storage.type=s3
druid.s3.accessKey=XXX
druid.s3.secretKey=XXX
druid.storage.bucket=tpch
druid.storage.baseKey=data

# Query Cache (we use a simple 10mb heap-based local cache on the broker)
druid.cache.type=local
druid.cache.sizeInBytes=10000000

# Indexing service discovery
druid.selectors.indexing.serviceName=overlord

# Monitoring (disabled for examples, if you enable SysMonitor, make sure to include sigar jar in your cp)
druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]

# Metrics logging (disabled for examples - change this to logging or http in production)
druid.emitter=logging
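
As a quick sanity check on the S3 settings above, a bucket listing with the AWS CLI (just a sketch - it assumes the CLI is installed and the same keys are exported) should show the segment objects:

  # Same credentials as in common.runtime.properties
  export AWS_ACCESS_KEY_ID=XXX
  export AWS_SECRET_ACCESS_KEY=XXX
  # List everything under the configured bucket/baseKey
  aws s3 ls s3://tpch/data/ --recursive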

Historical (runtime.properties):

Hi Suraj, there are a few things to check:

  1. Make sure the segment is actually in S3

  2. Search for exceptions in logs of historical
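
For 1, an object listing like the aws s3 ls shown earlier in the thread works. For 2, something along these lines (the log path is only an example - it depends on how the historical was launched):

  # Scan the historical log for segment load failures
  grep -iE "exception|error|failed" /var/log/druid/historical.log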

Usually when we see this problem it is because deep storage is incorrectly configured in the common.runtime.properties.

By incorrectly configured, I mean that the thing that created the segment wrote it to the local filesystem instead of S3.
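
One way to check which deep storage a segment actually went to is to look at its loadSpec in the metadata store (a sketch, assuming the default druid_segments table name and the mysql settings from the config above) - a segment pushed to local disk shows "type":"local" in its loadSpec, while an S3 segment shows "type":"s3_zip":

  # Dump the loadSpec JSON of the segments the coordinator knows about
  mysql -u druid -pdiurd druid -e "SELECT id, payload FROM druid_segments WHERE used = 1\G"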

Also, any exceptions on the coordinator will help.

Hi Fangjin,

You are right. My S3 configuration had issues. Once I configured S3 correctly, the historical started pulling segments from S3. Thanks for the help!

Regards,
Suraj