Unable to use S3 for deep storage

Hi,

I have been following the documentation on setting up my local Druid cluster to use S3 for deep storage.

My AWS region is us-east-1.

  • Add -Daws.region=us-east-1 to the jvm.config file for all Druid services.
  • Add -Daws.region=us-east-1 to druid.indexer.runner.javaOpts in middleManager/runtime.properties so that the property is passed to the peon (worker) processes. (See the example below.)
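For reference, here is roughly what that looks like. The paths, heap sizes, and other flags are illustrative; only the -Daws.region line is the actual change:

# broker/jvm.config (one JVM argument per line; sizes illustrative)
-server
-Xms512m
-Xmx512m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Daws.region=us-east-1

# middleManager/runtime.properties
druid.indexer.runner.javaOpts=-server -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Daws.region=us-east-1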

In _common/common.runtime.properties I made the following changes (everything works fine until I make the changes below):

druid.extensions.loadList=["druid-s3-extensions", "druid-hdfs-storage", "druid-kafka-indexing-service"]

#druid.storage.type=local
#druid.storage.storageDirectory=var/druid/segments

druid.storage.type=s3

druid.storage.bucket=druid-qa

druid.storage.baseKey=druid/segments

druid.s3.fileSessionCredentials=gov_ec2_role_default

#druid.indexer.logs.type=file

#druid.indexer.logs.directory=var/druid/indexing-logs

druid.indexer.logs.type=s3

druid.indexer.logs.s3Bucket=druid-qa

druid.indexer.logs.s3Prefix=druid/indexing-logs


I use druid.s3.fileSessionCredentials to assume a role instead of using an access key and secret key.

https://groups.google.com/forum/#!topic/druid-user/Lu_3XDi2l4w
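For what it's worth, my understanding from the Druid docs is that druid.s3.fileSessionCredentials should point to a properties file containing the session credentials, one key=value pair per line, along these lines (the path and values below are placeholders):

# /path/to/s3-session-credentials.properties (placeholder path)
sessionToken=<session-token>
accessKey=<access-key>
secretKey=<secret-key>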

After the cluster starts up, I push some data via the Tranquility service. The requests return a 200 response, but the data isn't inserted. Any pointers would be helpful. Thanks.
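For reference, I post events roughly like this (host, port, datasource name, and payload are illustrative):

# post a single test event to Tranquility Server
curl -X POST -H 'Content-Type: application/json' \
  -d '{"timestamp":"2019-04-04T06:30:00Z","page":"test","count":1}' \
  http://localhost:8200/v1/post/my-datasource

If I read the Tranquility docs correctly, a 200 response whose body reports "sent":0 means the events were received but dropped (for example, because their timestamps fall outside the windowPeriod), so the response body is worth checking, not just the status code.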

Notice the errors in the following logs:

broker.log:2019-04-04T06:27:28,337 ERROR [main] org.apache.druid.curator.discovery.ServerDiscoverySelector - No server instance found for [druid/coordinator]

historical.log:2019-04-04T06:27:28,229 ERROR [main] org.apache.druid.curator.discovery.ServerDiscoverySelector - No server instance found for [druid/coordinator]
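That error suggests the coordinator never announced itself in ZooKeeper. One way I can check (assuming a local ZooKeeper and the default druid.zk.paths.base of /druid) is to list the discovery node:

# run from the ZooKeeper installation directory
bin/zkCli.sh -server localhost:2181 ls /druid/discovery
# an empty or missing node means no Druid services have registered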


Hi

Could you please try setting the following parameters as well, filling in the missing values?

druid.storage.type=s3

druid.storage.bucket=test-us-east-1

druid.storage.baseKey=druid/segments

druid.s3.accessKey=XXXXXXXXXXX

druid.s3.secretKey=XXXXXXXXXXX

Thanks for the response, Venkat.

I followed your recommendation and added the access key and secret key, then tried bringing the cluster up. I'm still facing the same error in the historical and broker logs.

There are also warnings in the coordinator log:

2019-04-04T17:08:42,782 WARN [main] org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

2019-04-04T17:08:48,550 WARN [main] org.apache.curator.retry.ExponentialBackoffRetry - maxRetries too large (30). Pinning to 29

2019-04-04T17:08:57,466 WARN [main] org.apache.druid.java.util.common.RetryUtils - Retrying (1 of 9) in 1,123ms.

2019-04-04T17:08:58,721 WARN [DatabaseSegmentManager-Exec--0] org.apache.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!

2019-04-04T17:08:59,037 WARN [Curator-Framework-0] org.apache.curator.utils.ZKPaths - The version of ZooKeeper being used doesn't support Container nodes. CreateMode.PERSISTENT will be used instead.

2019-04-04T17:09:58,726 WARN [DatabaseSegmentManager-Exec--0] org.apache.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!

2019-04-04T17:10:58,729 WARN [DatabaseSegmentManager-Exec--0] org.apache.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!

2019-04-04T17:11:58,731 WARN [DatabaseSegmentManager-Exec--0] org.apache.druid.metadata.SQLMetadataSegmentManager - No segments found in the database!