S3 deep storage using IAM role instead of access key and secret key in the config

Hi,

Has anyone successfully set up S3 for deep storage without using an access key and secret key? If you did, could you please provide more info on how it was done?

https://groups.google.com/forum/#!topic/druid-user/Lu_3XDi2l4w I followed this thread, but it did not work for me.

Appreciate any input.

Thanks.

With recent versions of Druid (0.13+), we use the AWS SDK, so you can omit the access/secret key properties and use any of the standard methods for specifying credentials (environment variables, a credentials file, or instance roles).
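
For example, a minimal common.runtime.properties sketch would look something like the one below (the bucket and prefix names are placeholders; adjust them for your setup). The druid.s3.accessKey / druid.s3.secretKey properties are simply left out, so the SDK falls back to its default credential provider chain:

  druid.extensions.loadList=["druid-s3-extensions"]

  # Deep storage on S3
  druid.storage.type=s3
  druid.storage.bucket=my-druid-bucket
  druid.storage.baseKey=druid/segments

  # Indexing task logs on S3
  druid.indexer.logs.type=s3
  druid.indexer.logs.s3Bucket=my-druid-bucket
  druid.indexer.logs.s3Prefix=druid/indexing-logs

  # Note: no druid.s3.accessKey / druid.s3.secretKey here; the AWS SDK
  # default chain (env vars, credentials file, instance profile) is used.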

Thanks, Gian. It looks like this approach still requires a temporary credential saved in the credentials file? Am I correct? Sorry, I may be asking questions every developer on AWS would know; I am a newbie in that area. Thanks again.

You have to create an IAM role with full access to the S3 bucket and attach that role to the EC2 instances you are running the Druid cluster on. The AWS SDK will pick this up automatically, since the user (SAML or IAM) running the service will assume the role attached to the EC2 instance.
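
Roughly, the policy on that role looks like the one below (my-druid-bucket is a placeholder; scope it to your own bucket):

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:PutObject",
          "s3:DeleteObject",
          "s3:ListBucket"
        ],
        "Resource": [
          "arn:aws:s3:::my-druid-bucket",
          "arn:aws:s3:::my-druid-bucket/*"
        ]
      }
    ]
  }

Attach the role to the instances through an instance profile; nothing needs to change in the Druid config itself.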

I have set it up this way. Let me know if this doesn't help; we can debug.

Thanks, Karthik. That helped me get further. I am now seeing index logs in the S3 bucket.

However, I am still unable to load data to the deep storage location. I have started a new thread on that problem:

https://groups.google.com/forum/#!msg/druid-user/D_2-JFLNy74/xcXtffaXCgAJ

Do you have any ideas? Thanks.