How to read from S3 buckets in different regions

Hi,

One of the source buckets for my data is in the us-east-2 region. While ingesting data from it I was getting a 400 Bad Request error.

I searched around and found that we need to add a jets3t.properties file in the _common folder. The file looks like this:

s3service.s3-endpoint=s3.us-east-2.amazonaws.com

storage-service.request-signature-version=AWS4-HMAC-SHA256
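
For anyone following along, here is the same file spelled out with comments. As I understand it, us-east-2 is one of the newer regions that only accepts Signature Version 4 requests, so both the regional endpoint and the signature version have to be set:

# jets3t.properties, placed in the _common config folder so it ends up on the classpath
# Regional endpoint of the source bucket (us-east-2 in my case)
s3service.s3-endpoint=s3.us-east-2.amazonaws.com
# Newer regions such as us-east-2 only accept Signature Version 4 signing
storage-service.request-signature-version=AWS4-HMAC-SHA256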

With that in place I was able to read from this bucket. However, now when I try to read from a bucket in another region (us-west-2), my indexing task fails with:

2017-06-02T14:23:57,401 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_tracker_json_2017-06-02T14:23:51.379Z, type=index_hadoop, dataSource=tracker_json}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:211) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:176) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	... 7 more
Caused by: java.lang.RuntimeException: org.apache.http.client.ClientProtocolException
	at io.druid.indexer.hadoop.FSSpideringIterator.spiderPathPropagateExceptions(FSSpideringIterator.java:45) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.hadoop.FSSpideringIterator$1.iterator(FSSpideringIterator.java:55) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.path.GranularityPathSpec.addInputPaths(GranularityPathSpec.java:144) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.HadoopDruidIndexerConfig.addInputPaths(HadoopDruidIndexerConfig.java:389) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.JobHelper.ensurePaths(JobHelper.java:337) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:55) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:306) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	... 7 more
Caused by: org.apache.http.client.ClientProtocolException
	at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:886) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[httpclient-4.5.1.jar:4.5.1]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:328) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:279) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:1052) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2264) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2193) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1120) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:575) ~[jets3t-0.9.4.jar:0.9.4]
	at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:121) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
	at org.apache.hadoop.fs.s3native.$Proxy197.retrieveMetadata(Unknown Source) ~[?:?]
	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:468) ~[?:?]
	at io.druid.indexer.hadoop.FSSpideringIterator.spiderPathPropagateExceptions(FSSpideringIterator.java:38) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.hadoop.FSSpideringIterator$1.iterator(FSSpideringIterator.java:55) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.path.GranularityPathSpec.addInputPaths(GranularityPathSpec.java:144) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.HadoopDruidIndexerConfig.addInputPaths(HadoopDruidIndexerConfig.java:389) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.JobHelper.ensurePaths(JobHelper.java:337) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:55) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:306) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	... 7 more
Caused by: org.apache.http.ProtocolException: Received redirect response HTTP/1.1 301 Moved Permanently but no location header
	at org.apache.http.impl.client.DefaultRedirectStrategy.getLocationURI(DefaultRedirectStrategy.java:136) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.DefaultRedirectStrategy.getRedirect(DefaultRedirectStrategy.java:220) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.DefaultRequestDirector.handleResponse(DefaultRequestDirector.java:1084) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:515) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82) ~[httpclient-4.5.1.jar:4.5.1]
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55) ~[httpclient-4.5.1.jar:4.5.1]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:328) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:279) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestHead(RestStorageService.java:1052) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectImpl(RestStorageService.java:2264) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.impl.rest.httpclient.RestStorageService.getObjectDetailsImpl(RestStorageService.java:2193) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:1120) ~[jets3t-0.9.4.jar:0.9.4]
	at org.jets3t.service.StorageService.getObjectDetails(StorageService.java:575) ~[jets3t-0.9.4.jar:0.9.4]
	at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:121) ~[?:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) ~[?:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
	at org.apache.hadoop.fs.s3native.$Proxy197.retrieveMetadata(Unknown Source) ~[?:?]
	at org.apache.hadoop.fs.s3native.NativeS3FileSystem.listStatus(NativeS3FileSystem.java:468) ~[?:?]
	at io.druid.indexer.hadoop.FSSpideringIterator.spiderPathPropagateExceptions(FSSpideringIterator.java:38) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.hadoop.FSSpideringIterator$1.iterator(FSSpideringIterator.java:55) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.path.GranularityPathSpec.addInputPaths(GranularityPathSpec.java:144) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.HadoopDruidIndexerConfig.addInputPaths(HadoopDruidIndexerConfig.java:389) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.JobHelper.ensurePaths(JobHelper.java:337) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:55) ~[druid-indexing-hadoop-0.10.0-iap5.jar:0.10.0-iap5]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:306) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) ~[druid-indexing-service-0.10.0-iap5.jar:0.10.0-iap5]
	... 7 more
2017-06-02T14:23:57,421 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_tracker_json_2017-06-02T14:23:51.379Z] status changed to [FAILED].
2017-06-02T14:23:57,423 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_tracker_json_2017-06-02T14:23:51.379Z",
  "status" : "FAILED",
  "duration" : 2580
}

Since jets3t.properties seems to take only a single s3service.s3-endpoint, is it possible to read from buckets in both regions with a single setup?

Also, does the workaround described in this thread still work?

https://groups.google.com/forum/#!topic/druid-user/vpAOj9KIoTg