Hadoop index task fails to store segments to S3

Hi Guys,

We have gotten the Hadoop index task working for batch ingestion with segments stored in HDFS.

But when we set druid.storage.type=s3, it does not seem to take effect, and the output segments are still stored in a local directory.
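
For reference, the deep storage properties we are setting look roughly like this (the bucket name, base key, and credentials below are placeholders, not our real values):

  druid.storage.type=s3
  druid.storage.bucket=<your bucket>
  druid.storage.baseKey=druid/segments
  druid.s3.accessKey=<access key>
  druid.s3.secretKey=<secret key>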

  1. Is it possible to store the output in S3 after the Hadoop index task completes?

  2. If not, can we store the batch ingestion results in HDFS while realtime ingestion stores its results in S3? Can historical nodes download segments from both S3 and HDFS in one cluster?

Any advice would be appreciated.

Best,

Hi,
Batch ingestion with S3 as deep storage should work fine; this sounds like a configuration issue.

This is most likely caused by not including druid-s3-extensions in your runtime.properties.

If you have already included it, can you share your runtime.properties so we can look into the details?
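
As a minimal sketch, assuming a 0.8.x release, the relevant runtime.properties lines would look something like this (adjust the version to match your deployment; the bucket and credentials are placeholders):

  # Load the S3 extension (0.8.x style; newer releases use druid.extensions.loadList instead)
  druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.8.1"]

  # Point deep storage at S3 instead of local disk
  druid.storage.type=s3
  druid.storage.bucket=<your bucket>
  druid.storage.baseKey=druid/segments
  druid.s3.accessKey=<access key>
  druid.s3.secretKey=<secret key>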

Yes, it works now. Closing this thread. Thank you, Nishant.

On Friday, October 16, 2015 at 10:08:03 PM UTC+8, Nishant Bangarwa wrote: