We have gotten the Hadoop index task working for batch ingestion with segments stored in HDFS.
But when we set druid.storage.type=s3, it seems to have no effect, and the output segments are still written to a local directory.
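For reference, the relevant properties we set look roughly like this (the bucket name, base key, and credentials below are placeholders, not our real values):

```properties
# Deep storage settings in common.runtime.properties
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments

# S3 credentials (placeholders)
druid.s3.accessKey=AKIAXXXXXXXX
druid.s3.secretKey=XXXXXXXXXXXX
```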
Is it possible to store the output in S3 after the Hadoop index task completes?
If not, could we store the batch ingestion results in HDFS while the realtime nodes store their results in S3? Can historical nodes download segments from both S3 and HDFS in one cluster?
We'd appreciate your thoughts.