Does Druid batch load support exponential backoff or retry on failure?

Hi,

We recently started doing batch loads, with source JSON files in AWS S3 and indexing jobs running on AWS EMR. Since we can auto-scale EMR, we are able to run over 100 batch load tasks concurrently against our Druid cluster and EMR setup. However, we have started hitting a performance bottleneck on AWS S3: random batch tasks fail with the error 'java.io.IOException: s3n://fs-bigdata-dev : 503 : Slow Down'.

I wonder if Druid batch load supports exponential backoff or retry on failure, or what other options we have to avoid this issue. We are currently running Druid 0.10.1 on EMR 5.10.0 with Hadoop 2.7.3.
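
For what it's worth, the only relevant knob I have found so far is passing Hadoop properties through the ingestion spec's tuningConfig.jobProperties. Below is a rough sketch of the fragment I have in mind; the fs.s3a.* property names come from the Hadoop docs and would only apply if we switched our input paths from s3n:// to s3a://, so this is just a guess on my part, not something we have verified:

  "tuningConfig": {
    "type": "hadoop",
    "jobProperties": {
      "fs.s3a.attempts.maximum": "30",
      "fs.s3a.connection.maximum": "50"
    }
  }

Would something along these lines help with the 503 throttling, or is there a retry setting on the Druid side itself?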

Thanks

Hong