Task failed when trying to access S3

Hi guys,

I have a Druid cluster running in AWS with S3 deep storage. Most of the tasks are successful, but some fail; for example, one task produced the error log below.
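Looking at the log, the uploads to S3 succeed but the delete step of the rename gets a 403 AccessDenied on every retry, so I suspect the credentials the indexing task uses are missing s3:DeleteObject on the deep-storage prefix. For reference, a policy along these lines (bucket name and prefix are placeholders, not my real ones) would cover the S3 operations visible in the log:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-bucket/myfolder/*"
    }
  ]
}
```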

2018-12-10T03:20:43,258 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[/opt/switchdin/druid-work-dir/middleManager/druid/hadoop-tmp/MyDataSource/2018-12-10T031815.958Z_f860e1b05e234f779ba0a77765385496]

2018-12-10T03:20:43,274 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[AbstractTask{id='MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29', groupId='MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29', taskResource=TaskResource{availabilityGroup='MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29', requiredCapacity=1}, dataSource='MyDataSource', context={}}]

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException

at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:222) ~[druid-indexing-service-0.12.3.jar:0.12.3]

at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:238) ~[druid-indexing-service-0.12.3.jar:0.12.3]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444) [druid-indexing-service-0.12.3.jar:0.12.3]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416) [druid-indexing-service-0.12.3.jar:0.12.3]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: java.lang.reflect.InvocationTargetException

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:219) ~[druid-indexing-service-0.12.3.jar:0.12.3]

… 7 more

Caused by: io.druid.java.util.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!

at io.druid.indexer.JobHelper.runJobs(JobHelper.java:391) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:293) ~[druid-indexing-service-0.12.3.jar:0.12.3]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:219) ~[druid-indexing-service-0.12.3.jar:0.12.3]

… 7 more

2018-12-10T03:20:43,279 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29] status changed to [FAILED].

2018-12-10T03:20:43,281 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {

"id" : "MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29",

"status" : "FAILED",

"duration" : 143631

}
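For context on the log below: a "rename" on s3n is copy-then-delete. JobHelper copies index.zip.0 to index.zip and then deletes the .0 object, retrying with backoff; in my log the copy half works (hence the repeated "existed, but wasn't the same" messages) while the delete half is refused every try. A toy sketch of that pattern (a dict stands in for the bucket; this is illustrative only, not Druid's actual code):

```python
import time

def rename_with_retries(store, src, dst, can_delete, max_tries=3, base_delay=0.0):
    """Copy-then-delete 'rename', as the s3n filesystem does it.
    `store` is a dict standing in for the bucket; `can_delete` simulates
    whether the credentials hold delete permission on the prefix."""
    for attempt in range(1, max_tries + 1):
        store[dst] = store[src]           # the copy half (succeeds in the log)
        if can_delete:
            del store[src]                # the delete half (403s in the log)
            return True
        time.sleep(base_delay * attempt)  # backoff between retries
    return False

bucket = {"index.zip.0": b"segment-bytes"}
ok = rename_with_retries(bucket, "index.zip.0", "index.zip", can_delete=False)
# without delete permission the rename never completes, and both
# index.zip and index.zip.0 are left behind, matching the log
```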

The bulk of the full error log is below.

2018-12-10T03:18:47,225 INFO [pool-29-thread-1] org.apache.hadoop.fs.s3native.NativeS3FileSystem - OutputStream for key 'myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0' closed. Now beginning upload

2018-12-10T03:18:47,316 INFO [pool-29-thread-1] org.apache.hadoop.fs.s3native.NativeS3FileSystem - OutputStream for key 'myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0' upload complete

2018-12-10T03:18:47,316 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - Zipped 1,336,714 bytes to [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0]

2018-12-10T03:18:47,375 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - Attempting rename from [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0] to [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip]

2018-12-10T03:18:47,634 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 14% reduce 0%

2018-12-10T03:18:48,052 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 1, retrying in 1,085ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>0B16189A1C213401</RequestId><HostId>QAI5RAbD3NizdNdXDvNm2yeNuc0zVaAFLDOnP8gxCyuFKi84NaU3/g8/K/rmzWtyStkuut8Wbrc=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.rename(NativeS3FileSystem.java:708) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:654) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:18:49,166 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce

2018-12-10T03:18:49,203 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:18:49,257 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 2, retrying in 1,965ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>2F8C1B4E89146DF8</RequestId><HostId>MU2VYSsUGG6t7uAlTEWbUfHJ1Q122w5+9zZvBsbra516c8scq3/4ze8NbhsqHvytJ5JfCASFgV0=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:18:49,717 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:18:51,282 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:18:51,340 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 3, retrying in 3,823ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>8EAF70ABFA37D962</RequestId><HostId>y8hWsyNd/Pz0Km3em8mPrOWC6/lINFHNYu0gkXulWlhezCpdqNziOhgOdeNsGIL5vjFj2z7FKfg=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:18:52,167 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce

2018-12-10T03:18:52,718 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:18:55,235 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:18:55,297 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 4, retrying in 6,266ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>DB50641B56427ADA</RequestId><HostId>NgjHjQw0oVXQnRwrQm/hySg1WWyhIsPwiKVe2DCxQYhOpooyiolgMvd29bsTCav1Zd96dVVXzIs=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:18:55,718 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:18:56,637 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 15% reduce 0%

2018-12-10T03:18:58,719 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:01,689 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:19:01,720 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:01,732 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 5, retrying in 18,127ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>7AE10055A3808FC8</RequestId><HostId>TMtJ50yq4cVvYH+WfR5PgSFG9fDpIbUsC5m6rqRZF82VpkE9Ncs5ur2eLSWZVt+Y2MHf2EitF9E=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:19:02,639 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 16% reduce 0%

2018-12-10T03:19:04,721 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:07,722 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:08,641 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 17% reduce 0%

2018-12-10T03:19:10,722 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:13,723 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:14,644 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 18% reduce 0%

2018-12-10T03:19:16,724 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:19,724 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:19,979 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:19:20,023 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 6, retrying in 32,205ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>DD478928AEB8D370</RequestId><HostId>TzvofCA7S1Rmf6VY51zoIIdtEYg95mbGnmNIQEBHVRSNgf3x5SlRnZSztS0LgT4uNQC46rGj95k=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:19:20,646 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 19% reduce 0%

2018-12-10T03:19:22,725 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:25,726 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:26,649 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 20% reduce 0%

2018-12-10T03:19:28,726 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:31,727 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:32,651 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 21% reduce 0%

2018-12-10T03:19:34,728 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:37,728 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:38,656 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 22% reduce 0%

2018-12-10T03:19:40,729 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:43,730 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:44,662 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 23% reduce 0%

2018-12-10T03:19:46,730 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:49,731 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:50,664 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 24% reduce 0%

2018-12-10T03:19:52,346 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:19:52,397 WARN [pool-29-thread-1] io.druid.java.util.common.RetryUtils - Failed on try 7, retrying in 49,744ms.

org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>DD4E2CFF933A0A85</RequestId><HostId>Ih8k0R9EyjHW92V4EEP0lliE3j0wdcWzWLGOWrbipExlI9UhQbxWLx9/9PWC3XVkdRde/CCLZAk=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) [java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) [java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) [druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) [hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) [hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

… 25 more

2018-12-10T03:19:52,732 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:55,732 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:58,733 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:19:59,667 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 25% reduce 0%

2018-12-10T03:20:01,734 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:04,734 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:05,669 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 26% reduce 0%

2018-12-10T03:20:07,735 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:10,735 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:11,671 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 27% reduce 0%

2018-12-10T03:20:13,736 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:16,737 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:17,673 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 28% reduce 0%

2018-12-10T03:20:19,737 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:22,738 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:23,674 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 29% reduce 0%

2018-12-10T03:20:25,738 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:28,739 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:29,677 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 30% reduce 0%

2018-12-10T03:20:31,740 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:34,740 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:35,679 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 31% reduce 0%

2018-12-10T03:20:37,741 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:40,742 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - map > map

2018-12-10T03:20:41,680 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 32% reduce 0%

2018-12-10T03:20:42,266 INFO [pool-29-thread-1] io.druid.indexer.JobHelper - File[s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip / 2018-12-10T03:18:48.000Z / 453947B] existed, but wasn't the same as [s3n://my-bucket/myfolder/data/MyDataSource/2018-11-29T00:00:00.000Z_2018-11-30T00:00:00.000Z/2018-12-10T03:18:15.958Z/0/index.zip.0 / 2018-12-10T03:18:48.000Z / 453947B]

2018-12-10T03:20:42,327 INFO [Thread-70] org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.

2018-12-10T03:20:42,355 WARN [Thread-70] org.apache.hadoop.mapred.LocalJobRunner - job_local1775894961_0002

java.lang.Exception: java.lang.RuntimeException: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>F48F805D5EE26D98</RequestId><HostId>RXETAxOO1iQR/D4o+dQuEu6MbF6XXMvSf0XVdYTdCC2GQTs5pkRxEq/JV5JQz819/NwLQ75XdPk=</HostId></Error>

at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529) [hadoop-mapreduce-client-common-2.7.3.jar:?]

Caused by: java.lang.RuntimeException: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>F48F805D5EE26D98</RequestId><HostId>RXETAxOO1iQR/D4o+dQuEu6MbF6XXMvSf0XVdYTdCC2GQTs5pkRxEq/JV5JQz819/NwLQ75XdPk=</HostId></Error>

at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:665) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_191]

Caused by: org.apache.hadoop.fs.s3.S3Exception: org.jets3t.service.S3ServiceException: Service Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>F48F805D5EE26D98</RequestId><HostId>RXETAxOO1iQR/D4o+dQuEu6MbF6XXMvSf0XVdYTdCC2GQTs5pkRxEq/JV5JQz819/NwLQ75XdPk=</HostId></Error>

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:464) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411) ~[hadoop-aws-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:293) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) ~[java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) ~[java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_191]

Caused by: org.jets3t.service.S3ServiceException: Service Error Message.

at org.jets3t.service.S3Service.deleteObject(S3Service.java:2380) ~[jets3t-0.9.4.jar:0.9.4]

at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.delete(Jets3tNativeFileSystemStore.java:291) ~[hadoop-aws-2.7.3.jar:?]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.3.jar:?]

at org.apache.hadoop.fs.s3native.$Proxy203.delete(Unknown Source) ~[?:?]

at org.apache.hadoop.fs.s3native.NativeS3FileSystem.delete(NativeS3FileSystem.java:459) ~[hadoop-aws-2.7.3.jar:?]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:637) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper$6.call(JobHelper.java:615) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:63) ~[java-util-0.12.3.jar:0.12.3]

at io.druid.java.util.common.RetryUtils.retry(RetryUtils.java:81) ~[java-util-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.renameIndexFiles(JobHelper.java:613) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.JobHelper.serializeOutIndex(JobHelper.java:443) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:750) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:500) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_191]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_191]

2018-12-10T03:20:43,250 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_local1775894961_0002 failed with state FAILED due to: NA

2018-12-10T03:20:43,254 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 35

File System Counters

FILE: Number of bytes read=2129344161

FILE: Number of bytes written=1769696310

FILE: Number of read operations=0

FILE: Number of large read operations=0

FILE: Number of write operations=0

S3N: Number of bytes read=408360

S3N: Number of bytes written=453947

S3N: Number of read operations=0

S3N: Number of large read operations=0

S3N: Number of write operations=0

Map-Reduce Framework

Map input records=443903

Map output records=443903

Map output bytes=255558293

Map output materialized bytes=257333917

Input split bytes=1621

Combine input records=0

Combine output records=0

Reduce input groups=1

Reduce shuffle bytes=257333917

Reduce input records=443903

Reduce output records=0

Spilled Records=1317832

Shuffled Maps =2

Failed Shuffles=0

Merged Map outputs=2

GC time elapsed (ms)=201

Total committed heap usage (bytes)=6442450944

Shuffle Errors

BAD_ID=0

CONNECTION=0

IO_ERROR=0

WRONG_LENGTH=0

WRONG_MAP=0

WRONG_REDUCE=0

File Input Format Counters

Bytes Read=0

File Output Format Counters

Bytes Written=8

2018-12-10T03:20:43,258 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[/opt/switchdin/druid-work-dir/middleManager/druid/hadoop-tmp/MyDataSource/2018-12-10T031815.958Z_f860e1b05e234f779ba0a77765385496]

2018-12-10T03:20:43,274 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[AbstractTask{id='MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29', groupId='MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29', taskResource=TaskResource{availabilityGroup='MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29', requiredCapacity=1}, dataSource='MyDataSource', context={}}]

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException

at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:222) ~[druid-indexing-service-0.12.3.jar:0.12.3]

at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:238) ~[druid-indexing-service-0.12.3.jar:0.12.3]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:444) [druid-indexing-service-0.12.3.jar:0.12.3]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:416) [druid-indexing-service-0.12.3.jar:0.12.3]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_191]

Caused by: java.lang.reflect.InvocationTargetException

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:219) ~[druid-indexing-service-0.12.3.jar:0.12.3]

… 7 more

Caused by: io.druid.java.util.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!

at io.druid.indexer.JobHelper.runJobs(JobHelper.java:391) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.12.3.jar:0.12.3]

at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:293) ~[druid-indexing-service-0.12.3.jar:0.12.3]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_191]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_191]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_191]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_191]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:219) ~[druid-indexing-service-0.12.3.jar:0.12.3]

… 7 more

2018-12-10T03:20:43,279 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29] status changed to [FAILED].

2018-12-10T03:20:43,281 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {

"id" : "MyDataSource-mqttdate-2018-12-01-datadate-2018-11-29",

"status" : "FAILED",

“duration” : 143631

}

My understanding is that Hadoop tries to access S3, the request fails with a 403 Forbidden, and after many retries the task dies. For what it's worth, I resubmitted the same task and it succeeded.

Can someone with more experience explain to me what went wrong? Could it be that I'm hitting a limit on the maximum number of connections to S3, which caused the task to fail? (This task was running alongside two others.)
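In case it helps with diagnosis: the 403 in the trace happens on a DeleteObject call (JobHelper.renameIndexFiles deletes the old index.zip before moving the new one into place), so I also double-checked what permissions the deep-storage bucket policy would need. This is just my sketch of a minimal IAM policy for the bucket/prefix shown in the log above (my-bucket/myfolder are the placeholders I used when anonymizing), not necessarily what my actual policy looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DruidDeepStorageObjects",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-bucket/myfolder/*"
    },
    {
      "Sid": "DruidDeepStorageList",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
```

If s3:DeleteObject were missing I'd expect the task to fail every time rather than intermittently, which is why I suspect something transient like a connection limit instead.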

Thank you very much, guys!

Serg