Druid error: please help us

Hello All,

We are using Hadoop 2.7.3 and Druid 0.9.2 with the Hadoop 2.7.3 extension.
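
For reference, the index task selects the Hadoop 2.7.3 client via hadoopDependencyCoordinates; a minimal sketch of that part of the task spec (ours is based on the standard wikiticker example, so the exact details may differ):

{
  "type" : "index_hadoop",
  "hadoopDependencyCoordinates" : ["org.apache.hadoop:hadoop-client:2.7.3"],
  "spec" : { ... }
}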

We are getting the following error in the MapReduce job:

2017-04-08T13:02:37,265 INFO [task-runner-0-priority-0] io.druid.indexer.DetermineHashedPartitionsJob - Job wikiticker-determine_partitions_hashed-Optional.of([2015-09-12T00:00:00.000Z/2015-09-13T00:00:00.000Z]) submitted, status available at: http://ip-172-31-31-146.us-west-2.compute.internal:8088/proxy/application_1491654582987_0003/
2017-04-08T13:02:37,266 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Running job: job_1491654582987_0003
2017-04-08T13:02:42,326 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1491654582987_0003 running in uber mode : false
2017-04-08T13:02:42,327 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 0% reduce 0%
2017-04-08T13:02:46,556 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Task Id : attempt_1491654582987_0003_m_000000_0, Status : FAILED
Error: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
2017-04-08T13:02:51,587 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Task Id : attempt_1491654582987_0003_m_000000_1, Status : FAILED
Error: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2017-04-08T13:02:55,601 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Task Id : attempt_1491654582987_0003_m_000000_2, Status : FAILED
Error: com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

2017-04-08T13:03:01,620 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job -  map 100% reduce 100%
2017-04-08T13:03:01,625 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_1491654582987_0003 failed with state FAILED due to: Task failed task_1491654582987_0003_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

2017-04-08T13:03:01,719 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 16
	Job Counters
		Failed map tasks=4
		Killed reduce tasks=1
		Launched map tasks=4
		Other local map tasks=3
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=10834
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=10834
		Total time spent by all reduce tasks (ms)=0
		Total vcore-seconds taken by all map tasks=10834
		Total vcore-seconds taken by all reduce tasks=0
		Total megabyte-seconds taken by all map tasks=11094016
		Total megabyte-seconds taken by all reduce tasks=0
	Map-Reduce Framework
		CPU time spent (ms)=0
		Physical memory (bytes) snapshot=0
		Virtual memory (bytes) snapshot=0
2017-04-08T13:03:01,720 ERROR [task-runner-0-priority-0] io.druid.indexer.DetermineHashedPartitionsJob - Job failed: job_1491654582987_0003
2017-04-08T13:03:01,720 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[var/druid/hadoop-tmp/wikiticker/2017-04-08T130228.974Z_180092dd3fe74d7e816b4df3a810c0ea]
2017-04-08T13:03:01,753 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_wikiticker_2017-04-08T13:02:28.974Z, type=index_hadoop, dataSource=wikiticker}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.2.jar:0.9.2]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:175) ~[druid-indexing-service-0.9.2.jar:0.9.2]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.2.jar:0.9.2]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.2.jar:0.9.2]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_121]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_121]
	at java.lang.Thread.run(Thread.java:745) [?:1.8.0_121]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.jar:0.9.2]
	... 7 more
Caused by: com.metamx.common.ISE: Job[class io.druid.indexer.DetermineHashedPartitionsJob] failed!
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:369) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:291) ~[druid-indexing-service-0.9.2.jar:0.9.2]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_121]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_121]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_121]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_121]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.2.jar:0.9.2]
	... 7 more
2017-04-08T13:03:01,759 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_wikiticker_2017-04-08T13:02:28.974Z] status changed to [FAILED].
2017-04-08T13:03:01,761 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_wikiticker_2017-04-08T13:02:28.974Z",
  "status" : "FAILED",
  "duration" : 29009
}
2017-04-08T13:03:01,766 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.stop()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@743c3520].
2017-04-08T13:03:01,766 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Stopping class io.druid.server.coordination.BatchDataSegmentAnnouncer with config[io.druid.server.initialization.ZkPathsConfig@22e2266d]
2017-04-08T13:03:01,766 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/ip-172-31-28-160.us-west-2.compute.internal:8100]
2017-04-08T13:03:01,780 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@18ffca6c].
2017-04-08T13:03:01,780 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/ip-172-31-28-160.us-west-2.compute.internal:8100]
2017-04-08T13:03:01,782 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/ip-172-31-28-160.us-west-2.compute.internal:8100]
2017-04-08T13:03:01,782 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@143fefaf].
2017-04-08T13:03:01,782 INFO [main] io.druid.query.lookup.LookupReferencesManager - Stopping lookup factory references manager
2017-04-08T13:03:01,784 INFO [main] org.eclipse.jetty.server.ServerConnector - Stopped ServerConnector@310d57b1{HTTP/1.1}{0.0.0.0:8100}
2017-04-08T13:03:01,785 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@62f9c790{/,null,UNAVAILABLE}
2017-04-08T13:03:01,787 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@79e7188e].
2017-04-08T13:03:01,788 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@45c80312].
2017-04-08T13:03:01,788 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@15fb4566].
2017-04-08T13:03:01,791 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@6a15b73].
2017-04-08T13:03:01,791 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@307e4c44].
2017-04-08T13:03:01,791 INFO [main] io.druid.curator.CuratorModule - Stopping Curator
2017-04-08T13:03:01,792 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting
2017-04-08T13:03:01,794 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x15b4cf98dac0012 closed
2017-04-08T13:03:01,794 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x15b4cf98dac0012
2017-04-08T13:03:01,794 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.http.client.NettyHttpClient.stop()] on object[com.metamx.http.client.NettyHttpClient@5a058be5].
2017-04-08T13:03:01,801 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.storage.hdfs.HdfsStorageAuthentication.stop()] on object[io.druid.storage.hdfs.HdfsStorageAuthentication@52ae997b].
2017-04-08T13:03:01,801 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.metrics.MonitorScheduler.stop()] on object[com.metamx.metrics.MonitorScheduler@9b23822].
2017-04-08T13:03:01,801 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[com.metamx.emitter.service.ServiceEmitter@25c2a9e3].
2017-04-08T13:03:01,801 INFO [main] com.metamx.emitter.core.LoggingEmitter - Close: started [false]
2017-04-08T13:03:01,801 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@39666e42].
2017-04-08 13:03:01,833 Thread-2 ERROR Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Not started
	at io.druid.common.config.Log4jShutdown.addShutdownCallback(Log4jShutdown.java:45)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:273)
	at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:145)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:182)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:273)
	at org.apache.hadoop.hdfs.LeaseRenewer.<clinit>(LeaseRenewer.java:72)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:699)
	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:859)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:853)
	at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2407)
	at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2424)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)


Could someone please help us resolve this issue?

hdfs-site.xml (1.09 KB)

mapred-site.xml (1.42 KB)

yarn-site.xml (2.01 KB)

common.runtime.properties (3.73 KB)

core-site.xml (918 Bytes)

This looks like a configuration issue with your Hadoop configs. Can you try setting the property below?

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
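
That property goes inside the <configuration> element of yarn-site.xml on each NodeManager host, roughly like this (a sketch; keep whatever properties you already have there), and the NodeManagers need a restart for it to take effect:

<configuration>
  <!-- existing properties stay as they are -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>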

Also see http://druid.io/docs/latest/configuration/hadoop.html for sample Hadoop configs.