Hadoop ingest task fails with connection refused (0.0.0.0:8032)

I am able to run a local batch ingestion task successfully and store segment data in HDFS; however, I have been unable to get the ingestion task to run on Hadoop. I consistently receive the error below:

java.net.ConnectException: Call From myserver.com/10.0.0.1 to 0.0.0.0:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_141]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[?:1.8.0_141]
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[?:1.8.0_141]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[?:1.8.0_141]
	at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1359) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) ~[hadoop-common-2.3.0.jar:?]
	at com.sun.proxy.$Proxy211.getNewApplication(Unknown Source) ~[?:?]
	at org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:167) ~[hadoop-yarn-common-2.3.0.jar:?]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_141]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_141]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_141]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_141]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) [hadoop-common-2.3.0.jar:?]
	at com.sun.proxy.$Proxy212.getNewApplication(Unknown Source) [?:?]
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:133) [hadoop-yarn-client-2.3.0.jar:?]
	at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:141) [hadoop-yarn-client-2.3.0.jar:?]
	at org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:175) [hadoop-mapreduce-client-jobclient-2.3.0.jar:?]
	at org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:229) [hadoop-mapreduce-client-jobclient-2.3.0.jar:?]
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:357) [hadoop-mapreduce-client-core-2.3.0.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285) [hadoop-mapreduce-client-core-2.3.0.jar:?]
	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282) [hadoop-mapreduce-client-core-2.3.0.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_141]
	at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_141]
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548) [hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282) [hadoop-mapreduce-client-core-2.3.0.jar:?]
	at io.druid.indexer.DetermineHashedPartitionsJob.run(DetermineHashedPartitionsJob.java:116) [druid-indexing-hadoop-0.10.0.jar:0.10.0]
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:349) [druid-indexing-hadoop-0.10.0.jar:0.10.0]
	at io.druid.indexer.HadoopDruidDetermineConfigurationJob.run(HadoopDruidDetermineConfigurationJob.java:91) [druid-indexing-hadoop-0.10.0.jar:0.10.0]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopDetermineConfigInnerProcessing.runTask(HadoopIndexTask.java:306) [druid-indexing-service-0.10.0.jar:0.10.0]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_141]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_141]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_141]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_141]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:208) [druid-indexing-service-0.10.0.jar:0.10.0]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:176) [druid-indexing-service-0.10.0.jar:0.10.0]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.10.0.jar:0.10.0]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.10.0.jar:0.10.0]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_141]
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_141]
	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:601) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:696) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1458) ~[hadoop-common-2.3.0.jar:?]
	at org.apache.hadoop.ipc.Client.call(Client.java:1377) ~[hadoop-common-2.3.0.jar:?]
	... 38 more

I have attempted many things to resolve this issue, including the recommendations in the Druid documentation for working with different versions of Hadoop:

  • All of my Hadoop XMLs are located in the conf/druid/_common configuration directory.
  • I have added mapreduce.job.classloader = true to the tuningConfig of my ingest task (a trimmed sketch of the spec appears after this list).
  • I have linked my Hadoop client libraries into the hadoop-dependencies/hadoop-client directory.
  • I have specified in my ingest task spec that it should use this dependency directory.
  • I have added the property druid.indexer.task.defaultHadoopCoordinates, specifying the correct client version, to the middleManager runtime.properties file (the exact line is shown below).
  • When reviewing the logs, it does appear to be using my version of the client libraries.

Currently, my job runs until it has attempted 30 connections and then fails. What is it trying to connect to here? Nowhere in my configuration do I see a service specified on port 8032.
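
From what I can tell, 8032 is Hadoop's default port for yarn.resourcemanager.address, and 0.0.0.0 is the default yarn.resourcemanager.hostname, so 0.0.0.0:8032 looks like a built-in fallback rather than anything coming from my files. That makes me suspect the task is not actually reading my yarn-site.xml. For reference, this is the kind of entry I would expect to take effect (the hostname is a placeholder, not my real ResourceManager):

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>my-resourcemanager.example.com:8032</value>
    </property>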
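
For completeness, this is roughly how the classloader property and the dependency coordinates sit in my ingest task spec (heavily trimmed sketch; the dataSchema and ioConfig sections are omitted, and the hadoop-client version shown is a placeholder, not my actual version):

    {
      "type": "index_hadoop",
      "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.3"],
      "spec": {
        "tuningConfig": {
          "type": "hadoop",
          "jobProperties": {
            "mapreduce.job.classloader": "true"
          }
        }
      }
    }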
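
And the line I added to the middleManager runtime.properties (again with a placeholder version):

    druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]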

Any suggestions on how to move forward would be very much appreciated!