Can't connect to RM, logs: Failing over to rm2

Hi. I'm new to Druid and I'm trying to run a batch ingestion.

These are my logs:

```

2017-07-11T06:30:54,591 INFO [task-runner-0-priority-0] org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider - Failing over to rm2
2017-07-11T06:31:10,615 INFO [task-runner-0-priority-0] org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider - Failing over to rm1
2017-07-11T06:31:10,617 WARN [task-runner-0-priority-0] org.apache.hadoop.io.retry.RetryInvocationHandler - Exception while invoking class org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication. Not retrying because failovers (30) exceeded maximum allowed (30)
java.net.ConnectException: Call From lean13.lean.com/10.0.2.13 to 0.0.0.0:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

```
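If I read the last error correctly, the call went to 0.0.0.0:8032, which is the built-in default for yarn.resourcemanager.address, so it looks like the client never picked up the HA addresses at all. For reference, the HA section of our yarn-site.xml looks roughly like the sketch below (the hostnames are placeholders, not our real values):

```xml
<!-- Sketch of the HA ResourceManager section of yarn-site.xml.
     Hostnames are placeholders, not the real cluster values. -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm1-host.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm2-host.example.com</value>
</property>
```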

< Check List >
1. I checked druid/_common/yarn-site.xml on the Druid server and /etc/hadoop/lib/yarn-site.xml on the Hadoop server.

The ResourceManager settings do not differ between the Druid and Hadoop servers.

2. I also checked the ResourceManager log (/var/log/hadoop-yarn/yarn-yarn-resourcemanager-${hostname}.log), and there is no entry at all for the job above (I searched it as sketched below).
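For item 2, this is roughly how I searched the ResourceManager log (the patterns are placeholders; the real task id comes from the overlord console):

```sh
# Placeholder patterns; substitute the actual task/application id.
grep "test_josh" /var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log
grep "getNewApplication" /var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log
```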

So I think the Druid server can't connect to the RM. What am I missing in my checks?

Thank you

It seems that your YARN configuration is not loaded correctly on the classpath.

Can you share the output of ps -ef | grep druid?
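When the configuration is loaded correctly, the -cp part of that output should include the directory that contains yarn-site.xml. A healthy middleManager command line looks roughly like the sketch below (the paths are placeholders from a typical layout, not necessarily yours):

```sh
# Abbreviated ps -ef | grep druid output; the key point is that the
# _common config directory (holding yarn-site.xml) appears on -cp.
druid  1234  1  0 06:30 ?  00:01:23 java -cp \
    conf/druid/_common:conf/druid/middleManager:lib/* \
    io.druid.cli.Main server middleManager
```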

Thank you, jerry, for replying.

The layout of my test servers is as follows:

- A1: overlord, middleManager, historical
- A2: coordinator, historical, broker
- A3: historical
- B1: hdfs
- B2: hdfs

When I compared the YARN config between the Druid servers and the HDFS servers, they were the same, including the ResourceManager configuration.

The Druid user group says that answers to YARN questions about batch ingestion can mostly be found at http://druid.io/docs/0.9.0/operations/other-hadoop.html.

So I added "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.1"] at the bottom of my batch ingestion spec, but the problem is not solved.
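If I read that page correctly, the default coordinates can also be set once in the middleManager runtime properties instead of per task; I have not tried this yet (the version string below is simply copied from my spec):

```properties
# Cluster-wide default instead of per-task hadoopDependencyCoordinates;
# untested on my cluster, version copied from my spec.
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.1"]
```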

(I also posted this question as a comment on your popit.kr post.)

Thank you

On Wednesday, July 12, 2017 at 10:25:26 PM UTC+9, jerry jung wrote:

This is my batch ingestion spec.

```json
{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "test_josh",
      "parser": {
        "type": "string",
        "parseSpec": {
          "timestampSpec": {
            "column": "ts",
            "format": "auto"
          },
          "dimensionsSpec": {
            "dimensions": ["said"],
            "dimensionExclusions": ["ts", "raw_log"]
          },
          "format": "json"
        }
      },
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "hour",
        "queryGranularity": "minute",
        "intervals": ["2017-07-07T00:00:00+0800/2017-07-11T00:00:00+0800"]
      },
      "metricsSpec": [
        {
          "type": "count",
          "name": "count"
        }
      ]
    },
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": {
        "type": "static",
        "paths": "batch_test_josh.json"
      }
    },
    "tuningConfig": {
      "type": "hadoop",
      "jobProperties": {
        "mapreduce.job.user.classpath.first": "true"
      },
      "partitionsSpec": {
        "type": "dimension",
        "targetPartitionSize": 10000000,
        "rowFlushBoundary": 500000
      }
    }
  },
  "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.1"]
}
```
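If the classpath turns out not to be fixable on my side, I am also considering passing the ResourceManager HA settings directly through jobProperties as a workaround (the rm ids match the failover log, but the hostnames below are placeholders):

```json
"jobProperties": {
  "mapreduce.job.user.classpath.first": "true",
  "yarn.resourcemanager.ha.enabled": "true",
  "yarn.resourcemanager.ha.rm-ids": "rm1,rm2",
  "yarn.resourcemanager.hostname.rm1": "rm1-host.example.com",
  "yarn.resourcemanager.hostname.rm2": "rm2-host.example.com"
}
```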

On Monday, July 17, 2017 at 5:26:39 PM UTC+9, josh lee wrote: