Does Druid support Hadoop HA?

Hi,
my Hadoop cluster is running with HA enabled on Hadoop 2.6.0 (CDH 5.4.0). I have added the Hadoop conf directory "/etc/hadoop" to the classpath, but when I start the indexing service it does not work against HDFS. However, when I start the indexing service with parameters such as the following, it does work:

  -Dhadoop.fs.defaultFS=hdfs://hadoop/
  -Dhadoop.dfs.nameservices=hadoop
  -Dhadoop.dfs.ha.namenodes.hadoop=nn,snn
  -Dhadoop.dfs.namenode.rpc-address.hadoop.nn=192.168.7.168:9000
  -Dhadoop.dfs.namenode.rpc-address.hadoop.snn=192.168.7.169:9000
  -Dhadoop.dfs.client.failover.proxy.provider.hadoop=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
  -Dhadoop.yarn.resourcemanager.address=192.168.7.168:9020
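(For illustration, a sketch of how these properties might be attached to the JVM command line that launches the indexing service; the io.druid.cli.Main entry point, heap size, and /opt/druid paths below are assumptions, not taken from this thread:)

  # Launch the indexing service (overlord) with the HA-related Hadoop
  # properties passed as system properties; paths are placeholders
  java -Xmx2g \
    -Dhadoop.fs.defaultFS=hdfs://hadoop/ \
    -Dhadoop.dfs.nameservices=hadoop \
    -Dhadoop.dfs.ha.namenodes.hadoop=nn,snn \
    -Dhadoop.dfs.namenode.rpc-address.hadoop.nn=192.168.7.168:9000 \
    -Dhadoop.dfs.namenode.rpc-address.hadoop.snn=192.168.7.169:9000 \
    -Dhadoop.dfs.client.failover.proxy.provider.hadoop=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider \
    -Dhadoop.yarn.resourcemanager.address=192.168.7.168:9020 \
    -cp "/opt/druid/config/_common:/opt/druid/config/overlord:/opt/druid/lib/*" \
    io.druid.cli.Main server overlord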

If I do not also specify -Dhadoop.mapreduce.framework.name=yarn, the indexing task runs with the LocalJobRunner. When I do specify -Dhadoop.mapreduce.framework.name=yarn, I can see the job in the YARN applications list, but its status stays at ACCEPTED and it never starts running.
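(A job stuck in ACCEPTED generally means the ResourceManager cannot allocate an ApplicationMaster container, for example because no healthy NodeManagers are registered or the queue has no free resources, or because the job was submitted to the wrong ResourceManager address. A generic check, assuming the yarn CLI is on the PATH of the submitting machine:)

  # Check whether YARN actually has NodeManagers available to run the job;
  # applications stay in ACCEPTED when no container can be allocated
  yarn node -list -all

  # List applications and their states as seen by the ResourceManager
  yarn application -list -appStates ACCEPTED,RUNNING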

Can you see what the problem is from my description, and how can I solve it?

Hi,

Can you run the command "hadoop classpath" on a machine that is set up correctly and can submit MR jobs? Take the output of that command and put it on the Druid process classpath. Also, put druid-hdfs-storage directly on the classpath instead of adding it to the extensions in Druid's runtime.properties.
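(A minimal sketch of what that startup could look like, assuming a Druid 0.7/0.8-style launch via io.druid.cli.Main; the /opt/druid paths, heap size, and server type are placeholders to adapt to your own deployment:)

  # On a machine that can successfully submit MapReduce jobs, capture the
  # Hadoop client classpath (conf directories plus Hadoop jars)
  HADOOP_CP=$(hadoop classpath)

  # Start the indexing service with that classpath and the druid-hdfs-storage
  # jars placed directly on the process classpath
  java -Xmx2g \
    -cp "/opt/druid/config/_common:/opt/druid/config/overlord:/opt/druid/lib/*:/opt/druid/extensions/druid-hdfs-storage/*:${HADOOP_CP}" \
    io.druid.cli.Main server overlord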

Hopefully that will work.

– Himanshu

Hi Himanshu Gupta, thank you very much. Druid now works well after following your advice.

On Wednesday, July 22, 2015 at 6:52:36 PM UTC+8, wangm…@gmail.com wrote: