Druid Cluster Setup Issue - HDFS as deep storage

Hello All,

I am trying to set up a Druid cluster and am running into an issue. I am able to ingest data in the single-server setup, but I am unable to ingest data in the cluster setup.

We want to use HDFS as deep storage.
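For reference, using HDFS as deep storage generally means loading the druid-hdfs-storage extension and setting the storage properties in common.runtime.properties on every node. A minimal sketch - the NameNode host/port and directory paths below are placeholders, not values from my setup:

```properties
# Load the HDFS deep-storage extension
druid.extensions.loadList=["druid-hdfs-storage"]

# Deep storage on HDFS - replace namenode:8020 and the paths with your own
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode:8020/druid/segments

# Task logs on HDFS as well (optional, but useful for debugging hung tasks)
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=hdfs://namenode:8020/druid/indexing-logs
```

The Hadoop configuration XMLs (core-site.xml, hdfs-site.xml, etc.) also need to be on the Druid classpath so the processes can resolve the HDFS paths.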

I chose 3 servers. Below are the details.

server1 - Coordinator, Overlord and zookeeper, metastore - mysql server (master server)

server2 - Historicals and MiddleManagers (Data Server) + tranquility

server3 - Druid Brokers, Pivot, PlyQL (Query Server)

While trying to post data to Druid, I keep receiving "still running" messages and it goes on forever.
The documentation looks incomplete and scattered. It would be very helpful if you could provide proper documentation for cluster setup using HDFS as deep storage.


Suraj Nayak

hadoop@lappy /usr/local/druid-ana/imply-1.3.0/bin $ ./post-index-task --url http://MASTERSERVERIP:8090/ --file ../quickstart/my-index-task.json
Task started: index_hadoop_pageviews_2016-11-01T15:12:27.845Z
Task log:
Task status:
Task index_hadoop_pageviews_2016-11-01T15:12:27.845Z still running…
Task index_hadoop_pageviews_2016-11-01T15:12:27.845Z still running…
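When a Hadoop index task hangs like this, the task log usually says where it is stuck (often submitting the MapReduce job or missing Hadoop configs). A sketch of pulling the status and log from the Overlord's indexer API, assuming the Overlord is on port 8090 and using the task id printed above:

```shell
TASK_ID="index_hadoop_pageviews_2016-11-01T15:12:27.845Z"

# Current status of the task as seen by the Overlord
curl "http://MASTERSERVERIP:8090/druid/indexer/v1/task/${TASK_ID}/status"

# Tail of the task log - this is where the actual failure or stall shows up
curl "http://MASTERSERVERIP:8090/druid/indexer/v1/task/${TASK_ID}/log" | tail -n 100
```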

Hey, can you please attach the logs?

Please find the attached log and JSON file.

druid.log.2.log (222 KB)

my-index-task.json (1.19 KB)