Coordinator cannot see Historical nodes.


I set up the Druid cluster as follows.

1 Broker Node

1 Indexer Node

2 Historical Nodes

1 Coordinator Node

Along with the above, there are MySQL, ZooKeeper, and S3 deep storage.

I have seen a few problems after this cluster came up. Please help me solve them.

  1. I have two historical nodes and configured each historical server to use 12 GB of RAM. Both machines are up and running fine. But the Druid coordinator console shows 1 node with 10 GB free space, even though there are 2 historical nodes with 12 GB each. Please let me know why the coordinator console shows that.

  2. The broker is not forwarding requests to the historical nodes and is returning an error.

Thanks in advance.


This looks like a problem with ZooKeeper. Did you configure

`<comma-separated list of all zookeeper host:port>`

on all druid nodes?

Also, if you have more than one ZooKeeper node, did you configure the "quorum" correctly on all the ZooKeeper nodes?

It would be helpful if you shared all your property files (druid nodes, as well as zookeeper settings).
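For reference, the setting in question is the ZooKeeper connection string in each Druid node's common configuration. A minimal sketch (the host names are illustrative, not from the thread):

```properties
# common.runtime.properties on every Druid node -- point all of them
# at the same comma-separated list of ZooKeeper servers
druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```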


-- Himanshu

I suspected the same. That's why I ran several tests:

  1. I checked the quorum configuration (one node shows as the leader and the rest as followers).

  2. If I stop one of the historical nodes, it still shows one available historical node.

  3. If I stop both historical nodes, the coordinator console does not show any memory info.


  1. Not sure what you mean by "one is showing as leader"; in the zookeeper settings, the quorum is just a comma-separated list of all the zookeeper hosts [and ports].
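The leader/follower status the poster observed comes from the ZooKeeper ensemble itself, which is configured in each server's `zoo.cfg`; Druid only needs the flat comma-separated list. A sketch of the ensemble side (host names illustrative):

```properties
# zoo.cfg on each ZooKeeper server -- the ensemble elects a leader
# among these entries on its own; clients never configure the leader
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```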

Hi Himanshu,

Thanks for answering my queries.

I solved the first problem. As you said, it was a configuration issue. I fixed it, and now the coordinator shows two historical nodes.

But I am struggling with the second problem.

The broker is up and running fine, but when I query the datasource information, it returns empty dimensions and metrics.

Please suggest some options.
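One way to check what the broker actually sees is its datasource metadata endpoint (the broker host and datasource name below are illustrative):

```shell
# List the datasources the broker currently serves queries for
curl http://broker-host:8082/druid/v2/datasources

# Show the dimensions and metrics the broker knows for one datasource
curl http://broker-host:8082/druid/v2/datasources/my_datasource
```

Empty dimensions and metrics here usually mean the broker has not discovered any served segments for that datasource, which points back at node discovery over ZooKeeper.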

The common configuration property file:

```properties
# Metadata Storage (use something like mysql in production by uncommenting properties below)
# by default druid will use derby

# Deep storage (local filesystem for examples - don't use this in production)

# Query Cache (we use a simple 10mb heap-based local cache on the broker)

# Indexing service discovery

# Monitoring (disabled for examples, if you enable SysMonitor, make sure to include sigar jar in your cp)

# Metrics logging (disabled for examples - change this to logging or http in production)
```
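Only the comment lines of the file survived in the post. They match Druid's stock example `common.runtime.properties` of that era, which (with illustrative stock values, not necessarily the poster's) looks roughly like:

```properties
# Zookeeper
druid.zk.service.host=localhost

# Metadata Storage (use something like mysql in production by uncommenting properties below)
# by default druid will use derby
#druid.metadata.storage.type=mysql
#druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid

# Deep storage (local filesystem for examples - don't use this in production)
druid.storage.type=local
druid.storage.storageDirectory=/tmp/druid/localStorage

# Query Cache (we use a simple 10mb heap-based local cache on the broker)
druid.cache.type=local
druid.cache.sizeInBytes=10000000

# Indexing service discovery
druid.selectors.indexing.serviceName=overlord

# Metrics logging (disabled for examples - change this to logging or http in production)
druid.emitter=noop
```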


The broker configuration property file is:

```properties
# Default host: localhost. Default port: 8082. If you run each node type
# on its own node in production, you should override these values to be IP:8080

# We enable using the local query cache here

# For prod: set numThreads = # cores - 1, and sizeBytes to 512mb
```
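Again, only the comments survived; the stock example broker `runtime.properties` they come from looks roughly like this (values illustrative):

```properties
druid.host=localhost
druid.port=8082
druid.service=druid/broker

# We enable using the local query cache here
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=true

# For prod: set numThreads = # cores - 1, and sizeBytes to 512mb
druid.processing.numThreads=1
druid.processing.buffer.sizeBytes=100000000
```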



Hi Himanshu,

Thank you. I solved the problem.

That is great. Can you also reply with what worked, so that if someone has the same problem in the future, they can refer to this thread?

-- Himanshu

Sorry Himanshu, I forgot to mention it in the previous mail.

When you bring a component up in Druid, it creates a znode in ZooKeeper with that machine's IP. Every other component communicates with it using that IP address.

In my case, there are two historical nodes, and in the config file I had set the host to localhost. So only one machine was able to create its znode with the localhost IP, and the other couldn't. I changed the host in each node's configuration file to that machine's IP address, and it started working fine.
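In other words, each node's `runtime.properties` needs a `druid.host` that other nodes can actually reach; with both historicals announcing `localhost`, they collide on the same announcement. A sketch of the fix (IPs are illustrative):

```properties
# runtime.properties on historical node 1
druid.host=10.0.0.11
druid.port=8083

# runtime.properties on historical node 2
druid.host=10.0.0.12
druid.port=8083
```

With distinct `druid.host` values, each historical announces its own znode (under `/druid/announcements` by default), and the coordinator sees both nodes.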