Simple cluster setup

Hi,

Our initial testing has been going great, especially since the new Kafka indexer. The testing has been running on a single computer, and now we want to take this a step further by moving to a production-like environment.

I’ve read the cluster documentation and my understanding is something like this:

2x Coordinator+Overlord

2x Historical+MiddleManager (CPU+RAM)

1x Broker (CPU+RAM)

We’re not going fully fault protected in the beginning, but it’s my understanding that we could easily expand by cloning one of the above services when needed?

Not mentioned yet is:

1x MySQL metadata storage

ZooKeeper cluster (use the C+O machines at the beginning?)

Kafka cluster (use C+O machines? H+M machines?)

How will Druid handle the new Kafka indexing service when running several nodes?

regards,

Robin

Hey Robin,

In general, yeah, you can add fault tolerance and scale out by adding more machines. The exception is the Coordinator/Overlord, where all you get is fault tolerance (they don’t scale out – only one is active at any given time). So generally people have:

  1. 1 Coordinator/Overlord/ZK box if you don’t need fault tolerance, or 3 if you do (ZK needs at least 3 machines for fault tolerance)

  2. As many Historical/MiddleManager as needed for your data scale and fault tolerance requirements

  3. As many Brokers as needed for your query workload and fault tolerance requirements

  4. MySQL/PostgreSQL metadata storage, optionally with replication and failover for fault tolerance

  5. Kafka generally on its own machines, but it can share Druid’s ZK cluster
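To make that concrete, here’s a minimal sketch of the shared `common.runtime.properties` settings that wire those pieces together. All hostnames, the database name, and the password are placeholders; the property names themselves come from Druid’s configuration reference:

```properties
# ZooKeeper ensemble (can be shared with Kafka if you co-locate them)
druid.zk.service.host=zk1.example.com,zk2.example.com,zk3.example.com

# MySQL metadata storage (requires the mysql-metadata-storage extension;
# PostgreSQL works the same way with its own extension)
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://mysql.example.com:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=CHANGEME
```

Every Druid node type reads this same common file, so adding a Historical/MiddleManager or Broker box is mostly a matter of copying the config and starting the new process.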

The new Kafka indexing service can scale out on MiddleManagers as much as needed to handle your data rates. Its supervision occurs on the Overlord, and having multiple Overlords provides fault tolerance for supervision.
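As a sketch of how that scale-out is expressed, here is an abbreviated Kafka supervisor spec of the kind you POST to the Overlord at `/druid/indexer/v1/supervisor`. The `taskCount` field spreads ingestion tasks across your MiddleManagers and `replicas` adds redundancy; the datasource, topic, and broker names are placeholders:

```json
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "events",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": { "column": "timestamp", "format": "auto" },
        "dimensionsSpec": { "dimensions": [] }
      }
    },
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "HOUR",
      "queryGranularity": "NONE"
    },
    "metricsSpec": [ { "type": "count", "name": "count" } ]
  },
  "ioConfig": {
    "topic": "events",
    "consumerProperties": { "bootstrap.servers": "kafka1.example.com:9092" },
    "taskCount": 2,
    "replicas": 2
  }
}
```

With `taskCount: 2` and `replicas: 2` you’d get four concurrently running indexing tasks, so make sure your MiddleManagers have enough worker capacity to run them all.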

You may also be interested in the Imply distribution, which includes a service supervisor and query tools as well: http://imply.io/docs/latest/