Getting warning in coordinator.log - Not enough [_default_tier] servers or node capacity to assign

Hi,

I have set up a Druid cluster with 1 master node (coordinator and overlord), 2 data nodes (historical and middleManager services), and 2 query nodes (broker). AWS S3 is being used for deep storage, and both historical nodes use the same S3 bucket. I noticed we are getting the following warnings in coordinator.log:

2017-01-23T21:18:06,011 INFO [Coordinator-Exec--0] io.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant create queue is empty.
2017-01-23T21:18:06,011 INFO [Coordinator-Exec--0] io.druid.server.coordinator.ReplicationThrottler - [_default_tier]: Replicant terminate queue is empty.
2017-01-23T21:18:06,012 WARN [Coordinator-Exec--0] io.druid.server.coordinator.rules.LoadRule - Not enough [_default_tier] servers or node capacity to assign segment[ABC_Company_2017-01-11T00:00:00.000Z_2017-01-12T00:00:00.000Z_2017-01-23T21:16:31.050Z]! Expected Replicants[2]
2017-01-23T21:18:06,012 WARN [Coordinator-Exec--0] io.druid.server.coordinator.rules.LoadRule - Not enough [_default_tier] servers or node capacity to assign segment[ABC_Company_2017-01-11T00:00:00.000Z_2017-01-12T00:00:00.000Z_2017-01-23T19:44:45.240Z]! Expected Replicants[2]
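
As far as I understand, the Expected Replicants[2] in the warning comes from the cluster's load rules. We have not set any custom retention rules, so I believe the default loadForever rule applies, which asks for two replicas in _default_tier, roughly:

{"type": "loadForever", "tieredReplicants": {"_default_tier": 2}}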

We have 2 historical nodes, and here is the runtime.properties (both nodes have exactly the same runtime.properties and common.runtime.properties):

druid.host=DruidData1:8083,DruidData2:8083
druid.service=druid/historical
druid.port=8083

# HTTP server threads

druid.server.http.numThreads=40

# Processing threads and buffers

druid.processing.buffer.sizeBytes=1073741824
druid.processing.numThreads=12

# Segment storage

druid.segmentCache.locations=[{"path":"var/druid/segment-cache","maxSize":130000000000}]
druid.server.maxSize=130000000000

# Query cache

druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
druid.cache.type=local
druid.cache.sizeInBytes=2000000000

Common properties for deep storage:

For S3:

druid.storage.type=s3
druid.storage.bucket=bigdata-test
druid.storage.baseKey=druid/segments
druid.s3.accessKey=XXX
druid.s3.secretKey=YYYYY

Does anybody know why we are getting this warning? Is any property configured incorrectly?

Thanks

Hong

druid.host shouldn’t be exactly the same on the two historicals; one of them should have DruidData1:8083 and the other should have DruidData2:8083. Alternatively, you can leave druid.host blank if you want Druid to derive it from the machine’s hostname, in which case all historical nodes can share the same runtime properties.

I bet that right now, in your setup, the coordinator is only detecting one of the historicals because they have the same name.
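
In other words, something like this on each node, keeping the rest of runtime.properties the same (a sketch using the host names from your post):

On DruidData1:
druid.host=DruidData1:8083

On DruidData2:
druid.host=DruidData2:8083

After restarting, you should be able to confirm that the coordinator now sees both historicals via its API (assuming the default coordinator port 8081):

curl http://<coordinator-host>:8081/druid/coordinator/v1/servers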

Thanks, Gian. That was my issue. After making the change you suggested, the warning went away.

Thanks

Hong