For the F1 tier, the first machine has double the utilization of the other machines. This situation has lasted for days. There is nothing wrong in the Historical or Coordinator logs. Is this a bug?
we experienced a similar behaviour in our cluster; see my thoughts on that in this druid-user topic.
I assumed it was due to an extreme dynamic configuration of the coordinator, but even with default values the situation happened again, and we have no idea why.
Which balancer strategy are you using?
On Tuesday, February 27, 2018 at 7:50:11 AM UTC-8, an...@simplaex.com wrote:
I have no idea about this issue. Could it be the cost function bug (https://github.com/druid-io/druid/pull/2972)? I don't have any evidence, though. Have you solved this problem?
On Tuesday, February 27, 2018 at 11:50:11 PM UTC+8, an…@simplaex.com wrote:
Do all the machines have the same druid.server.maxSize?
I tried increasing the value of maxSegmentsToMove to 500 (the default is 5), and the coordinator now balances faster. After more testing, I'll report back on whether this solves the problem.
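For anyone following along: maxSegmentsToMove is part of the coordinator's dynamic configuration, which (if I remember correctly) can be changed at runtime by POSTing JSON to the coordinator's `/druid/coordinator/v1/config` endpoint rather than editing runtime.properties. A minimal sketch of such a payload; the field names are from the Druid docs, but the values are just the ones discussed in this thread, not a recommendation:

```json
{
  "maxSegmentsToMove": 500,
  "balancerStrategy": "cost"
}
```

Any dynamic-config fields you omit keep their current values, so a payload like this only needs the fields you want to change.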
On Thursday, March 1, 2018 at 1:56:02 PM UTC+8, Gian Merlino wrote:
That’s good – good luck!
I tried increasing the value of maxSegmentsToMove to 500 (the default is 5).
It didn't work, and I don't know how to solve it.
On Tuesday, February 27, 2018 at 3:30:08 PM UTC+8, linjing li wrote:
we are experiencing the same behaviour. For us it has happened since we updated to Druid 0.12.0. For now we changed the coordinator runtime.properties from
which seems to be the default, to
After a few hours, it looks like segment balancing is getting better.