Historical node capacity problem.

When data indexing stopped working, I checked the logs.

Tasks were stuck in a pending state.

I could see "Not enough [_default_tier] servers or node capacity to assign segment" in the Overlord logs.

The Historical node settings are:

druid.server.maxSize=150000000000

druid.segmentCache.locations=[{"path": "/mnt/persistent/zk_druid", "maxSize": 150000000000}]

I resolved this problem by scaling out the Historical nodes.

However, I have some questions.

  • If the data keeps growing, do I need to keep adding Historical nodes?

  • How many Historical nodes are needed? Also, can I configure the capacity of a Historical node?

Hi, see inline.


  • If the data keeps growing, do I need to keep adding Historical nodes?

As the data grows, you will need to add more Historical nodes to increase capacity.

You can also configure data retention rules in the Druid Coordinator so that only data for a certain period is loaded.

E.g., if your application only needs data for the last year, you can configure rules so that Historical nodes only load the previous year of data.
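As a sketch of such a policy: Coordinator load rules are an ordered list, evaluated top to bottom, and can be set per datasource via the Coordinator console or its rules API. A "load the last year, drop everything older" rule set could look like this (the datasource name and replicant count here are just illustrative):

```json
[
  {
    "type": "loadByPeriod",
    "period": "P1Y",
    "tieredReplicants": { "_default_tier": 2 }
  },
  { "type": "dropForever" }
]
```

Segments newer than one year match the first rule and are loaded with two replicas on `_default_tier`; anything older falls through to `dropForever` and is unloaded from Historical nodes (the segments remain in deep storage).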

  • How many Historical nodes are needed? Also, can I configure the capacity of a Historical node?

Yes, druid.server.maxSize is the maximum total size of segments that a Historical node will hold.
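So the node count follows from total segment size, replication factor, and per-node maxSize. A hypothetical back-of-the-envelope estimate (the headroom factor is an assumption, to leave room for rebalancing, not a Druid setting):

```python
import math

def historical_nodes_needed(total_segment_bytes, replicas,
                            max_size_per_node, headroom=0.8):
    """Estimate how many Historical nodes are needed: the replicated
    segment data must fit within maxSize across all nodes, keeping
    some fraction of each node free (headroom) for rebalancing."""
    usable_per_node = max_size_per_node * headroom
    return math.ceil(total_segment_bytes * replicas / usable_per_node)

# E.g. 1 TB of segments, 2 replicas, maxSize=150000000000 (as in this thread):
print(historical_nodes_needed(1_000_000_000_000, 2, 150_000_000_000))  # 17
```

If the estimate comes out too high, the usual levers are bigger nodes (larger druid.server.maxSize backed by more disk), shorter retention via load rules, or a cheaper tier for old data.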

Thanks a lot!!!

On Tuesday, July 12, 2016 at 6:35:30 PM UTC+9, Hwansung Yu wrote: