duplicated maxSize settings for Historical nodes

Here are two lines from my runtime.properties for a Historical node:

# 80GB

druid.server.maxSize=80000000000

# 80GB

druid.segmentCache.locations=[{"path": "/persistent/zk_druid", "maxSize": 80000000000}]

What is the difference between the two? Why is the same value required twice?

Hi Prashant,
With druid.segmentCache.locations you can specify multiple directories and a maxSize for each of them individually; I believe it's there to support multiple disk mounts as segment cache locations.
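For example, a Historical with two disk mounts might split the cache like this (the paths and sizes here are hypothetical, not from the original config):

```
# hypothetical: two 40GB mounts serving as segment cache locations
druid.segmentCache.locations=[{"path": "/mnt/disk1/druid/segcache", "maxSize": 40000000000}, {"path": "/mnt/disk2/druid/segcache", "maxSize": 40000000000}]
# overall total the node advertises to the Coordinator
druid.server.maxSize=80000000000
```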

druid.server.maxSize is the overall limit on the total amount of data the Historical node can be assigned by the Coordinator.

In general, sum of maxSize over all segmentCache locations >= druid.server.maxSize.


I believe you meant

sum of maxSize over all segmentCache locations <= druid.server.maxSize

Am I correct? Otherwise the maxSize has no effect.

Reading the code, the constraint should be server.maxSize < sum(locations.maxSize), so I’m going to retract my previous statements :stuck_out_tongue:
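For concreteness, the relationship can be checked with a quick sketch. The values are taken from the runtime.properties in the original question; this is plain arithmetic, not actual Druid code, and it assumes the non-strict reading (server.maxSize <= cache capacity), since the original config sets the two values equal:

```python
# Values from the original runtime.properties (hypothetical check, not Druid code).
locations = [
    {"path": "/persistent/zk_druid", "maxSize": 80_000_000_000},
]
server_max_size = 80_000_000_000

# Total bytes the segment cache locations can hold combined.
cache_capacity = sum(loc["maxSize"] for loc in locations)
print(cache_capacity)

# The cache must be able to hold everything the Coordinator assigns,
# so server_max_size should not exceed cache_capacity.
assert server_max_size <= cache_capacity
```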

So then what's the point of server.maxSize, since it's not the actual 'max size'?
I can set server.maxSize to 0 and it will still be < sum(locations.maxSize).

If you set it to 0, then no segments will be assigned.

druid.server.maxSize should be set such that it works around the corner case I described in the previous email.