Historical node free disk space reported incorrectly in coordinator UI

Early this week we had an issue where our tables wouldn't load into Druid: they would stay as red circles at < 99% loaded. The problem was that Druid thought there wasn't as much disk space as there actually is. According to the coordinator, there is only 60 GB available per node, so once usage filled up to 240 GB it refused to load the data from S3 until we killed some of those datasources.

The thing is, those machines have far more than 60 GB; in fact, the volume the historical data is loaded onto is a 500 GB RAID volume, so there is more than enough space. My guess is that when the coordinator starts up it picks up the disk space of the volume where the binary is stored, which would be roughly 60 GB on the root volume. Is there a way of configuring this so it knows that it actually has more space?

[root@ip-172-17-27-84 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         63G   72K   63G   1% /dev
tmpfs            63G     0   63G   0% /dev/shm
/dev/nvme0n1p1   59G   30G   30G  50% /
/dev/md0        590G   45G  516G   8% /mnt

What are your config values for druid.segmentCache.locations and druid.server.maxSize (on your historicals)?

Ah, that was it, thanks! I checked our runtime.properties, which had those values, and saw they were indeed set to that smaller size. Changing them and restarting the servers fixed it.
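
For anyone who hits the same symptom: the historical advertises its capacity from these two settings rather than from the actual filesystem, so if they default to (or point at) something sized like the root volume, that's what the coordinator will report. Here's a minimal sketch of what the historical's runtime.properties might look like for a big volume mounted at /mnt, as in the df output above; the exact path under /mnt and the byte counts are illustrative, not taken from this thread:

# Where this historical stores segments, and how much it may use there
# (path and size are examples; adjust to your own layout)
druid.segmentCache.locations=[{"path":"/mnt/druid/segment-cache","maxSize":500000000000}]
# Total capacity (in bytes) this historical advertises to the coordinator;
# it should not exceed the sum of maxSize across the cache locations
druid.server.maxSize=500000000000

After changing these, restart the historicals; the coordinator UI should then show the larger capacity per node.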