Strange total cluster storage shown on the Druid console

hi, all

I have 5 historical nodes, each with 1 TB of disk.

df -h

Filesystem      Size  Used  Avail  Use%  Mounted on
/dev/vda1        20G  2.3G    17G   13%  /
devtmpfs         32G     0    32G    0%  /dev
tmpfs            32G     0    32G    0%  /dev/shm
tmpfs            32G  145M    32G    1%  /run
tmpfs            32G     0    32G    0%  /sys/fs/cgroup
/dev/vdb1      1008G  5.1G   952G    1%  /********

but on the Druid console, it shows:

n + 4
1.48 TB free
5 nodes, 300 GB each
1.5 TB total
19 GB (1.3%) used

What does "n + 4" mean, and why only 300 GB of storage per node?


n+4 means you can lose 4 historical nodes and still have enough capacity to load all of your data.

The 300 GB is because historical nodes don't automatically use the full space available on their disk. There's a concept of "maxSize" that you can use to limit how much space is used per node. This is useful if you want to ensure that you don't overcommit your nodes. The configs to edit if you want to adjust that are:

  • druid.server.maxSize

  • druid.segmentCache.locations
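For example, a minimal sketch of those two settings in a historical node's runtime.properties, assuming a hypothetical cache path of /mnt/druid/segments and a limit of roughly 950 GB (values are in bytes; pick numbers that fit your actual disk):

```
# Maximum total size of segments this historical node will serve, in bytes.
druid.server.maxSize=950000000000

# Local segment cache location(s); the maxSize values here should match
# (or sum to) druid.server.maxSize.
druid.segmentCache.locations=[{"path":"/mnt/druid/segments","maxSize":950000000000}]
```

After changing these, restart the historical node; the console totals should then reflect the new per-node capacity.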


I’ll try to change the maxSize.

On Friday, September 11, 2015, at 2:42:33 AM UTC+8, Gian Merlino wrote: