For my Druid cluster I have chosen to use Hadoop HDFS as my deep storage for segments. Unfortunately, the current documentation doesn't suggest values for the HDFS block size: http://druid.io/docs/latest/configuration/hadoop.html
As such, can someone comment on the ideal HDFS block size for Druid? My current HDFS block size is 128 MB, and I am thinking a smaller value would reduce my disk usage with little effect on performance.
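For reference, here is roughly what I'd be changing. This is a sketch of the standard `dfs.blocksize` setting in `hdfs-site.xml` (the 32 MB value shown is just the kind of smaller size I'm considering, not a recommendation from the Druid docs):

```xml
<!-- hdfs-site.xml: cluster-wide default block size.
     32 MB shown as an example; the HDFS default is 128 MB. -->
<property>
  <name>dfs.blocksize</name>
  <value>33554432</value>
</property>
```

As I understand it, the block size can also be overridden per write (e.g. `hdfs dfs -D dfs.blocksize=33554432 -put ...`), so it could in principle be tuned for the Druid deep-storage path without changing the cluster default. Note too that HDFS blocks are not padded on disk, so a segment smaller than the block size only consumes its actual size; the block size mainly affects NameNode metadata and read parallelism rather than raw disk usage.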
I thought I would also throw in some related links I found: