Druid Deep Storage HDFS Block Size?

For my Druid cluster I have chosen to use Hadoop HDFS as the deep storage for segments. Unfortunately, the current documentation doesn’t suggest values for the HDFS block size: http://druid.io/docs/latest/configuration/hadoop.html

As such, can someone comment on the ideal HDFS block size for Druid? My current HDFS block size is 128MB, and I am thinking a smaller value might reduce my disk usage with little effect on performance.
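For context, my deep-storage configuration looks roughly like the following sketch (property names are from the Druid HDFS deep-storage docs; the namenode address and directory are placeholders for my actual values):

```properties
# Load the HDFS deep-storage extension
druid.extensions.loadList=["druid-hdfs-storage"]

# Write segments to HDFS; the path below is a placeholder
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode:8020/druid/segments
```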

Thought I would throw in some related links I found:

Hey Mark,

I don’t think blocks in HDFS actually take up the full block size on disk if the file is smaller than the block size; the block size is just how HDFS chunks up larger files, and a file smaller than one block consumes only its actual length (plus a little metadata) on the datanodes. So I don’t think changing the block size would have much of an effect on the storage needs or performance of Druid segments.
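You can verify this on a live cluster with the standard Hadoop CLI. A quick sketch (the segment paths below are placeholders for wherever your segments actually live):

```shell
# Compare the configured block size (%o) with the actual file length (%b)
# for one segment file
hdfs dfs -stat "blocksize=%o bytes, filelen=%b bytes" /druid/segments/example/index.zip

# Space consumed under the segment directory; reported per file length,
# not rounded up to the block size
hdfs dfs -du -h /druid/segments

# Per-file block layout, to confirm a small file occupies a single short block
hdfs fsck /druid/segments -files -blocks
```

If the `du` totals track the file lengths rather than multiples of 128MB, the block size isn’t costing you disk space.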

I would definitely agree with you on the point about HDFS disk usage; I have found articles saying the same thing (http://stackoverflow.com/questions/13012924/large-block-size-in-hdfs-how-is-the-unused-space-accounted-for).