Optimal segment size

The Druid documentation mentions that the optimal segment size is between 200 and 700 MB. Is this the size a segment has in deep storage, where it is gzip-compressed, or the size it occupies on the local disk of a historical node?



Hi Sascha,

It is the size of the segment as loaded by the historical nodes.
Also keep in mind that another important measure of optimality is the number of rows per segment: try to stay as close as possible to 5M rows. A segment with far more rows than that can cause significant performance degradation at query time.
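As a reference point, the row count per segment can be controlled through the `partitionsSpec` in the ingestion tuning config. A minimal sketch (assuming native batch ingestion with hashed partitioning; adjust the target to your data):

```json
{
  "tuningConfig": {
    "type": "index_parallel",
    "partitionsSpec": {
      "type": "hashed",
      "targetRowsPerSegment": 5000000
    }
  }
}
```

After ingestion, you can verify the resulting sizes and row counts by querying the `sys.segments` system table.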