I have 7 days of data imported into HDFS, with 10-20 GB of zipped segments per day.
Of those, only 3 days show up with the expected size (around 40 GB) in the coordinator UI.
Also, the coordinator logs "Polled and found 1,568 segments in the database", but in MySQL there are 3,000 rows in druid_segments.
I also noticed that reprocessing one day took it from the lowest HDFS usage to the highest.
What could explain this? The MapReduce jobs completed in both cases; I can only speculate that the metadata was not fully written, but so far I have not found any exception for that in the logs.
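One thing I want to rule out: as far as I understand, the coordinator's poll only counts segments whose `used` flag is set, so the gap between 1,568 polled segments and 3,000 rows might just be unused (overshadowed or disabled) segments. A quick check against the metadata store (assuming the default druid_segments table name) would be something like:

```sql
-- Compare used vs. unused segments per datasource;
-- the coordinator should only poll rows where used = true.
SELECT dataSource, used, COUNT(*) AS segment_count
FROM druid_segments
GROUP BY dataSource, used;
```

If the `used = true` count matches the polled number, the extra rows are likely old segment versions that were overshadowed when the days were reprocessed, rather than metadata that failed to be written.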