How to size Historical nodes based on data volume

Hello folks, we have a Druid / MinIO setup in the lab and are trying to determine how to choose the hardware specs and number of Historical nodes in our architecture. My questions are as follows:

  • I am new to Druid, but my understanding is that, as part of its design, the segments in MinIO are periodically downloaded to the Historical node and kept in memory for querying; hence we need to make sure each Historical node has plenty of memory. Our initial design allocated 64 GB of RAM to the Historical node. At the moment we have one Historical node, and its current memory usage is around 50 GB. Our total bucket size on MinIO is 404 MB so far, collected over 50 days, and we would like to derive some design guidelines from these numbers. Is there a way to determine how much space each segment in MinIO consumes in the Historical node's memory? Is it possible to determine a ratio from metadata kept somewhere in the Druid architecture?
  • I believe the MiddleManager stores real-time data in memory. Is there a method to determine the same thing for the MiddleManager?
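For context, here is the rough back-of-the-envelope calculation we have been doing with the numbers above. Note that the 50 GB figure likely includes the JVM heap plus OS page cache used for memory-mapped segments, so treat the ratio as a crude upper bound rather than a true per-segment cost:

```python
# Crude sizing estimate from our observed numbers (assumption: observed
# memory usage is a reasonable proxy for the Historical's footprint).
deep_storage_mb = 404          # total bucket size in MinIO (MB)
historical_mem_mb = 50 * 1024  # observed Historical memory usage (MB)
days_collected = 50            # collection period so far

# Memory consumed per MB of deep-storage data (upper bound: includes
# JVM heap and page cache, not just segment data).
ratio = historical_mem_mb / deep_storage_mb

# How fast deep storage is growing, for capacity planning.
mb_per_day = deep_storage_mb / days_collected

print(f"memory / deep-storage ratio: {ratio:.1f}x")
print(f"deep-storage growth: {mb_per_day:.2f} MB/day")
```

If there is a more accurate per-segment figure available from Druid's own metadata, that would obviously be better than this estimate.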
Thanks,