We have 2TB of memory in total across 8 historical nodes, and 4TB of segments in deep storage. We have enabled caching on the historical nodes (local cache, not Memcached). Once 2TB of segments have been loaded onto the historicals, no further segments are loaded, even when we fire a query spanning all 4TB of segments. I was expecting that if the queried data is not already loaded, Druid would pull it from deep storage, evicting some of the segments already held on the historicals.
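For reference, my understanding is that the amount of segment data each historical will load is capped by settings along these lines (a sketch with placeholder paths and sizes, not our exact values):

  # placeholder: 268435456000 bytes = 250GB per node, i.e. 2TB across 8 nodes
  druid.server.maxSize=268435456000
  druid.segmentCache.locations=[{"path":"/mnt/druid/segment-cache","maxSize":268435456000}]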
I want to understand how historical nodes load, swap, and cache segments. It is not practical to have enough memory to match the size of our deep storage. I suspect I am missing a setting or configuration that swaps/refreshes segments and cache entries on demand on the historical and broker nodes. Should I disable caching, or am I missing a trick here?
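By "caching" here I mean the local query result cache on the historicals and broker, enabled with properties roughly like these (local cache since we are not using Memcached; the values are illustrative, not our exact configuration):

  # placeholder values, not our actual settings
  druid.cache.type=local
  druid.cache.sizeInBytes=2147483648
  druid.historical.cache.useCache=true
  druid.historical.cache.populateCache=true
  druid.broker.cache.useCache=true
  druid.broker.cache.populateCache=true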
The following are my historical and Broker settings:
HTTP server threads
Processing threads and buffers
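By these headings I mean properties along the following lines (the numbers shown are placeholders, not our actual values):

  # HTTP server threads (placeholder value)
  druid.server.http.numThreads=40

  # Processing threads and buffers (placeholder values)
  druid.processing.numThreads=15
  druid.processing.buffer.sizeBytes=1073741824
  druid.processing.numMergeBuffers=4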