We have a large lookup from a table in MySQL (about 350 MB, 1.6 million rows). I tried to configure Druid to use off-heap caching
to lower the memory footprint. Off-heap caching is enabled only on the broker node for testing purposes. I added the following lines to common.runtime.properties
and restarted the broker node.
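For context, enabling the off-heap namespace cache usually involves properties along these lines (a sketch based on the druid-lookups-cached-global extension; exact property names and extension lists can vary by Druid version and setup):

```properties
# Load the globally cached lookup extension; a MySQL connector extension is
# also needed to read the lookup table over JDBC (assumed setup, adjust to taste)
druid.extensions.loadList=["druid-lookups-cached-global", "mysql-metadata-storage"]

# Back the namespace lookup cache with off-heap (direct) memory
# instead of the JVM heap
druid.lookup.namespace.cache.type=offHeap
```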
Were you able to solve your issue? I’ve been investigating the lookup feature (I have several million rows) and haven’t had much luck with it. I’m almost at the point of modifying the plugin to suit our needs and to avoid breakage on upgrades…
The current implementation has a limitation (even with off-heap storage): it first loads the entire lookup dataset into the Java heap before flushing it off-heap. So even in this mode you need enough heap to hold the lookup temporarily. It is a limitation we are definitely motivated to address soon. If you get to it before us and are interested in contributing a patch upstream, it would definitely be appreciated. Perhaps something that streams the lookup data from its source into the off-heap cache without materializing it in the heap first.
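The streaming idea can be sketched roughly as follows. This is not Druid's actual lookup code; it is a minimal illustration using a direct `ByteBuffer` as a stand-in for the off-heap cache, consuming rows one at a time (as a JDBC cursor would deliver them) so the full value set never sits on the heap at once. All class and method names here are hypothetical:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Sketch: stream key/value rows into off-heap (direct) memory one at a time.
// Only a small on-heap index of (offset, length) per key remains on the heap;
// the value bytes themselves live in the direct buffer.
public class OffHeapLookupSketch {
    private final ByteBuffer offHeap;                          // direct buffer, outside the heap
    private final Map<String, int[]> index = new HashMap<>();  // key -> {offset, length}

    public OffHeapLookupSketch(int capacityBytes) {
        offHeap = ByteBuffer.allocateDirect(capacityBytes);
    }

    // Consume rows from a streaming source (e.g. a forward-only JDBC cursor)
    // without first collecting them into a heap map.
    public void load(Iterator<String[]> rows) {
        while (rows.hasNext()) {
            String[] row = rows.next();        // {key, value}
            byte[] value = row[1].getBytes(StandardCharsets.UTF_8);
            int offset = offHeap.position();
            offHeap.put(value);                // value bytes go straight off-heap
            index.put(row[0], new int[]{offset, value.length});
        }
    }

    public String get(String key) {
        int[] loc = index.get(key);
        if (loc == null) return null;
        byte[] out = new byte[loc[1]];
        ByteBuffer view = offHeap.duplicate(); // independent position, same memory
        view.position(loc[0]);
        view.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        OffHeapLookupSketch cache = new OffHeapLookupSketch(1 << 20);
        cache.load(List.of(new String[]{"k1", "v1"},
                           new String[]{"k2", "v2"}).iterator());
        System.out.println(cache.get("k1"));
        System.out.println(cache.get("k2"));
    }
}
```

A real implementation would also need eviction, concurrency control, and resizing; the point of the sketch is only that peak heap usage stays proportional to the key index, not to the whole dataset.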
I do think focusing on off-heap storage is the brighter future for the lookup feature. On-heap storage has proven to lead to too many memory-management issues.