I’m seeing an issue with data being served by a historical node. Some background: over the weekend the historical node maxed out.
After that I increased **druid.server.maxSize** from 30GB to 300GB; the Hadoop indexer was running fine during that time.
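For reference, this is roughly the change I made in the historical node's `runtime.properties` (the cache path is specific to my setup, and I've assumed the segment-cache `maxSize` should be kept in step with the server limit):

```
# increased from 30GB to 300GB after the node maxed out
druid.server.maxSize=300000000000

# segment cache location on this node -- path is my local setup,
# maxSize bumped to match druid.server.maxSize
druid.segmentCache.locations=[{"path":"/var/druid/segment-cache","maxSize":300000000000}]
```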
However, many of the segments published after that period don’t show up in query results, although I do see them in deep storage.
Do I have to reload those segments again, or is a load-rule change required to get them reloaded?
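For reference, this is roughly how I’m comparing what’s in deep storage against what the historicals are actually serving, via the Druid SQL `sys.segments` table (the datasource name is a placeholder):

```
-- segments that are published (in deep storage / metadata)
-- but not available on any historical
SELECT "segment_id", "start", "end"
FROM sys.segments
WHERE "datasource" = 'my_datasource'
  AND "is_published" = 1
  AND "is_available" = 0;
```

Segments returned by this query are the ones I’d expect the coordinator to assign to historicals but which aren’t being served.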