Historical node missing segments

Hello,

I’m seeing an issue with data served by a historical node. Some background: over the weekend the historical node maxed out its storage capacity.

After that I increased **druid.server.maxSize** from 30GB to 300GB; the Hadoop indexer was running fine during that time.

However, I don’t see many of the segments published after that period when querying, although I do see them in deep storage.

Do I have to reload those segments again, or is a rule change required to reload them?

~Biswajit

After changing druid.server.maxSize I restarted the cluster.
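
A quick way to confirm the new limit actually took effect after the restart is to ask the coordinator what each historical reports. A minimal Python sketch, assuming `requests` is installed and using `<coordinator_ip>` as a placeholder for the coordinator address:

```python
# Sketch: confirm each historical picked up the new maxSize after the restart.
import requests

COORDINATOR = "http://<coordinator_ip>:8081"  # placeholder address

# "?simple" returns a summary per server, including currSize and maxSize (bytes).
servers = requests.get(f"{COORDINATOR}/druid/coordinator/v1/servers?simple").json()

for s in servers:
    if s["type"] == "historical":
        print(f'{s["host"]}: {s["currSize"] / 2**30:.1f} GB used '
              f'of {s["maxSize"] / 2**30:.1f} GB')
```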

Hi,

You can get an overview of your segments and datasources in the coordinator console at <coordinator_ip>:8081.
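
If you prefer the API to the console, the coordinator exposes the same information over HTTP. A sketch under the same assumption about the coordinator address:

```python
# Sketch: list datasources and how fully loaded the coordinator considers them.
import requests

COORDINATOR = "http://<coordinator_ip>:8081"  # placeholder address

# Percentage of each datasource's used segments that historicals have loaded.
loadstatus = requests.get(f"{COORDINATOR}/druid/coordinator/v1/loadstatus").json()
for datasource, pct_loaded in loadstatus.items():
    print(f"{datasource}: {pct_loaded:.1f}% available")
```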

You can also see which segments are or are not being served, and check the resources of your historical nodes (whether they still have room to load data).
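
One way to see exactly which segments are missing is to diff the segment ids the metadata store considers used against the ones actually being served. A sketch; the datasource name is a placeholder:

```python
# Sketch: segments marked used in the metadata store but not currently served.
import requests

COORDINATOR = "http://<coordinator_ip>:8081"  # placeholder address
DATASOURCE = "your_datasource"                # placeholder datasource name

# Segment ids the metadata store knows about and considers used.
used = set(requests.get(
    f"{COORDINATOR}/druid/coordinator/v1/metadata/datasources/{DATASOURCE}/segments"
).json())

# Segment ids currently served by historicals for this datasource.
served = set(requests.get(
    f"{COORDINATOR}/druid/coordinator/v1/datasources/{DATASOURCE}/segments"
).json())

for segment_id in sorted(used - served):
    print("not served:", segment_id)
```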

If your datasource is reported as fully available yet your historicals are not serving all the segments from your deep storage, it sounds more like a metadata storage problem.
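
To check the metadata side directly, you can look at whether the rows for those segments exist and are marked used in the `druid_segments` table. A sketch, assuming a MySQL metadata store with the default `druid` table prefix; host and credentials are placeholders:

```python
# Sketch: count used vs unused segment rows per datasource in the metadata store.
import pymysql  # assumes a MySQL metadata store

conn = pymysql.connect(host="metadata-host", user="druid",
                       password="...", database="druid")  # placeholders
with conn.cursor() as cur:
    cur.execute("""
        SELECT dataSource, used, COUNT(*)
        FROM druid_segments
        GROUP BY dataSource, used
    """)
    for datasource, used, n in cur.fetchall():
        state = "used" if used else "unused"
        print(f"{datasource}: {n} {state} segments")
conn.close()
```

If the missing segments show up here as unused, the coordinator will never ask a historical to load them; marking them used again (or adjusting your load rules) makes them eligible for loading.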

–> https://gist.github.com/oodavid/2206527 Backing up your metadata storage like this is good practice in case you lose it! I use it in production and it works perfectly.
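
That gist is a shell script; the core idea (dump the metadata database on a schedule and ship the dump somewhere safe) looks roughly like this in Python, with every host and credential below a placeholder:

```python
# Sketch: dump the Druid metadata database to a dated SQL file (placeholders).
import subprocess
from datetime import date

outfile = f"druid-metadata-{date.today()}.sql"
with open(outfile, "wb") as f:
    subprocess.run(
        ["mysqldump", "--host=metadata-host", "--user=druid",
         "--password=...", "druid"],
        stdout=f, check=True)
print("wrote", outfile)
```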

Let us know what your investigation turns up.

Ben