Number of Historical nodes required to handle 1 TB of data

Hello,

I am trying to load segments that add up to about 1 TB. I am running 4 r3.8xlarge instances as historical nodes. The coordinator UI shows it like this.

When I try a simple query, such as a group by on a single column, it times out. In the coordinator logs I can see entries related to segment loading:

2017-05-17T05:09:49,522 INFO [Master-PeonExec--0] io.druid.server.coordinator.LoadQueuePeon - Server[/druid/prod/loadQueue/x.x.x.x:8083] processing segment[Test_Feed_2017-03-07T20:22:00.000Z_2017-03-07T20:23:00.000Z_2017-05-11T02:49:21.465Z]

Do I need to add more historical nodes? I have created one-minute segments, each approximately 30-40 MB in size.
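
For rough context, here is a quick back-of-the-envelope count of how many segments that works out to (a sketch only, assuming ~35 MB per segment on average):

    # Rough segment count for the numbers above (assumed averages, not exact).
    total_data_mb = 1024 * 1024      # ~1 TB of segments
    avg_segment_mb = 35              # ~30-40 MB per one-minute segment
    historicals = 4                  # r3.8xlarge instances

    total_segments = total_data_mb / avg_segment_mb
    print(f"~{total_segments:,.0f} segments total")
    print(f"~{total_segments / historicals:,.0f} segments per historical")
    # Roughly 30,000 segments in total, ~7,500 per node.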

Thanks,

Navneet

The query timeout is not related to the coordinator. What you see in that log line is a segment being loaded, not a query being processed. To investigate the timeout you need to look at the historical node logs.

Adding historicals can certainly help, since it gives you more parallel compute, but keep in mind that there are many knobs you can use to tune the cluster. I would start by optimizing segment sizes to be around 600 MB.
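
For example, something along these lines (a sketch only, written as a Python dict mirroring the JSON ingestion spec; the exact fields depend on your Druid version and ingestion method, and the row target is a hypothetical number you would tune until segment files land near ~600 MB):

    # Sketch of the granularity/partitioning knobs (assumed Hadoop batch reindexing;
    # field names and values are illustrative, not a drop-in spec).
    segment_size_knobs = {
        "granularitySpec": {
            "type": "uniform",
            "segmentGranularity": "HOUR",   # coarser than per-minute segments
            "queryGranularity": "MINUTE",   # keep minute-level rollup for queries
        },
        "tuningConfig": {
            "type": "hadoop",
            "partitionsSpec": {
                "type": "hashed",
                # Rows per shard; tune so each segment file ends up near ~600 MB.
                "targetPartitionSize": 5000000,
            },
        },
    }

Reindexing the existing per-minute segments into coarser intervals would also cut the total segment count dramatically, which reduces per-segment overhead on the historicals at query time.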