Query read performance over a large segment

I am facing a situation where I have to load a table with close to 500 million records. The challenge is that this table holds the previous week's item sales history, so the only time-related column I have is a year-week id. I could attach an extra column with a single timestamp value for every row.

As I understand it, Druid creates segments based on the timestamp value, so I guess the above table's 500 million rows will all be written to a single partition. Please advise me if I am wrong.
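For reference, here is roughly the setup I have in mind, sketched as a native batch (index_parallel) ingestion spec built in Python. The datasource and column names (weekly_item_sales, week_start_ts, item_id, and so on) are just placeholders, and the field layout follows the index_parallel spec as far as I understand it from the docs; please correct me if any of it is off:

import json

# One timestamp value attached to every one of the ~500M rows (placeholder value).
WEEK_START = "2023-07-03T00:00:00Z"

ingestion_spec = {
    "type": "index_parallel",
    "spec": {
        "dataSchema": {
            "dataSource": "weekly_item_sales",
            # The extra column I would attach; every row carries WEEK_START.
            "timestampSpec": {"column": "week_start_ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["year_week_id", "item_id", "store_id"]},
            "granularitySpec": {
                # With only one distinct timestamp, I expect all rows to fall
                # into a single WEEK time chunk, whatever granularity I pick here.
                "segmentGranularity": "WEEK",
                "queryGranularity": "NONE",
                "rollup": False,
            },
        },
        "ioConfig": {
            "type": "index_parallel",
            "inputSource": {"type": "local", "baseDir": "/data/sales", "filter": "*.csv"},
            "inputFormat": {"type": "csv", "findColumnsFromHeader": True},
        },
        "tuningConfig": {"type": "index_parallel"},
    },
}

# I would submit this to the Overlord task endpoint (POST /druid/indexer/v1/task);
# printed here just for inspection.
print(json.dumps(ingestion_spec, indent=2))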

If so, I am really concerned about read performance, as this table is queried by an application that requires very low latency. I would appreciate advice on the correct approach to write the data, as well as any relevant configuration ideas to make reads faster.

Thanks.

Hi, see inline.