Query time increases from 2s to 200s

As part of a performance improvement we increased "maxRowsInMemory" from 100k to 1M for realtime ingestion in the Tranquility server.json settings ("segmentGranularity": "hour", "queryGranularity": "hour", and "windowPeriod": "PT1H"). Tasks run on 24-core machines with 160 GB of memory and druid.worker.capacity = 10, and each task has a 6 GB heap and 5 GB of buffers. We are running 10 tasks consuming around 500-700M rows of data per hour, with only 2 tasks running on each 24-core machine. After changing "maxRowsInMemory" from 100k to 1M, query times went from 2s to 200s. In the metrics we can see that query time has gone up on all the indexers. Can you let us know whether any MiddleManager properties need to be tuned when we increase maxRowsInMemory?
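For context, maxRowsInMemory lives in the tuningConfig of each datasource in Tranquility's server.json. A rough sketch of the relevant fragment is below; the datasource name is a placeholder, and the intermediatePersistPeriod and maxPendingPersists values are illustrative, not our actual settings:

```json
{
  "dataSources": {
    "example_datasource": {
      "spec": {
        "tuningConfig": {
          "type": "realtime",
          "maxRowsInMemory": 1000000,
          "intermediatePersistPeriod": "PT10M",
          "maxPendingPersists": 0
        }
      }
    }
  }
}
```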

Thanks in advance

Hi Rahul,
Please check that your segment sizes are in the range of 300MB to 800MB. If they are not, you may need compaction to ensure that segment sizes are optimal.
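If compaction turns out to be needed, a minimal compaction task spec submitted to the Overlord might look roughly like the following (the datasource name and interval are placeholders, and the exact fields available depend on your Druid version):

```json
{
  "type": "compact",
  "dataSource": "example_datasource",
  "interval": "2019-01-01/2019-01-02"
}
```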
Is it possible for you to consider a scan query instead of a select query?
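For reference, a minimal scan query looks roughly like this; the datasource, interval, and column names below are placeholders. Unlike select, scan streams results without materializing a paging structure, which is typically much lighter on memory:

```json
{
  "queryType": "scan",
  "dataSource": "example_datasource",
  "intervals": ["2019-01-01T00:00:00Z/2019-01-02T00:00:00Z"],
  "columns": ["__time", "dim1", "metric1"],
  "limit": 100
}
```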

Please take a look at this older thread – https://groups.google.com/forum/#!topic/druid-user/O56ewVcpzV8



If I am reading correctly, you have both a Historical and a MiddleManager running on the same server. The MiddleManager is eating up 10 cores and approximately 60 GB of memory. I feel your Historical has enough threads and memory to run optimally.

But having said that, what does your Historical config look like? Can you share your Historical configs (both JVM and runtime), how many segments you have, and the size of each segment? What is the size of 1M rows in your dataset?

Our segment size is around 400MB. No, our Historicals and MiddleManagers are running on different servers. Everything is running fine, with the query servers responding to queries within 2s. We were seeing lag in Kafka while consuming realtime data, so we increased "maxRowsInMemory" from 100k to 1M, and query times went from 2s to 200s. Is there any tuning that needs to be done on the MiddleManagers?