Can I increase the number of processing threads to improve parallel query performance?

I found that query performance is poor when there are many queries per minute. I set the number of processing threads for the historical nodes following the performance tuning guide. Can I increase the number of threads to improve query performance, since the CPU usage of the historical nodes is always low?

Hi Ze,

You can definitely test oversubscribing processing.numThreads to help with concurrency, and it may help a little, especially if your cluster is less ingestion-heavy and more query-heavy.
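For context, this is the setting being discussed, as it would appear in a historical's runtime.properties. The values below are purely illustrative, not a recommendation (property names are from the Apache Druid configuration reference):

```properties
# Historical runtime.properties (illustrative values)
# Default guidance is roughly (number of cores - 1); "oversubscribing"
# means setting it higher than that.
druid.processing.numThreads=20
# Each processing thread gets one off-heap buffer, so raising numThreads
# also raises direct memory requirements.
druid.processing.buffer.sizeBytes=500000000
```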

I would also check your connection pooling settings to make sure your cluster can handle the number of HTTP connections one would expect with heavy query concurrency.
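The connection-pool settings involved are the ones below (names from the Druid configuration reference; values are illustrative placeholders):

```properties
# Broker runtime.properties: size of the connection pool the broker
# uses to fan queries out to historicals and other data servers.
druid.broker.http.numConnections=20

# Historical runtime.properties: HTTP server threads available to
# serve those incoming broker connections.
druid.server.http.numThreads=60
```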

I have three brokers and 5 historical nodes (16 CPUs each) in the “hot” tier. The HTTP connection count is set to 50 on the brokers and 160 on the historical nodes. Can I set the number of processing threads on the historical nodes to maybe 30?

The highest I would go on processing.numThreads on a historical would be 20 in your case.

If httpConnections on your brokers is set to 50, then I would set http.numThreads on your historicals to (3 × 50 + 10) = 160.
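The sizing rule above can be sketched as a small helper. The function name and the headroom default are my own for illustration; the formula itself is the one from this thread (brokers × connections per broker, plus some headroom):

```python
def historical_http_threads(num_brokers: int,
                            broker_http_connections: int,
                            headroom: int = 10) -> int:
    """Suggest a druid.server.http.numThreads value for historicals.

    Heuristic from the thread: every broker can open up to
    broker_http_connections connections to a historical, so the
    historical needs at least num_brokers * broker_http_connections
    server threads, plus a little headroom for other traffic.
    """
    return num_brokers * broker_http_connections + headroom


# The cluster in this thread: 3 brokers, 50 connections each.
print(historical_http_threads(3, 50))  # -> 160
```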

I think you may be better served by converting one or even two of the brokers to historicals, unless you have a reason for running 3 brokers.

Can you tell me why it’s a better choice to convert one or two brokers? I think multiple brokers can balance our queries.

The recommendation is that you scale your broker vertically when tuning for concurrency.

In the official docs, you’ll notice that the recommended ratio of historicals to brokers is 15:1

Adding more historicals will help process more segments in a given unit of time.

Thanks for your advice. I have 5 “latest” historical nodes, 2 “hot” nodes, 10 “warm” nodes, and 7 “cold” nodes, so the ratio is 8:1.

Another question: I tried to reindex data into a new datasource with a new query granularity, but I couldn’t get any data from the new datasource, even though the new segments from the reindex already exist and are available.