My team is using Druid for BI reporting. We designed a datasource with 10 dimensions and 100 metrics. I don't know whether Druid supports such a design well, because in most examples I see, the number of metrics is less than or equal to the number of dimensions.
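For context, the schema follows the usual Druid ingestion-spec pattern sketched below; the column names here are placeholders rather than our real ones, and the real spec simply repeats the same kind of longSum/doubleSum aggregators about 100 times:

    "dimensionsSpec": {
      "dimensions": ["dim_01", "dim_02", "dim_03"]
    },
    "metricsSpec": [
      { "type": "count", "name": "count" },
      { "type": "longSum", "name": "metric_001", "fieldName": "metric_001" },
      { "type": "doubleSum", "name": "metric_002", "fieldName": "metric_002" }
    ]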
Currently, we are running into serious query performance problems.
Our Druid cluster: two nodes with an 8-core CPU and 16 GB RAM each, and one node with an 8-core CPU and 32 GB RAM (the Broker runs on this one).
Broker configuration: druid.processing.numThreads=7, druid.processing.numMergeBuffers=7; JVM options: -Xmx20g and -XX:MaxDirectMemorySize=6g.
Historical configuration: druid.processing.numThreads=7; JVM options: -Xmx4g and -XX:MaxDirectMemorySize=2g.
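For reference, the same settings written out the way they appear on disk (assuming the standard Druid layout of a runtime.properties plus a jvm.config file per service):

    # Broker runtime.properties
    druid.processing.numThreads=7
    druid.processing.numMergeBuffers=7
    # Broker jvm.config
    -Xmx20g
    -XX:MaxDirectMemorySize=6g

    # Historical runtime.properties
    druid.processing.numThreads=7
    # Historical jvm.config
    -Xmx4g
    -XX:MaxDirectMemorySize=2g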
When I run query load tests with JMeter (a thread group of 10 threads, looping forever), the Broker's CPU usage is around 400% and the Historical's CPU usage is also around 400%. I don't understand why CPU utilization never reaches a bottleneck. What determines the Broker's CPU usage? What determines the Historical's CPU usage?
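For illustration only (this is not our exact query, just the general shape of the request body JMeter POSTs to the Broker), the benchmarked queries look roughly like a native groupBy that touches a couple of the dimensions and a handful of the 100 aggregators:

    {
      "queryType": "groupBy",
      "dataSource": "report_datasource",
      "granularity": "day",
      "dimensions": ["dim_01", "dim_02"],
      "aggregations": [
        { "type": "longSum", "name": "metric_001", "fieldName": "metric_001" },
        { "type": "doubleSum", "name": "metric_002", "fieldName": "metric_002" }
      ],
      "intervals": ["2019-01-01/2019-02-01"]
    }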
Is Druid's query throughput determined by the Historical nodes? When I measured throughput, I got only about 2 TPS, and we only have 2 active Historical nodes.
I hope someone can help clear up these doubts. Thank you very much.