Historical nodes not visible after changing numThreads

Hi All,
I recently migrated our historical nodes to a new aws instance type, with more processing cores. I did this by bringing up the new instances, adding a rule to replicate data across all nodes, then stopped the older nodes.

The migration went smoothly. Afterwards, however, I changed the configuration of my historical nodes to match the new instance type. One property in particular, druid.processing.numThreads, seems to cause the historical nodes to become unavailable to the coordinator nodes.

It had previously been set to 7 (the old nodes had 8 cores), and the new configuration sets it to 31 (the new nodes have 31 cores). After this change goes in, the coordinator console shows all datasources as unavailable. I also tried waiting 5 minutes and restarting the coordinator node, but to no avail.

The interesting part is that if I change the value to something lower (e.g. 6), the coordinator node is fine and can still see the historical nodes. As soon as I move it above 7 (I tried 8), the historical nodes are foobar’d.

Is there any other configuration or setting that is tied to this that I am missing/forgetting?



I’ve actually never seen this error before. What version of Druid are you using?

Is it possible that they’re unable to start up? Historicals need enough direct memory available for a processing buffer for each thread, so it’s possible you ran out. If that’s what’s happening, there should be a message in the logs.
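For context, a minimal sketch of what a historical’s jvm.config might look like to accommodate 31 processing threads (the heap and direct-memory sizes here are illustrative assumptions, not values from this thread — they’d depend on your druid.processing.buffer.sizeBytes):

```
-server
-Xmx8g
-XX:MaxDirectMemorySize=18g
```

With a 512 MiB processing buffer, 31 threads would need roughly 16 GiB of direct memory, so 18g leaves some headroom.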

As Gian Merlino said, please check the logs. You may see something like the following:

"Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[%,d], memoryNeeded[%,d] = druid.processing.buffer.sizeBytes[%,d] * ( druid.processing.numThreads[%,d] + 1 )"

Ensure that maxDirectMemory is greater than druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1).
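To see why bumping numThreads from 7 to 31 can blow past the limit, here is a quick sketch of that arithmetic (the 512 MiB buffer size is an assumption for illustration; substitute your actual druid.processing.buffer.sizeBytes):

```python
# Sketch of Druid's direct-memory requirement:
#   memoryNeeded = druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1)
def required_direct_memory(buffer_size_bytes, num_threads):
    return buffer_size_bytes * (num_threads + 1)

buffer_size = 512 * 1024 ** 2  # assumed druid.processing.buffer.sizeBytes = 512 MiB

needed_old = required_direct_memory(buffer_size, 7)   # old setting: 7 threads
needed_new = required_direct_memory(buffer_size, 31)  # new setting: 31 threads

print(needed_old // 1024 ** 3)  # 4  (GiB)
print(needed_new // 1024 ** 3)  # 16 (GiB)
```

So with the same buffer size, the new setting needs roughly four times the direct memory, which would explain why anything above the old thread count fails until -XX:MaxDirectMemorySize is raised to match.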

On Wednesday, April 27, 2016 at 8:54:57 AM UTC+8, james…@optimizely.com wrote:

Yup, this was the problem. Thanks everyone!