I’ve recently started using Druid (so far I’m really impressed with the technology!) and I’ve been experimenting with streaming Avro-encoded data from Kafka using Tranquility Kafka.
When a new realtime task is created I see this error in the log:
2017-05-05T15:36:52,514 ERROR [main] io.druid.cli.CliPeon - Error when starting up. Failing.
com.google.inject.ProvisionException: Unable to provision, see the following errors:
1) Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[1,908,932,608], memoryNeeded[2,147,483,648] = druid.processing.buffer.sizeBytes[536,870,912] * (druid.processing.numMergeBuffers + druid.processing.numThreads + 1)
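To spell out the arithmetic in that message (a quick sketch; the factor of 4 is inferred from the reported numbers, since the log doesn't show numMergeBuffers and numThreads individually):

```python
# Reproducing the direct-memory check from the error message.
buffer_size = 536_870_912    # druid.processing.buffer.sizeBytes from the log
# memoryNeeded / buffer_size == 4, so on this peon
# druid.processing.numMergeBuffers + druid.processing.numThreads + 1 == 4
slots = 4
memory_needed = buffer_size * slots
max_direct = 1_908_932_608   # maxDirectMemory reported by the peon

print(memory_needed)               # 2147483648, matching the error
print(memory_needed > max_direct)  # True, so the task fails the check
```

So the peon thinks it has ~1.9 GB of direct memory available, far short of both the 2 GB it needs and the 30 GB I configured.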
I have already tried raising -XX:MaxDirectMemorySize to 30g in the MiddleManager's jvm.config, but that doesn't appear to help. Is this the correct place to change the memory setting for this error? Older threads suggested it is; however, I keep seeing the error even after trying several different values.
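For reference, the relevant lines of the jvm.config I edited now look roughly like this (flag values are taken from the running process shown below; the file path is just the standard conf layout, so treat it as an assumption):

```
# conf/druid/middleManager/jvm.config (assumed path)
-server
-Xms64m
-Xmx64m
-XX:MaxDirectMemorySize=30g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
```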
$ ps -ef | grep java | grep middleManager
root 22713 15312 0 15:14 pts/2 00:00:12 java -server -Xms64m -Xmx64m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.io.tmpdir=var/tmp -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -XX:MaxDirectMemorySize=30g -cp conf/druid/_common:conf/druid/middleManager:lib/* io.druid.cli.Main server middleManager
Here are some setup details:
I have a test cluster with 1 overlord, 1 coordinator, 3 middleManagers, 3 brokers, and 3 historicals.
The versions I'm using are:
druid-avro-extensions 0.10.0 (using the Confluent Schema Registry)
Appreciate any help!