Not enough direct memory (already changed druid.processing.buffer.sizeBytes)

Hi all,

I've run into this problem and I'm very confused.

2018-02-07T09:38:01,358 INFO [main] io.druid.cli.CliPeon - Starting up with processors[32], memory[1,188,560,896], maxMemory[3,817,865,216].

  1. Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[3,817,865,216], memoryNeeded[41,875,931,136] = druid.processing.buffer.sizeBytes[1,073,741,824] * (druid.processing.numMergeBuffers[7] + druid.processing.numThreads[31] + 1)
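In other words, the peon is asking for 1,073,741,824 * (7 + 31 + 1) = 41,875,931,136 bytes (about 39 GB) of direct memory, while only 3,817,865,216 bytes (about 3.6 GB) are available.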

The problem is that I've already changed all the configuration files to set druid.processing.buffer.sizeBytes to 256 MB, but the error message still shows 1,073,741,824.

**I've also changed -Xms/-Xmx in my .bashrc.**

2018-02-07T09:38:01,661 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.query.metadata.SegmentMetadataQueryConfig] from props[druid.query.segmentMetadata.] as [io.druid.query.metadata.SegmentMetadataQueryConfig@340afaf8]

2018-02-07T09:38:01,665 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[class io.druid.query.groupby.GroupByQueryConfig] from props[druid.query.groupBy.] as [GroupByQueryConfig{defaultStrategy='v2', singleThreaded=false, maxIntermediateRows=50000, maxResults=500000, bufferGrouperMaxSize=2147483647, bufferGrouperMaxLoadFactor=0.0, bufferGrouperInitialBuckets=0, maxMergingDictionarySize=100000000, maxOnDiskStorage=0, forcePushDownLimit=false, forceHashAggregation=false}]

2018-02-07T09:38:01,672 INFO [main] io.druid.guice.JsonConfigurator - Loaded class[interface io.druid.server.log.RequestLoggerProvider] from props[druid.request.logging.] as [io.druid.server.log.NoopRequestLoggerProvider@72c4a3aa]

2018-02-07T09:38:01,673 ERROR [main] io.druid.cli.CliPeon - Error when starting up. Failing.

com.google.inject.ProvisionException: Unable to provision, see the following errors:

  1. Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[3,817,865,216], memoryNeeded[41,875,931,136] = druid.processing.buffer.sizeBytes[1,073,741,824] * (druid.processing.numMergeBuffers[7] + druid.processing.numThreads[31] + 1)

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:110) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.DruidProcessingModule)

while locating io.druid.collections.NonBlockingPool<java.nio.ByteBuffer> annotated with @io.druid.guice.annotations.Global()

for the 2nd parameter of io.druid.query.groupby.GroupByQueryEngine.<init>(GroupByQueryEngine.java:81)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:88) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.QueryRunnerFactoryModule)

while locating io.druid.query.groupby.GroupByQueryEngine

for the 2nd parameter of io.druid.query.groupby.strategy.GroupByStrategyV1.<init>(GroupByStrategyV1.java:77)

while locating io.druid.query.groupby.strategy.GroupByStrategyV1

for the 2nd parameter of io.druid.query.groupby.strategy.GroupByStrategySelector.<init>(GroupByStrategySelector.java:43)

while locating io.druid.query.groupby.strategy.GroupByStrategySelector

for the 1st parameter of io.druid.query.groupby.GroupByQueryQueryToolChest.<init>(GroupByQueryQueryToolChest.java:104)

at io.druid.guice.QueryToolChestModule.configure(QueryToolChestModule.java:95) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.QueryRunnerFactoryModule)

while locating io.druid.query.groupby.GroupByQueryQueryToolChest

while locating io.druid.query.QueryToolChest annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=78, type=MAPBINDER, keyType=java.lang.Class<? extends io.druid.query.Query>)

at io.druid.guice.DruidBinders.queryToolChestBinder(DruidBinders.java:45) (via modules: com.google.inject.util.Modules$OverrideModule -> com.google.inject.util.Modules$OverrideModule -> io.druid.guice.QueryRunnerFactoryModule -> com.google.inject.multibindings.MapBinder$RealMapBinder)

while locating java.util.Map<java.lang.Class<? extends io.druid.query.Query>, io.druid.query.QueryToolChest>

for the 1st parameter of io.druid.query.MapQueryToolChestWarehouse.<init>(MapQueryToolChestWarehouse.java:36)

while locating io.druid.query.MapQueryToolChestWarehouse

while locating io.druid.query.QueryToolChestWarehouse

Can anyone give me a hint?

Can you share your middleManager runtime.properties file?

Hi,

Try setting druid.processing.numThreads=2 in your middleManager runtime.properties. This is the default, and it should fix your issue; I suspect you have some non-default configuration. Please also double-check whether you're editing "conf" (the cluster mode config) or "conf-quickstart" (the quickstart, single-machine config).
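For reference, the peon section of a middleManager runtime.properties along those lines might look roughly like this. The -Xmx and MaxDirectMemorySize values below are only illustrative, not prescriptive; size them for your own hardware, and pinning numMergeBuffers explicitly is optional:

# Processing threads and buffers on Peons; the fork.property prefix forwards these to each peon
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=268435456
druid.indexer.fork.property.druid.processing.numThreads=2
druid.indexer.fork.property.druid.processing.numMergeBuffers=2

# Task launch parameters; MaxDirectMemorySize must cover sizeBytes * (numMergeBuffers + numThreads + 1)
druid.indexer.runner.javaOpts=-server -Xmx3g -XX:MaxDirectMemorySize=2g -Duser.timezone=UTC -Dfile.encoding=UTF-8

With those numbers each peon would need 268,435,456 * (2 + 2 + 1) = 1,342,177,280 bytes of direct memory, which fits within the 2 GB limit.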

druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx40g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

#druid.indexer.runner.javaOpts=-server -Xmx40g -XX:MaxDirectMemorySize=10G -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

druid.indexer.task.baseTaskDir=var/druid/task

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers on Peons
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=268435456
druid.indexer.fork.property.druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=hdfs://user/hadoop/druid/tmp/druid-indexing
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.7.3"]

That is what my config file looks like.

That setting does not exist in my config file, but these two do:

druid.indexer.fork.property.druid.processing.buffer.sizeBytes=268435456

druid.indexer.fork.property.druid.processing.numThreads=2

You mean I should add a druid.processing.numThreads=2 setting, right?

Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, druid.processing.numThreads, or druid.processing.numMergeBuffers: maxDirectMemory[3,817,865,216], memoryNeeded[41,875,931,136] = druid.processing.buffer.sizeBytes[1,073,741,824] * (druid.processing.numMergeBuffers[7] + druid.processing.numThreads[31] + 1)

What really confuses me is that no change I make to the runtime.properties files under druid-0.11.0/conf/druid/ seems to apply at all.

druid.processing.buffer.sizeBytes[1,073,741,824]: I have already set this to 256 MB; I used sed to replace every occurrence of this parameter.

druid.processing.numThreads[31] + 1: this is because the machine has 32 cores.
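If I understand the defaults correctly (druid.processing.numThreads defaults to cores - 1, and druid.processing.numMergeBuffers to max(2, numThreads / 4) = max(2, 31 / 4) = 7), both values in the error message are exactly the defaults for a 32-core machine, which suggests the peon never saw my overrides at all. Had they applied, the requirement would only be 268,435,456 * (2 + 2 + 1) = 1,342,177,280 bytes (about 1.25 GB), well under maxDirectMemory[3,817,865,216].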

I have a 5-node cluster.

What exactly fixed this problem?