Not enough direct memory exception in Middle Manager

Hi,

I am new to Druid and we are setting up a Druid cluster in production. We set up the cluster with the following software versions:

  1. MySQL 5.5

  2. ZooKeeper 3.4.9

  3. Druid 0.9.1.1

We are running the Historical, Broker, Overlord, and MiddleManager components on separate machines, each with 15 GB of RAM. We are using Hadoop (an internal cluster) as deep storage.

Historical Configurations:

JVM:

-server

-Xmx10g

-Xms2g

-XX:NewSize=1g

-XX:MaxNewSize=3g

-XX:MaxDirectMemorySize=10g

-XX:+UseConcMarkSweepGC

-XX:+PrintGCDetails

-XX:+PrintGCTimeStamps

-Duser.timezone=UTC

-Dfile.encoding=UTF-8

-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

-Djava.io.tmpdir=/mnt/data/druid_test/tmp

runtime.properties:

druid.service=druid/historical

druid.host=10.X.X.X

druid.port=8083

# HTTP server threads

druid.server.http.numThreads=25

# Processing threads and buffers

druid.processing.buffer.sizeBytes=536870912

druid.processing.numThreads=5

# Segment storage

druid.segmentCache.locations=[{"path":"/mnt/data/druid_test/segment-cache","maxSize":13000000000}]

druid.server.maxSize=13000000000
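For reference, checking these Historical values against the direct-memory rule quoted in the error further down (just the arithmetic, assuming the rule buffer.sizeBytes * (numThreads + 1)):

druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1) = 536870912 * (5 + 1) = 3221225472 bytes (~3 GiB)

which fits under -XX:MaxDirectMemorySize=10g, so the Historical itself satisfies the rule; the failure reported below is raised by a peon (CliPeon).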

Middle Manager:

JVM:

-server

-Xmx64m

-Xms64m

-XX:+UseConcMarkSweepGC

-XX:+PrintGCDetails

-XX:MaxDirectMemorySize=7g

-XX:+PrintGCTimeStamps

-Duser.timezone=UTC

-Dfile.encoding=UTF-8

-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

-Djava.io.tmpdir=/mnt/data/druid_test/tmp

runtime.properties:

druid.service=druid/middleManager

druid.host=10.Y.Y.Y

druid.port=8091

# Number of tasks per middleManager

druid.worker.capacity=4

# Task launch parameters

druid.indexer.runner.javaOpts=-server -Xmx3g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

druid.indexer.task.baseTaskDir=/mnt/data/druid_test/task

# HTTP server threads

druid.server.http.numThreads=25

# Processing threads and buffers

druid.processing.buffer.sizeBytes=136870912

druid.processing.numThreads=1

# Hadoop indexing

druid.indexer.task.hadoopWorkingPath=/mnt/data/druid_test/hadoop-tmp

druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]

Overlord:

JVM:

-server

-Xmx4g

-Xms1g

-XX:NewSize=256m

-XX:MaxNewSize=256m

-XX:+UseConcMarkSweepGC

-XX:+PrintGCDetails

-XX:+PrintGCTimeStamps

-Duser.timezone=UTC

-Dfile.encoding=UTF-8

-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

-Djava.io.tmpdir=/mnt/data/druid_test/tmp

runtime.properties:

druid.service=druid/overlord

druid.host=10.X.Y.Z

druid.port=8090

druid.indexer.queue.startDelay=PT30S

druid.indexer.runner.type=remote

druid.indexer.storage.type=metadata

I am trying to ingest the sample data given on the website. We copied the Hadoop configuration files (core-site.xml, etc.) to every machine, and we also copied the sample data to HDFS. Then we tried to ingest the data in batch mode from Hadoop.

We get the error below after submitting the job to the MiddleManager, whatever parameters we tweak.

2016-09-30T17:27:02,970 ERROR [main] io.druid.cli.CliPeon - Error when starting up. Failing.

com.google.inject.ProvisionException: Guice provision errors:

1) Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3,748,659,200], memoryNeeded[6,442,450,944] = druid.processing.buffer.sizeBytes[1,073,741,824] * ( druid.processing.numThreads[5] + 1 )

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

while locating io.druid.collections.StupidPool<java.nio.ByteBuffer> annotated with @io.druid.guice.annotations.Global()

for parameter 1 at io.druid.query.groupby.GroupByQueryEngine.<init>(GroupByQueryEngine.java:79)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:85)

while locating io.druid.query.groupby.GroupByQueryEngine

for parameter 0 at io.druid.query.groupby.GroupByQueryRunnerFactory.<init>(GroupByQueryRunnerFactory.java:64)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:82)

while locating io.druid.query.groupby.GroupByQueryRunnerFactory

while locating io.druid.query.QueryRunnerFactory annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=28, type=MAPBINDER)

at io.druid.guice.DruidBinders.queryRunnerFactoryBinder(DruidBinders.java:38)

while locating java.util.Map<java.lang.Class<? extends io.druid.query.Query>, io.druid.query.QueryRunnerFactory>

for parameter 0 at io.druid.query.DefaultQueryRunnerFactoryConglomerate.<init>(DefaultQueryRunnerFactoryConglomerate.java:36)

while locating io.druid.query.DefaultQueryRunnerFactoryConglomerate

at io.druid.guice.StorageNodeModule.configure(StorageNodeModule.java:55)

while locating io.druid.query.QueryRunnerFactoryConglomerate

for parameter 9 at io.druid.indexing.common.TaskToolboxFactory.<init>(TaskToolboxFactory.java:95)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:153)

while locating io.druid.indexing.common.TaskToolboxFactory

for parameter 0 at io.druid.indexing.overlord.ThreadPoolTaskRunner.<init>(ThreadPoolTaskRunner.java:96)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:180)

while locating io.druid.indexing.overlord.ThreadPoolTaskRunner

while locating io.druid.indexing.overlord.TaskRunner

for parameter 3 at io.druid.indexing.worker.executor.ExecutorLifecycle.<init>(ExecutorLifecycle.java:78)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:170)

while locating io.druid.indexing.worker.executor.ExecutorLifecycle

2) Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3,748,659,200], memoryNeeded[6,442,450,944] = druid.processing.buffer.sizeBytes[1,073,741,824] * ( druid.processing.numThreads[5] + 1 )

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

while locating io.druid.collections.StupidPool<java.nio.ByteBuffer> annotated with @io.druid.guice.annotations.Global()

for parameter 1 at io.druid.query.groupby.GroupByQueryEngine.<init>(GroupByQueryEngine.java:79)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:85)

while locating io.druid.query.groupby.GroupByQueryEngine

for parameter 2 at io.druid.query.groupby.GroupByQueryQueryToolChest.<init>(GroupByQueryQueryToolChest.java:113)

at io.druid.guice.QueryToolChestModule.configure(QueryToolChestModule.java:74)

while locating io.druid.query.groupby.GroupByQueryQueryToolChest

for parameter 3 at io.druid.query.groupby.GroupByQueryRunnerFactory.<init>(GroupByQueryRunnerFactory.java:64)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:82)

while locating io.druid.query.groupby.GroupByQueryRunnerFactory

while locating io.druid.query.QueryRunnerFactory annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=28, type=MAPBINDER)

at io.druid.guice.DruidBinders.queryRunnerFactoryBinder(DruidBinders.java:38)

while locating java.util.Map<java.lang.Class<? extends io.druid.query.Query>, io.druid.query.QueryRunnerFactory>

for parameter 0 at io.druid.query.DefaultQueryRunnerFactoryConglomerate.<init>(DefaultQueryRunnerFactoryConglomerate.java:36)

while locating io.druid.query.DefaultQueryRunnerFactoryConglomerate

at io.druid.guice.StorageNodeModule.configure(StorageNodeModule.java:55)

while locating io.druid.query.QueryRunnerFactoryConglomerate

for parameter 9 at io.druid.indexing.common.TaskToolboxFactory.<init>(TaskToolboxFactory.java:95)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:153)

while locating io.druid.indexing.common.TaskToolboxFactory

for parameter 0 at io.druid.indexing.overlord.ThreadPoolTaskRunner.<init>(ThreadPoolTaskRunner.java:96)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:180)

while locating io.druid.indexing.overlord.ThreadPoolTaskRunner

while locating io.druid.indexing.overlord.TaskRunner

for parameter 3 at io.druid.indexing.worker.executor.ExecutorLifecycle.<init>(ExecutorLifecycle.java:78)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:170)

while locating io.druid.indexing.worker.executor.ExecutorLifecycle

3) Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3,748,659,200], memoryNeeded[6,442,450,944] = druid.processing.buffer.sizeBytes[1,073,741,824] * ( druid.processing.numThreads[5] + 1 )

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

while locating io.druid.collections.StupidPool<java.nio.ByteBuffer> annotated with @io.druid.guice.annotations.Global()

for parameter 3 at io.druid.query.groupby.GroupByQueryQueryToolChest.<init>(GroupByQueryQueryToolChest.java:113)

at io.druid.guice.QueryToolChestModule.configure(QueryToolChestModule.java:74)

while locating io.druid.query.groupby.GroupByQueryQueryToolChest

for parameter 3 at io.druid.query.groupby.GroupByQueryRunnerFactory.<init>(GroupByQueryRunnerFactory.java:64)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:82)

while locating io.druid.query.groupby.GroupByQueryRunnerFactory

while locating io.druid.query.QueryRunnerFactory annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=28, type=MAPBINDER)

at io.druid.guice.DruidBinders.queryRunnerFactoryBinder(DruidBinders.java:38)

while locating java.util.Map<java.lang.Class<? extends io.druid.query.Query>, io.druid.query.QueryRunnerFactory>

for parameter 0 at io.druid.query.DefaultQueryRunnerFactoryConglomerate.<init>(DefaultQueryRunnerFactoryConglomerate.java:36)

while locating io.druid.query.DefaultQueryRunnerFactoryConglomerate

at io.druid.guice.StorageNodeModule.configure(StorageNodeModule.java:55)

while locating io.druid.query.QueryRunnerFactoryConglomerate

for parameter 9 at io.druid.indexing.common.TaskToolboxFactory.<init>(TaskToolboxFactory.java:95)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:153)

while locating io.druid.indexing.common.TaskToolboxFactory

for parameter 0 at io.druid.indexing.overlord.ThreadPoolTaskRunner.<init>(ThreadPoolTaskRunner.java:96)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:180)

while locating io.druid.indexing.overlord.ThreadPoolTaskRunner

while locating io.druid.indexing.overlord.TaskRunner

for parameter 3 at io.druid.indexing.worker.executor.ExecutorLifecycle.<init>(ExecutorLifecycle.java:78)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:170)

while locating io.druid.indexing.worker.executor.ExecutorLifecycle

4) Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3,748,659,200], memoryNeeded[6,442,450,944] = druid.processing.buffer.sizeBytes[1,073,741,824] * ( druid.processing.numThreads[5] + 1 )

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

at io.druid.guice.DruidProcessingModule.getIntermediateResultsPool(DruidProcessingModule.java:108)

while locating io.druid.collections.StupidPool<java.nio.ByteBuffer> annotated with @io.druid.guice.annotations.Global()

for parameter 4 at io.druid.query.groupby.GroupByQueryRunnerFactory.<init>(GroupByQueryRunnerFactory.java:64)

at io.druid.guice.QueryRunnerFactoryModule.configure(QueryRunnerFactoryModule.java:82)

while locating io.druid.query.groupby.GroupByQueryRunnerFactory

while locating io.druid.query.QueryRunnerFactory annotated with @com.google.inject.multibindings.Element(setName=,uniqueId=28, type=MAPBINDER)

at io.druid.guice.DruidBinders.queryRunnerFactoryBinder(DruidBinders.java:38)

while locating java.util.Map<java.lang.Class<? extends io.druid.query.Query>, io.druid.query.QueryRunnerFactory>

for parameter 0 at io.druid.query.DefaultQueryRunnerFactoryConglomerate.<init>(DefaultQueryRunnerFactoryConglomerate.java:36)

while locating io.druid.query.DefaultQueryRunnerFactoryConglomerate

at io.druid.guice.StorageNodeModule.configure(StorageNodeModule.java:55)

while locating io.druid.query.QueryRunnerFactoryConglomerate

for parameter 9 at io.druid.indexing.common.TaskToolboxFactory.<init>(TaskToolboxFactory.java:95)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:153)

while locating io.druid.indexing.common.TaskToolboxFactory

for parameter 0 at io.druid.indexing.overlord.ThreadPoolTaskRunner.<init>(ThreadPoolTaskRunner.java:96)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:180)

while locating io.druid.indexing.overlord.ThreadPoolTaskRunner

while locating io.druid.indexing.overlord.TaskRunner

for parameter 3 at io.druid.indexing.worker.executor.ExecutorLifecycle.<init>(ExecutorLifecycle.java:78)

at io.druid.cli.CliPeon$1.configure(CliPeon.java:170)

while locating io.druid.indexing.worker.executor.ExecutorLifecycle

4 errors

at com.google.inject.internal.InjectorImpl$3.get(InjectorImpl.java:1014) ~[guice-4.0-beta.jar:?]

at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1036) ~[guice-4.0-beta.jar:?]

at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:153) ~[druid-api-0.9.1.1.jar:0.9.1.1]

at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:91) [druid-services-0.9.1.1.jar:0.9.1.1]

at io.druid.cli.CliPeon.run(CliPeon.java:274) [druid-services-0.9.1.1.jar:0.9.1.1]

at io.druid.cli.Main.main(Main.java:105) [druid-services-0.9.1.1.jar:0.9.1.1]

Please help me resolve this error and suggest the best configuration parameters for Hadoop integration with Druid. Thanks in advance.

Regards,

V Santhosh Kumar Tangudu

You need to increase -XX:MaxDirectMemorySize to at least druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1).
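Plugging in the numbers from the stack trace (just the arithmetic; note that the buffer size and thread count shown in the error are the peon's effective values, not the ones in the middleManager runtime.properties):

memoryNeeded = druid.processing.buffer.sizeBytes * (druid.processing.numThreads + 1)
             = 1073741824 * (5 + 1)
             = 6442450944 bytes (6 GiB)

maxDirectMemory = 3748659200 bytes (~3.5 GiB), which is not enough

So the process that logs this error either needs -XX:MaxDirectMemorySize of at least 6g, or a smaller processing buffer / thread count.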

We understand that from the exception message. We tried changing the config to satisfy the criterion described in the exception, but it's just not working. Please take a look at our configuration and suggest which component we need to change.

Thanks for the quick reply.


>> 2) Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.processing.buffer.sizeBytes, or druid.processing.numThreads: maxDirectMemory[3,748,659,200], memoryNeeded[6,442,450,944] = druid.processing.buffer.sizeBytes[1,073,741,824] * ( druid.processing.numThreads[5] + 1 )

You will need to configure a correct -XX:MaxDirectMemorySize in druid.indexer.runner.javaOpts for the peons.
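A minimal sketch of what that could look like in the middleManager runtime.properties (the sizes are illustrative only, and they assume the peons' processing buffer and thread count are brought down as well, as discussed in the next reply):

druid.indexer.runner.javaOpts=-server -Xmx2g -XX:MaxDirectMemorySize=1g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

With druid.worker.capacity=4 on a 15 GB machine, that budgets roughly 4 * (2 GB heap + 1 GB direct) = 12 GB for peons, leaving headroom for the MiddleManager process and the OS; raising -XX:MaxDirectMemorySize to the 6 GB the defaults demand would not fit.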

We tried tweaking the peon parameters as well, but it didn't work for us. Can you please suggest the right configuration settings after looking at our configuration?

Hi,
In the configs mentioned in this thread, the middleManager is set to druid.processing.numThreads=1, which should be passed to the peons. The logs in your case seem to indicate the peons are running with 5 processing threads.

Could you attach the middleManager and common runtime.properties you are trying to run with?
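One way to forward those values to the peons explicitly is the druid.indexer.fork.property. prefix in the middleManager runtime.properties (Druid strips the prefix and passes the remainder to each forked peon); the numbers below are only a sketch:

druid.indexer.fork.property.druid.processing.numThreads=1
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=134217728

With those values each peon needs 134217728 * (1 + 1) = 268435456 bytes (256 MB) of direct memory, which fits comfortably under a modest -XX:MaxDirectMemorySize in druid.indexer.runner.javaOpts.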

Hi Nishant,

I already posted the middleManager properties above. The common.runtime.properties are:

druid.extensions.loadList=["druid-hdfs-storage", "mysql-metadata-storage"]

druid.startup.logging.logProperties=true

druid.zk.service.host=10.A.B.C,10.D.E.F,10.G.H.I

druid.zk.paths.base=/druid

druid.metadata.storage.type=mysql

druid.metadata.storage.connector.connectURI=jdbc:mysql://10.Y.Z.A:3306/druid

druid.metadata.storage.connector.user=root

druid.metadata.storage.connector.password=root

druid.storage.type=hdfs

druid.storage.storageDirectory=/test/druid_test

druid.indexer.logs.type=file

druid.indexer.logs.directory=var/druid/indexing-logs

druid.selectors.indexing.serviceName=druid/overlord

druid.selectors.coordinator.serviceName=druid/coordinator

druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]

druid.emitter=logging

druid.emitter.logging.logLevel=info

Setting up these properties became very difficult for us, so I deployed the Imply package, which is a wrapper around Druid. It is much easier to set up and get working.

Thanks for your help.