Error starting broker/historical on single node (12G Max RAM)

I want to run the broker, coordinator, historical, and overlord nodes on a single machine with 12G of memory.

  1. What should druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.99"] be for Hadoop 2.4.1? My current guess is shown below.
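I am assuming the HDFS extension needs a matching hadoop-client artifact pulled in next to it, along these lines (the hadoop-client coordinate and version pairing are my assumption, not something I have verified):

druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.99","org.apache.hadoop:hadoop-client:2.4.1"]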

  2. When I start the broker with

#druid.processing.buffer.sizeBytes=134217728
#druid.processing.numThreads=1

druid.host=druidclient-1082475.phx01.eaz.ebayc3.com
druid.service=broker
druid.port=8080
druid.zk.service.host=druidclient-1082475.phx01.eaz.ebayc3.com

# Disable query chunking
druid.query.chunkPeriod=P10Y
druid.query.topN.chunkPeriod=P10Y

#druid.processing.buffer.sizeBytes=100000000
#druid.processing.numThreads=1

# Memcached layer
#druid.broker.cache.type=memcached
#druid.broker.cache.hosts=broker-1-287451.slc01.dev.ebayc3.com:11211
#druid.broker.cache.expiration=2147483647
#druid.broker.cache.memcachedPrefix=d1
#druid.broker.http.numConnections=20
#druid.broker.http.readTimeout=PT5M

druid.server.http.numThreads=50
druid.request.logging.type=emitter
druid.request.logging.feed=druid_requests
druid.monitoring.monitors=["com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]

and

java -Xmx5g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath $DRUID_HOME/lib/*:$DRUID_HOME/config/broker/runtime.properties io.druid.cli.Main server broker

it only starts with 5G of heap; with anything less it complains that I should increase -Xmx or direct memory. I want to run Druid on a single node (for now; I only have 12G). What should I reduce so that I can run it with 256MB to 1G of memory? My back-of-the-envelope attempt is below.
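As far as I understand the processing config (my reading of the docs, so treat the exact formula as an assumption), each processing thread pins one off-heap buffer of druid.processing.buffer.sizeBytes, plus one extra buffer for merging, so a node needs roughly (druid.processing.numThreads + 1) * druid.processing.buffer.sizeBytes of direct memory. With my processing lines commented out, the broker falls back to the defaults (a buffer of about 1GB, and numThreads derived from the core count), which would explain why it demands several GB. A sketch of what I think a small-footprint start would look like (the buffer size and direct-memory cap are my guesses, untested):

java -server -Xmx1g -XX:MaxDirectMemorySize=512m \
  -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -Ddruid.processing.buffer.sizeBytes=134217728 \
  -Ddruid.processing.numThreads=1 \
  -classpath "$DRUID_HOME/lib/*:$DRUID_HOME/config/broker" \
  io.druid.cli.Main server broker

With numThreads=1 that is (1 + 1) * 128MB = 256MB of direct memory, which fits under the 512m cap.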

  3. Historical node error:

When I start the historical node with

$ cat config/historical/runtime.properties

druid.host=druidclient-1082475.phx01.eaz.ebayc3.com
druid.port=8081
druid.service=historical
druid.zk.service.host=druidclient-1082475.phx01.eaz.ebayc3.com

druid.db.connector.connectURI=jdbc:mysql://druidclient-1082475.phx01.eaz.ebayc3.com:3306/druid
druid.db.connector.user=druid
druid.db.connector.password=diurd

druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.99"]
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://apollo-phx-nn-ha/tmp/erpstorage

druid.server.maxSize=11000000000
druid.segmentCache.locations=[{"path": "/tmp/druid/indexCache", "maxSize": 11000000000}]
druid.monitoring.monitors=["io.druid.server.metrics.ServerMonitor","com.metamx.metrics.SysMonitor","com.metamx.metrics.JvmMonitor"]

# Change these to make Druid faster
druid.processing.buffer.sizeBytes=512000000
druid.processing.numThreads=7
druid.query.groupBy.maxResults=1000000

and

java -Xmx5g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -classpath $DRUID_HOME/lib/*:$DRUID_HOME/config/historical/runtime.properties io.druid.cli.Main server historical

I see this error:

druid.segmentCache.locations - may not be empty

Any suggestions?
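One more data point while I think about it: by the same buffer arithmetic as above, druid.processing.numThreads=7 with 512MB buffers means the historical alone would want about (7 + 1) * 512MB = 4GB of direct memory on top of its 5G heap, leaving little of the 12G box for the broker, coordinator, and overlord. For a single node I am guessing the historical settings should be scaled down to something like this (all numbers are my guesses, not tested):

druid.processing.buffer.sizeBytes=134217728
druid.processing.numThreads=1
druid.server.maxSize=5000000000
druid.segmentCache.locations=[{"path": "/tmp/druid/indexCache", "maxSize": 5000000000}]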

Regards,

Deepak

Update: I tried again with a smaller configuration:

$ cat config/broker/runtime.properties

-server
-Xmx256m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.processing.buffer.sizeBytes=107374182
druid.processing.numThreads=1

[dvasthimal@druidclient-1082475 druid]$ java -classpath lib/*:config/broker/runtime.properties io.druid.cli.Main server broker

I still see the same error. It looks like it is unable to read the properties from broker/runtime.properties (even though the file is on the classpath), so it falls back to the default values and therefore needs more memory.
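My current theory (an assumption on my part, not confirmed): runtime.properties is located as a classpath resource, so the classpath entry has to be the directory containing the file rather than the file itself, and lines like -server and -Xmx256m are JVM flags that a properties file cannot carry; they belong on the java command line. The next thing I will try:

java -server -Xmx256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager \
  -classpath "lib/*:config/broker" \
  io.druid.cli.Main server broker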