Ingestion speed is too low, and it's not using all of the available resources.

My current ingestion spec is:

```json
{
  "type" : "index_hadoop",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "TestDruidCluster",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "timestampSpec" : {
            "column" : "time",
            "format" : "E MMM dd HH:mm:ss YYYY"
          },
          "dimensionsSpec" : {
            "dimensions" : [],
            "dimensionExclusions" : [ "rtime" ]
          },
          "format" : "json"
        }
      },
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : [ "2016-01-01/2016-09-01" ]
      },
      "metricsSpec" : [
        { "type" : "count", "name" : "count" },
        { "type" : "doubleSum", "name" : "reqsize", "fieldName" : "reqsize" },
        { "type" : "doubleSum", "name" : "respsize", "fieldName" : "respsize" },
        { "type" : "doubleSum", "name" : "resphdrsize", "fieldName" : "respsize" },
        { "type" : "doubleSum", "name" : "reqhdrsize", "fieldName" : "respsize" }
      ]
    },
    "tuningConfig" : {
      "type" : "hadoop",
      "partitionsSpec" : {
        "type" : "hashed",
        "numShards" : 21
      }
    },
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "/sc/remote/data/data.1.sample"
      }
    }
  }
}
```

My cluster comprises two boxes, each with 32 GB of RAM and 16 cores, on which MiddleManagers are running with 7 worker threads. When I run the ingestion task, Druid uses only one of the available threads, and it takes 180 seconds to index 10,000 docs, an ingestion rate of roughly 55 docs per second. What can I do to improve this ingestion rate?

Are you using a local Hadoop cluster or a remote one?

What version of Druid?

I am using an NFS-based filesystem for deep storage, and I am using Hadoop-based batch file ingestion. My data is stored in flat JSON files.
I am using Druid version 0.9.0.

Are you using a remote Hadoop cluster, or running Hadoop in local mode? If it's local mode (the default if you didn't provide special Hadoop configs), then the job will only take advantage of one CPU per task. The task won't be parallelized unless it runs on a remote Hadoop cluster.
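To point the job at a remote cluster, the usual approach is to put the cluster's `core-site.xml`, `hdfs-site.xml`, `yarn-site.xml`, and `mapred-site.xml` on the Druid processes' classpath, and/or pass Hadoop settings via `jobProperties` in the `tuningConfig`. A minimal sketch of the latter, assuming a YARN cluster (the ResourceManager hostname below is a placeholder you'd replace with your own):

```json
"tuningConfig" : {
  "type" : "hadoop",
  "partitionsSpec" : { "type" : "hashed", "numShards" : 21 },
  "jobProperties" : {
    "mapreduce.framework.name" : "yarn",
    "yarn.resourcemanager.hostname" : "your-resourcemanager-host"
  }
}
```

With the framework set to `yarn` (rather than the default `local`), the index job is split into map tasks that can run in parallel across the cluster instead of on a single CPU.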