{"error":"Could not resolve type id 'index_parallel' into a subtype of [simple type, class io.druid

Hi Team,

We are trying to ingest data using the native index task. We were able to ingest data into Druid successfully with the index task, but it ran into issues with large volumes of data, so we decided to try the index_parallel native index task instead.

However, the Overlord API is throwing the error below:

{"error":"Could not resolve type id 'index_parallel' into a subtype of [simple type, class io.druid.indexing.common.task.Task]\n at [Source: HttpInputOverHTTP@270732f1[c=5860,q=1,[0]=EOF,s=STREAM]; line: 1, column: 4]"}

Can someone help us resolve this issue so that we can run index tasks in parallel across multiple MiddleManager nodes?

Here is the ingestion spec we are using:

{
  "type" : "index_parallel",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "Demo",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "tsv",
          "columns" : ["ReportType", "DataSource", "current_date", "Impressions"],
          "delimiter" : "\t",
          "dimensionsSpec" : {
            "dimensions" : [
              "ReportType",
              "DataSource"
            ]
          },
          "timestampSpec" : {
            "column" : "current_date",
            "format" : "yyyy-MM-dd HH:mm:ss.SSS"
          }
        }
      },
      "metricsSpec" : [
        { "type" : "count", "name" : "count" },
        { "type" : "longSum", "name" : "Impressions", "fieldName" : "Impressions" }
      ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "hour",
        "queryGranularity" : "none",
        "intervals" : ["2018-04-01T00:00:00.000/2018-04-01T00:59:59.999"],
        "rollup" : true
      }
    },
    "ioConfig" : {
      "type" : "index_parallel",
      "firehose" : {
        "type" : "local",
        "baseDir" : "/home/ubuntu/process/",
        "filter" : "*.gz"
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index_parallel",
      "targetPartitionSize" : 5000000,
      "maxRowsInMemory" : 1000000,
      "forceExtendableShardSpecs" : true
    }
  }
}
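For reference, this is roughly how we submit the spec to the Overlord. A minimal sketch in Python; the Overlord host and port (localhost:8090 is the default) and the spec.json filename are assumptions:

import json

import requests

# Overlord task endpoint; host and port are assumptions
# (8090 is the default Overlord port).
OVERLORD_TASK_URL = "http://localhost:8090/druid/indexer/v1/task"

# Load the ingestion spec shown above (assumed saved as spec.json).
with open("spec.json") as f:
    spec = json.load(f)

# POST the spec; on success the Overlord replies with the new task id,
# e.g. {"task": "index_parallel_Demo_..."}.
resp = requests.post(OVERLORD_TASK_URL, json=spec)
resp.raise_for_status()
print(resp.json())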

Regards,

Chethan G Puttaswamy

Hey Chethan,

Which version of Druid are you using? The index_parallel task type was introduced in 0.13.0.
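If you're not sure, every Druid process exposes a /status endpoint that reports its version. A quick sketch in Python (the host and port are assumptions; point it at any of your nodes):

import requests

# /status is served by every Druid process and includes the running version.
# Host and port are assumptions; adjust to your deployment.
resp = requests.get("http://localhost:8090/status")
resp.raise_for_status()
print(resp.json()["version"])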

Best regards,

Dylan

Also note that the package path changed from io.druid to org.apache.druid in 0.13, and your error message mentions io.druid, so it seems very likely that an older version is your problem.

Hi Dylan,

Thanks for looking into the issue. The version I am using is lower than 0.13.0, which explains the error.

I will try batch ingestion with Hadoop instead.
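In case it helps anyone else reading this, the rough shape of the index_hadoop task we plan to try is below. A sketch only: the dataSchema would be the same one shown above, and the input path is a placeholder.

import json

# Skeleton of a Hadoop-based batch ingestion task (type "index_hadoop").
# The dataSchema is omitted here; it would be the same as in the
# index_parallel spec above. The input path is a placeholder.
hadoop_task = {
    "type": "index_hadoop",
    "spec": {
        "dataSchema": {},  # same dataSchema as in the spec above
        "ioConfig": {
            "type": "hadoop",
            "inputSpec": {
                "type": "static",
                "paths": "/home/ubuntu/process/*.gz",
            },
        },
        "tuningConfig": {
            "type": "hadoop",
        },
    },
}

# This task is submitted to the Overlord the same way as the
# index_parallel spec earlier in the thread.
print(json.dumps(hadoop_task, indent=2))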

Regards,
Chethan G Puttaswamy