Batch Hadoop ingestion with inputSpec 'granularity': input path structure is slightly different and throws 'input not found'

Hi,

Trying to ingest data into Druid, but I'm getting the error: java.io.IOException: No input paths specified in job

While going through the documentation, I realised the default folder structure y=XXXX/m=XX/d=XX/H=XX is used everywhere. However, my folder structure is slightly different, and when I tried it the job fails with the error above.

"inputSpec" : {
  "type" : "granularity",
  "inputFormat" : "org.apache.orc.mapreduce.OrcInputFormat",
  "dataGranularity" : "MINUTE",
  "inputPath" : "hdfs://server-ip:8020/path_before_partitions/",
  "pathFormat" : "'partition_year'=yyyy/'partition_month'=MM/'partition_day'=dd/'partition_hour'=HH/'partition_minute'=mm/",
  "filePattern" : ".*"
}

The data is laid out like this, e.g. /path_before_partitions/partition_year=2019/partition_month=11/partition_day=28/partition_hour=14/partition_minute=3/part-00.orc

I've been able to ingest a single partition using "type" : "static", so I'm sure it's not ORC related.
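
For reference, the static spec that worked was roughly the following (the partition path is just an illustration, pointing at one of the directories above):

"inputSpec" : {
  "type" : "static",
  "inputFormat" : "org.apache.orc.mapreduce.OrcInputFormat",
  "paths" : "hdfs://server-ip:8020/path_before_partitions/partition_year=2019/partition_month=11/partition_day=28/partition_hour=14/partition_minute=3/part-00.orc"
}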

Any idea how to resolve this, or is changing the folder structure the only solution?

Okay, so I think I've found the reason.
The minute partition is stored as an unpadded integer (0-59) rather than zero-padded (00-59). Because of this, the paths generated from the mm pattern (e.g. partition_minute=03) never match the actual folders (e.g. partition_minute=3). Is there a particular way this can be handled in Druid?
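
If pathFormat is interpreted as a plain Joda-Time pattern (which is how the docs describe it), a single m formats the minute without zero padding, so a pathFormat like the one below might generate paths that match the unpadded folder names. This is a guess I haven't verified end to end; the same single-letter trick (d, H) should apply if any of the other partitions are unpadded too:

"pathFormat" : "'partition_year'=yyyy/'partition_month'=MM/'partition_day'=dd/'partition_hour'=HH/'partition_minute'=m/"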