Batch Hadoop ingestion with inputSpec 'granularity': input path structure is a little different and throws "input not found"


I'm trying to ingest data into Druid but getting the error `No input paths specified in job`.

As I was going through the documentation, I realised that the default folder structure `y=XXXX/m=XX/d=XX/H=XX` is used everywhere. However, my folder structure is a little different, and when I tried it I got the error above.

```json
"inputSpec" : {
  "type" : "granularity",
  "inputFormat" : "org.apache.orc.mapreduce.OrcInputFormat",
  "inputPath" : "hdfs://server-ip:8020/path_before_partitions/"
}
```
The data is laid out as, e.g., `/path_before_partitions/partition_year=2019/partition_month=11/partition_day=28/partition_hour=14/partition_minute=3/part-00.orc`

I've been able to ingest a single partition using `"type" : "static"`, so I'm sure it's not ORC-related.

Any idea how to resolve this, or is changing the folder structure the only solution?
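For what it's worth, the `granularity` inputSpec also supports a `pathFormat` property: a Joda-Time format string (defaulting to `'y'=yyyy/'m'=MM/'d'=dd/'H'=HH`) that describes the directory layout, plus a `dataGranularity` that controls how deep the buckets go. A sketch for the layout above might look like this; the exact pattern (and whether your months/days/hours are zero-padded) would need checking against your actual directories:

```json
"inputSpec" : {
  "type" : "granularity",
  "dataGranularity" : "minute",
  "inputFormat" : "org.apache.orc.mapreduce.OrcInputFormat",
  "inputPath" : "hdfs://server-ip:8020/path_before_partitions",
  "pathFormat" : "'partition_year'=yyyy/'partition_month'=MM/'partition_day'=dd/'partition_hour'=HH/'partition_minute'=mm"
}
```

Note the literal directory-name prefixes are quoted with single quotes, as Joda-Time requires, while the unquoted letters (`yyyy`, `MM`, etc.) are interpreted as date fields.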

Okay, so I think I've found the reason.
The minute is being stored as an unpadded integer (0-59) rather than zero-padded (00-59). Because of this, Druid is unable to find the folders. Is there any particular way this can be handled in Druid?
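If the `granularity` inputSpec's `pathFormat` property is in play, this may be fixable without renaming folders: in Joda-Time patterns a single letter (e.g. `m`) prints the minimum number of digits, so it produces `3` rather than `03`, while doubled letters (`mm`) zero-pad. A hedged sketch for an entirely unpadded layout (adjust each field to match however your directories are actually padded):

```json
"pathFormat" : "'partition_year'=yyyy/'partition_month'=M/'partition_day'=d/'partition_hour'=H/'partition_minute'=m"
```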