Error: io.druid.java.util.common.ISE: WTF?! No bucket found for timestamp

I am encountering the error below after changing queryGranularity from none to all. The same ingestion spec works just fine with queryGranularity=none, but with queryGranularity=all it fails during the MapReduce job with the error shown below.

Any input you can give me would be much appreciated. Thanks.

Here is the ingestion spec, with some parts not relevant to this thread removed:

{
  "type": "index_hadoop",
  "spec": {
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": {
        "type": "granularity",
        "dataGranularity": "HOUR",
        "inputPath": "******* removed for posting in the user group ****",
        "filePattern": ".*\\.gz",
        "pathFormat": "'y'=yyyy/'m'=MM/'d'=dd/'h'=HH/"
      }
    },
    "dataSchema": {
      "dataSource": "test_check_v14",
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": {
          "type": "period",
          "period": "P1D",
          "timeZone": "America/Phoenix"
        },
        "queryGranularity": "all",
        "intervals": [
          "2018-06-02T00:00-07:00/2018-06-03T00:00-07:00"
        ]
      },
      "parser": {"******* removed for posting in the user group ****"},
      "metricsSpec": ["******* removed for posting in the user group ****"],
      "transformSpec": {"******* removed for posting in the user group ****"}
    },
    "tuningConfig": {
      "type": "hadoop",
      "partitionsSpec": {
        "type": "dimension",
        "targetPartitionSize": 5000000,
        "partitionDimension": "******* removed for posting in the user group ****"
      },
      "ignoreInvalidRows": "false",
      "jobProperties": {
        "mapreduce.job.classloader": "true",
        "mapreduce.reduce.memory.mb": 8192
      }
    }
  }
}

Error:

2019-08-19T22:29:29,132 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Task Id : attempt_1564184123553_67028_m_000000_0, Status : FAILED
Error: io.druid.java.util.common.ISE: WTF?! No bucket found for timestamp: -146136543-09-08T08:23:32.096Z
at io.druid.indexer.DeterminePartitionsJob$DeterminePartitionsDimSelectionMapperHelper.emitDimValueCounts(DeterminePartitionsJob.java:400)
at io.druid.indexer.DeterminePartitionsJob$DeterminePartitionsDimSelectionPostGroupByMapper.map(DeterminePartitionsJob.java:329)
at io.druid.indexer.DeterminePartitionsJob$DeterminePartitionsDimSelectionPostGroupByMapper.map(DeterminePartitionsJob.java:305)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

Try setting segmentGranularity to DAY and queryGranularity to HOUR. I hope this works.
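If I am reading the suggestion right, the granularitySpec would look something like this (a sketch only; it keeps the period-style day granularity and interval from your original spec, and the rest of the spec stays unchanged):

```json
{
  "type": "uniform",
  "segmentGranularity": {
    "type": "period",
    "period": "P1D",
    "timeZone": "America/Phoenix"
  },
  "queryGranularity": "hour",
  "intervals": [
    "2018-06-02T00:00-07:00/2018-06-03T00:00-07:00"
  ]
}
```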

Hi Christopher, setting queryGranularity to all during ingestion does not really make sense either. Can you share with us why you changed it from none to all? What are you trying to achieve?
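For what it's worth, the odd timestamp in the stack trace (-146136543-09-08T08:23:32.096Z) looks like Druid's internal minimum instant sentinel (Long.MIN_VALUE / 2, in milliseconds): with queryGranularity "all", the determine-partitions phase appears to truncate every row's timestamp down to that sentinel, which falls outside your one-day interval, so no bucket matches. A rough sanity check of the year (a sketch; the constant name just mirrors Druid's JodaUtils.MIN_INSTANT):

```python
# Druid's JodaUtils.MIN_INSTANT sentinel: Long.MIN_VALUE / 2, in milliseconds.
MIN_INSTANT_MS = -(2**63) // 2  # -4611686018427387904

# Convert milliseconds to an approximate (proleptic) year count.
MS_PER_YEAR = 365.2425 * 24 * 60 * 60 * 1000
approx_year = MIN_INSTANT_MS / MS_PER_YEAR

print(int(approx_year))  # in the neighborhood of -146,000,000, like the year in the error
```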

Thanks