Overlord node: exception when running task

I need some help.

This is the error message from the mapper task:

2015-09-16 15:17:06,198 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : com.metamx.common.RE: Failure on row[{"hits":1,"province":"??","metric":"product.sip.province.hits.bits","bits":1227,"sip":"10.75.12.84","timestamp":"1441836480"}]
	at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:79)
	at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:31)
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
Caused by: java.lang.NullPointerException
	at io.druid.indexer.HadoopDruidIndexerConfig.getBucket(HadoopDruidIndexerConfig.java:342)
	at io.druid.indexer.IndexGeneratorJob$IndexGeneratorMapper.innerMap(IndexGeneratorJob.java:225)
	at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:75)
	... 9 more



spec:
{
  "type" : "index_hadoop",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "www-analyze-test1",
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : {
            "column" : "timestamp",
            "format" : "posix"
          },
          "dimensionsSpec" : {
            "dimensions" : ["province", "metric", "sip"],
            "dimensionExclusions" : [],
            "spatialDimensions" : []
          }
        }
      },
      "metricsSpec" : [{
        "type" : "count",
        "name" : "count"
      }, {
        "type" : "longSum",
        "name" : "hits",
        "fieldName" : "hits"
      }, {
        "type" : "longSum",
        "name" : "bits",
        "fieldName" : "bits"
      }],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "minute",
        "intervals" : ["2015-09-09/2015-09-11"]
      }
    },
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "hdfs://***/user/druid/sparkdata/newtest.log"
      }
    },
    "tuningConfig" : {
      "type" : "hadoop"
    }
  }
}


newtest.log:
{"hits":1,"province":"新疆","metric":"product.sip.province.hits.bits","bits":1227,"sip":"10.75.12.84","timestamp":"1441836480"}
{"hits":2,"province":"新疆","metric":"product.sip.province.hits.bits","bits":1227,"sip":"10.75.12.84","timestamp":"1441836480"}
{"hits":3,"province":"新疆","metric":"product.sip.province.hits.bits","bits":1227,"sip":"10.75.12.84","timestamp":"1441836480"}
{"hits":4,"province":"新疆","metric":"product.sip.province.hits.bits","bits":1227,"sip":"10.75.12.84","timestamp":"1441836480"}

What version of Druid is this?

I am trying to match the code to the line that the exception is being thrown from.

The version is 0.7.0.
I also tried version 0.8.0, but the same error occurred there as well.

On Thursday, September 17, 2015 at 8:21:10 AM UTC+8, Fangjin Yang wrote:

As a very quick check, can you change your interval to end on the 10th and not the 11th? I’m looking at the code to see where the NPE can be thrown.

I tried running this locally with the exact data and the exact configs posted and the task completed successfully for me. Can you try with the latest stable (0.8.1) to see if the problem still exists?

The error was caused by the Hadoop MapReduce configuration. Please refer to http://druid.io/docs/0.7.1.1/Hadoop-Configuration.html and append the following config to mapred-site.xml:

<property>
  <name>mapreduce.map.java.opts</name>
  <value>-server -Xmx1536m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-server -Xmx2560m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps</value>
</property>
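A sketch (an editor's assumption, not stated in the thread) of why `-Duser.timezone=UTC` can matter here: with `segmentGranularity` set to `day`, the same epoch second truncates to different day buckets depending on the JVM's timezone, so a task running in a non-UTC zone (such as UTC+8) may look up a bucket that was computed under UTC, which could plausibly surface as the `NullPointerException` in `getBucket`:

```python
from datetime import datetime, timezone, timedelta

ts = 1441836480  # sample row timestamp from newtest.log

# Day bucket under UTC vs. under UTC+8 (the poster's apparent locale).
utc_day = datetime.fromtimestamp(ts, tz=timezone.utc).date()
cst_day = datetime.fromtimestamp(ts, tz=timezone(timedelta(hours=8))).date()

print(utc_day)  # 2015-09-09
print(cst_day)  # 2015-09-10 -- a different day bucket
```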

On Wednesday, September 16, 2015 at 3:27:35 PM UTC+8, hexi…@gmail.com wrote: