Error Using flattenSpec in Druid Index File

Hi,

I am using Druid to read a nested JSON file and ingest its data.

My input data file is as follows:

{
  "channel": {
    "id": 154734,
    "name": "ABC",
    "description": "ABC",
    "latitude": "19.1752112",
    "longitude": "72.83245",
    "field1": "Mumbai-CO2",
    "field2": "Mumbai-Tempretaure",
    "field3": "Mumbai-Hunidity",
    "field4": "Mumbai-Air Pressure",
    "created_at": "2016-09-06T05:09:19Z",
    "updated_at": "2016-09-07T06:49:18Z",
    "last_entry_id": 749
  },
  "feeds": [
    { "created_at": "2016-09-07T03:27:36Z", "entry_id": 650, "field1": "476", "field2": "30.6", "field3": "68.4", "field4": "1007.34" },
    { "created_at": "2016-09-07T03:29:42Z", "entry_id": 651, "field1": "472", "field2": "30.6", "field3": "68.5", "field4": "1007.29" },
    { "created_at": "2016-09-07T03:31:43Z", "entry_id": 652, "field1": "469", "field2": "30.6", "field3": "68.8", "field4": "1007.32" },
    { "created_at": "2016-09-07T03:33:49Z", "entry_id": 653, "field1": "469", "field2": "30.6", "field3": "68.8", "field4": "1007.3" }
  ]
}

I followed the link below to create the flattenSpec:

http://druid.io/docs/latest/ingestion/flatten-json.html

Below is the index file I am using to ingest the data into Druid:

{
  "type" : "index_hadoop",
  "spec" : {
    "ioConfig" : {
      "type" : "hadoop",
      "inputSpec" : {
        "type" : "static",
        "paths" : "quickstart/feed.json"
      }
    },
    "dataSchema" : {
      "dataSource" : "feed2",
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "day",
        "queryGranularity" : "none",
        "intervals" : ["2015-06-01/2016-10-02"]
      },
      "parser" : {
        "type" : "string",
        "parseSpec" : {
          "format" : "json",
          "flattenSpec" : {
            "useFieldDiscovery" : true,
            "fields" : [
              { "type" : "path", "name" : "created_at", "expr" : "$.feeds[0].created_at" },
              { "type" : "path", "name" : "entry_id", "expr" : "$.feeds[0].entry_id" },
              { "type" : "path", "name" : "Co2", "expr" : "$.feeds[0].field1" },
              { "type" : "path", "name" : "Temperature", "expr" : "$.feeds[0].field2" },
              { "type" : "path", "name" : "Humidity", "expr" : "$.feeds[0].field3" },
              { "type" : "path", "name" : "Air Pressure", "expr" : "$.feeds[0].field4" }
            ]
          },
          "dimensionsSpec" : {
            "dimensions" : [
              "entry_id",
              "created_at"
            ]
          },
          "timestampSpec" : {
            "format" : "auto",
            "column" : "created_at"
          }
        }
      },
      "metricsSpec" : [
        { "name" : "count", "type" : "count" },
        { "name" : "user_unique", "type" : "hyperUnique", "fieldName" : "entry_id" },
        { "name" : "Co2_sum", "type" : "doubleSum", "fieldName" : "Co2" },
        { "name" : "Temperature_sum", "type" : "doubleSum", "fieldName" : "Temperature" },
        { "name" : "Humidity_sum", "type" : "doubleSum", "fieldName" : "Humidity" },
        { "name" : "Air_Pressure_sum", "type" : "longSum", "fieldName" : "Air Pressure" }
      ]
    },
    "tuningConfig" : {
      "type" : "hadoop",
      "partitionsSpec" : {
        "type" : "hashed",
        "targetPartitionSize" : 5000000
      },
      "jobProperties" : {}
    }
  }
}

The problem is that the index file above ingests only the first feed entry from the data file, but I need all of the entries. Each input JSON object produces one Druid row, and my path expressions all point at feeds[0], so only the first element of the feeds array is ever read. I also tried using * in the flattenSpec instead of the index, but that did not work.
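One idea I am considering is pre-flattening the file before ingestion, since flattenSpec emits one row per input record and cannot explode an array into multiple rows. A rough sketch (my own pre-processing idea, not from the Druid docs; the output field names just mirror the ones in my flattenSpec):

```python
import json

def explode_feeds(record):
    """Yield one flat dict per element of record['feeds'],
    renaming field1..field4 to the sensor names used in the spec."""
    for feed in record.get("feeds", []):
        yield {
            "created_at": feed.get("created_at"),
            "entry_id": feed.get("entry_id"),
            "Co2": feed.get("field1"),
            "Temperature": feed.get("field2"),
            "Humidity": feed.get("field3"),
            "Air Pressure": feed.get("field4"),
        }

def flatten_file(src_path, dst_path):
    """Rewrite the nested file as newline-delimited JSON,
    one flat object per feed entry, ready for ingestion."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for row in explode_feeds(json.load(src)):
            dst.write(json.dumps(row) + "\n")
```

With the data exploded like this, the flattenSpec would no longer be needed at all, but I would prefer to do it inside Druid if possible.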

Can you please suggest a way to deal with this problem?