Ingestion task fails with RuntimeException during BUILD_SEGMENTS phase


I have started using Druid in Docker, following the 'quickstart' approach described in the tutorials here -

I am using a local file to ingest data as described in the tutorial, but the ingestion status shows Failed after a few seconds.

ERROR [task-runner-0-priority-0] org.apache.druid.indexing.common.task.IndexTask - Encountered exception in BUILD_SEGMENTS.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: Invalid argument
at org.apache.druid.indexing.common.task.IndexTask.generateAndPublishSegments( ~[druid-indexing-service-0.17.0.jar:0.17.0]

Here is the log -

The task I submitted looked like this -

{
  "type": "index_parallel",
  "ioConfig": {
    "type": "index_parallel",
    "inputSource": {
      "type": "local",
      "baseDir": "quickstart/tutorial/",
      "filter": "wikiticker-2015-09-12-sampled.json.gz"
    },
    "inputFormat": {
      "type": "json"
    }
  },
  "tuningConfig": {
    "type": "index_parallel",
    "partitionsSpec": {
      "type": "dynamic"
    }
  },
  "dataSchema": {
    "dataSource": "wikipedia",
    "granularitySpec": {
      "type": "uniform",
      "queryGranularity": "NONE",
      "rollup": false,
      "segmentGranularity": "DAY"
    },
    "timestampSpec": {
      "column": "time",
      "format": "iso"
    },
    "dimensionsSpec": {
      "dimensions": [
        { "type": "long", "name": "added" },
        { "type": "long", "name": "deleted" },
        { "type": "long", "name": "delta" }
      ]
    }
  }
}
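One quick sanity check worth doing: task specs pasted through rich-text editors sometimes pick up curly "smart" quotes, which are not valid JSON and will be rejected before Druid parses the spec. A minimal way to rule that out (plain Python, nothing Druid-specific):

```python
import json

# A fragment containing curly quotes (a common paste artifact) is not valid JSON.
curly = '{\u201ctype\u201d: \u201cindex_parallel\u201d}'
try:
    json.loads(curly)
except json.JSONDecodeError:
    print("curly quotes: not valid JSON")

# The same fragment with plain ASCII double quotes parses fine.
plain = '{"type": "index_parallel"}'
print(json.loads(plain))  # {'type': 'index_parallel'}
```

Since your task reached the BUILD_SEGMENTS phase, the spec evidently parsed, so this is probably just a paste artifact in the post itself; still, validating the file with `json.loads` before submitting is a cheap check.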

I am not sure what I am missing here.

Any help is much appreciated. Thanks

Regards, Neha

What do you use for storage? Local storage, AWS? It could be that you cannot commit your segments and the task runs out of time while looping.
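If it helps to check: with the quickstart, deep storage defaults to local disk, configured in `common.runtime.properties`. A sketch of the relevant settings (these are the defaults shipped with the quickstart; the paths may differ in your Docker setup, so verify the directory exists and is writable inside the container):

```properties
# Deep storage type: local disk (alternatives include "s3", "hdfs", etc.)
druid.storage.type=local
# Directory where committed segments are written
druid.storage.storageDirectory=var/druid/segments
```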