java.lang.IllegalArgumentException: Parameter 'directory' is not a directory:

Hi All,

We are trying to load a TSV file into a Druid cluster set up with Ambari. Before doing so we did the same on a single node on the local machine and everything worked fine. On the Ambari cluster it gives the error below:

2018-10-14T12:22:12,321 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[IndexTask{id=index_etlranker_2018-10-14T12:22:08.103Z, type=index, dataSource=etlranker}]
java.lang.IllegalArgumentException: Parameter 'directory' is not a directory: /home/druid/DFP_SESReport/2018/04/02
at ~[commons-io-2.5.jar:2.5]
at ~[commons-io-2.5.jar:2.5]
at io.druid.segment.realtime.firehose.LocalFirehoseFactory.initObjects( ~[druid-server-]
at ~[druid-api-]
at ~[druid-api-]
at io.druid.indexing.common.task.IndexTask.determineShardSpecs( ~[druid-indexing-service-]
at ~[druid-indexing-service-]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ [druid-indexing-service-]
at io.druid.indexing.overlord.ThreadPoolTaskRunner$ [druid-indexing-service-]
at [?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_112]
at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_112]
at [?:1.8.0_112]
2018-10-14T12:22:12,323 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_etlranker_2018-10-14T12:22:08.103Z] status changed to [FAILED].
2018-10-14T12:22:12,327 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
"id" : "index_etlranker_2018-10-14T12:22:08.103Z",
"status" : "FAILED",
"duration" : 37
}

Attached are the JSON file and the full error logs. I have spent hours trying to fix this with no luck.

Any help will be appreciated.


Chethan G Puttaswamy

[error_log.txt|attachment](upload://iqDD4aMppWVWdlxflHy6vFqKTP2.txt) (67.4 KB)

[import.json|attachment](upload://oWDl0oddHTiTN5YvKVxPQ6U16xn.json) (2.71 KB)

Looking over the error message and the docs for the local firehose, my best guess is that it has worked locally because the input files were available at: /home/druid/DFP_SESReport/2018/04/02

The local firehose docs are not clear on what "This Firehose can be used to read the data from files on local disk." actually means in a clustered environment.

Without further guidance on where the files should live in a multi-server setup, you could try ensuring that the files are available at /home/druid/DFP_SESReport/2018/04/02 on each server.
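A quick way to verify this on each node is a small shell check (a sketch; the second path is the one from your error, and /tmp is just a control directory that exists on most systems):

```shell
# Report whether a given firehose baseDir actually exists as a directory
# on this node; run it on every node that can pick up the indexing task.
check_basedir() {
  if [ -d "$1" ]; then
    echo "OK: $1 is a directory"
  else
    echo "MISSING: $1 is not a directory"
  fi
}

check_basedir /tmp                                  # control: should exist
check_basedir /home/druid/DFP_SESReport/2018/04/02  # path from the error
```

If any node prints MISSING, a task scheduled on that node will fail with exactly the IllegalArgumentException above.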

Note that the docs also state that "local" mode "…can be used for POCs to ingest data on disk", leading me to think it may not be suitable for a clustered/production environment.


I fixed this. Place the data files on the data node machine(s) and point the firehose at that directory as below. Make sure the "baseDir" path /home/druidadmin/druiddata/ exists on the data node(s).

"ioConfig": {
  "type": "index",
  "firehose": {
    "type": "local",
    "baseDir": "/home/druidadmin/druiddata/",
    "filter": "*.csv"
  }
}
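For reference, a minimal sketch of preparing such a baseDir before submitting the task (using /tmp/druiddata here to stand in for /home/druidadmin/druiddata on a real data node):

```shell
# Create the baseDir and drop in a file matching the "filter" glob;
# /tmp/druiddata stands in for the real data-node path in this sketch.
BASE_DIR=/tmp/druiddata
mkdir -p "$BASE_DIR"
touch "$BASE_DIR/sample.csv"
# The local firehose lists files under baseDir matching the filter:
ls "$BASE_DIR"/*.csv
```

With the directory and matching files in place on the node running the task, the "Parameter 'directory' is not a directory" error should no longer occur.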