Tasks Submitted to Indexer Failing; Cannot Find Logs


I am working on a 3-node Druid (v0.12.0) cluster with PostgreSQL as the metadata store and HDFS as deep storage, as part of a POC to test migration of segments from one cluster to another.

I am currently facing an issue when executing the commands below against the indexer.

curl -X 'POST' -H 'Content-Type:application/json' -d @wikiticker-index-modified.json <OVERLORD/COORDINATOR IP>:8090/druid/indexer/v1/task

curl -X 'POST' -H 'Content-Type:application/json' -d @kill-datasource.json <OVERLORD/COORDINATOR IP>:8090/druid/indexer/v1/task

The tasks can be seen being created, but each one fails. The JSON files are attached below:

wikiticker-index-modified.json - a copy of quickstart/wikiticker-index.json with only the "dataSource" and "paths" fields modified.

kill-datasource.json - kills the existing "wikiticker" datasource. Note that the datasource is already disabled.
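For reference, a kill task spec generally looks like the following (a sketch; the interval shown is an assumption, not the value in the attached file):

```json
{
  "type": "kill",
  "dataSource": "wikiticker",
  "interval": "2015-09-12/2015-09-13"
}
```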

I can view all the tasks corresponding to the above commands in the "druid_tasks" table in PostgreSQL, with the "status_payload" column indicating 'FAILED'.
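Rather than querying the metadata store directly, the overlord can also report a task's status over HTTP (a sketch; the task ID shown is hypothetical and should be replaced with the ID returned at submission time):

```shell
# Ask the overlord for the status of a specific task
curl -X 'GET' <OVERLORD/COORDINATOR IP>:8090/druid/indexer/v1/task/index_wikiticker_2018-01-01T00:00:00.000Z/status
```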

The following is set in common.runtime.properties, but no logs are being generated in the specified HDFS location:
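A standard HDFS task-log configuration looks like this (a sketch; the property values are assumed from the /druid/indexing-logs path mentioned later in the thread):

```properties
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
```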



The table "druid_tasklogs" is empty.

The issue sounds similar to:


I can successfully run some other commands, such as:

curl -X 'GET' <OVERLORD/COORDINATOR IP>:8081/druid/coordinator/v1/datasources

curl -X 'DELETE' <OVERLORD/COORDINATOR IP>:8081/druid/coordinator/v1/datasources/wikiticker

Please advise.

Thank you,


kill-datasource.json (93 Bytes)

wikiticker-index-modified.json (1.95 KB)


Do you see any errors in the MiddleManager logs?



Hello Atul,

As mentioned, I don't see any logs being created under the /druid/indexing-logs directory in HDFS. Where should I look for the MiddleManager logs?



Found out that the /druid/indexing-logs files were not being written due to a permissions issue for the "druid" user. The directories were previously owned by the "hadoop" user. After changing ownership to "druid", the task log files are now being created.
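The ownership change amounted to something like this (a sketch, assuming a recursive chown over the log directory):

```shell
# Recursively change ownership of the indexing-logs directory to the druid user
hdfs dfs -chown -R druid:druid /druid/indexing-logs
```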

The tasks are continuing to fail, though, with the error below:

"Error in custom provider, com.amazonaws.AmazonClientException: Unable to load AWS credentials from any provider in the chain"

Per the post https://groups.google.com/forum/#!msg/druid-user/SutYkCkLTWA/5mZ8fDR7CgAJ, I removed the extra extensions from common.runtime.properties. It is currently set to:

druid.extensions.loadList=["postgresql-metadata-storage", "druid-hdfs-storage"]

I copied common.runtime.properties to all 3 nodes of the cluster. However, the task still appeared to pick up the old version of the file, with the old list of extensions being loaded.

The services needed to be restarted for the updated common.runtime.properties to be picked up. After the restart, I moved past the AWS credentials error and into a different error related to memory.

The issue this post refers to is resolved.