Data ingestion task: Hadoop job runs locally instead of on the remote Hadoop EMR cluster

I have set up a multi-node Druid cluster with:

  1. 1 node running both coordinator and overlord (m4.xl)

  2. 2 nodes, each running both a historical and a middle manager (r3.2xl)

  3. 1 node running the broker (r3.2xl)

I also have an EMR cluster running that I want to use for the ingestion tasks. The problem is that whenever I submit a job via curl, it always starts as a local Hadoop job on both middle managers instead of being submitted to the remote EMR cluster. My data lies in S3, and S3 is configured for deep storage as well.
I have also copied all the jars from the EMR master to hadoop-dependencies/hadoop-client/2.7.3/.
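For reference, I submit the task to the overlord's indexer endpoint roughly like this (the hostname is a placeholder for my coordinator/overlord node; the spec is the attached indexing_job.json):

```shell
# Sketch of the submission command; overlord-host stands in for the
# actual coordinator/overlord node in my cluster.
curl -X POST -H 'Content-Type: application/json' \
  -d @indexing_job.json \
  http://overlord-host:8090/druid/indexer/v1/task
```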
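To pick up those jars, the task spec references the matching Hadoop dependency coordinate. A rough sketch of the relevant fragment of the attached indexing_job.json (cluster-specific values omitted):

```json
{
  "type": "index_hadoop",
  "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.3"],
  "spec": {
    "tuningConfig": {
      "type": "hadoop"
    }
  }
}
```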

Druid version: 0.9.2

EMR version: 5.2

Please find attached the indexing job, the common runtime properties, and the middle manager runtime properties.

Q1) How do I get the job to be submitted to the remote EMR cluster?

Q2) Logs for the indexing task are not showing up on overlord:8090; how do I enable them?

Attachments: indexing_job.json (3.36 KB), plus two more files (3.7 KB, 752 Bytes)