Druid ingestion ends with SUCCESS status but exception is thrown at the end

I am getting this exact issue above.

Ingesting from Kafka into Druid, storing segments in HDFS deep storage.

Druid 0.12.3 and a Hadoop cluster built with HDP.

Any reason why everything looks good and the tasks report success, but the Historicals show nothing and no segments appear in deep storage?

What is really odd about my Druid cluster is that I am able to successfully ingest data from Kafka into Druid and store it in Azure Blob storage.

I am also able to successfully ingest data into Druid from Tranquility and store it in either Azure Blob storage or HDFS.

But when I go from Kafka into Druid and attempt to store in HDFS, the task completes with a status of SUCCESS, yet no segment cache is stored on the Historical and, obviously, no data is stored in HDFS.

The only thing I am changing in my Druid configuration is swapping the Druid HDFS and Azure storage extensions.
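For reference, that swap amounts to something like the following in common.runtime.properties. This is a sketch from memory for Druid 0.12.x: the extension names, NameNode host/port, and storage directory are placeholders you would adjust for your cluster.

```
# HDFS deep storage (druid-hdfs-storage extension loaded)
druid.extensions.loadList=["druid-kafka-indexing-service","druid-hdfs-storage"]
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://namenode:8020/druid/segments

# Azure deep storage (load druid-azure-extensions instead)
# druid.extensions.loadList=["druid-kafka-indexing-service","druid-azure-extensions"]
# druid.storage.type=azure
```

If the task succeeds but nothing reaches deep storage, it is worth confirming that the nodes running the indexing tasks (MiddleManagers/peons) see the same properties, not just the Overlord and Historicals.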

Hi Chris:

Can you make sure Druid has proper permission to access HDFS?

At the end of ingestion, if there are no segments on the Historical, do you at least see them stored in HDFS?

Do you have proper core-site.xml and hdfs-site.xml copied to Druid’s conf/druid/_common path?
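The copy step is just something like the following. The real paths are assumptions: on HDP the Hadoop client configs typically live under /etc/hadoop/conf, and the Druid side is <druid-install>/conf/druid/_common. The sketch below uses temp directories as stand-ins so it can run anywhere.

```shell
# Stand-in dirs so this sketch runs anywhere; in a real install,
# HADOOP_CONF would be e.g. /etc/hadoop/conf (typical on HDP) and
# DRUID_COMMON would be <druid-install>/conf/druid/_common.
HADOOP_CONF=$(mktemp -d)
DRUID_COMMON=$(mktemp -d)

# Stand in for the real cluster config files.
touch "$HADOOP_CONF/core-site.xml" "$HADOOP_CONF/hdfs-site.xml"

# The actual step: put the Hadoop XMLs where Druid's common config lives.
for f in core-site.xml hdfs-site.xml; do
  cp "$HADOOP_CONF/$f" "$DRUID_COMMON/"
done

ls "$DRUID_COMMON"
```

After copying, restart the Druid services so the classpath picks the files up.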

I have verified that Druid has the proper permissions, since I can ingest from Tranquility and store in HDFS.

It is when ingesting with the Kafka indexing service that I experience the problem.