Druid upgrade from 0.20.2 to 0.21.0 breaks writes to HDFS

Hello, I’m new to Druid and ran into a problem: new segments and task logs can no longer be written to HDFS storage after I upgraded Druid from version 0.20.2 to version >=0.21.0.
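
For context, the relevant configuration is roughly the following (the indexing-logs path matches the exception below; the segment path and everything else are just the standard druid-hdfs-storage properties, so treat this as a sketch of a typical setup rather than my exact file):

druid.extensions.loadList=["druid-hdfs-storage"]

# Segments are pushed to HDFS deep storage
druid.storage.type=hdfs
druid.storage.storageDirectory=/apps/druid/segments

# Task logs are pushed to HDFS as well
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/apps/druid/indexing-logs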

The following exception appears in the MiddleManager Kubernetes pods (example from 0.22.0):

2021-12-09T11:17:56,762 INFO [forking-task-runner-3] org.apache.druid.storage.hdfs.tasklog.HdfsTaskLogs - Writing task log to: /apps/druid/indexing-logs/single_phase_sub_task_indicators_penpgaom_2021-12-09T11_13_38.031Z
2021-12-09T11:17:56,763 INFO [forking-task-runner-3] org.apache.druid.indexing.overlord.ForkingTaskRunner - Exception caught during execution
java.io.IOException: Mkdirs failed to create /apps/druid/indexing-logs (exists=false, cwd=file:/opt/druid)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:455) ~[?:?]
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:441) ~[?:?]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:929) ~[?:?]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:910) ~[?:?]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:807) ~[?:?]
        at org.apache.druid.storage.hdfs.tasklog.HdfsTaskLogs.pushTaskFile(HdfsTaskLogs.java:84) ~[?:?]
        at org.apache.druid.storage.hdfs.tasklog.HdfsTaskLogs.pushTaskLog(HdfsTaskLogs.java:66) ~[?:?]
        at org.apache.druid.indexing.overlord.ForkingTaskRunner.waitForTaskProcessToComplete(ForkingTaskRunner.java:473) ~[druid-indexing-service-0.22.0.jar:0.22.0]
        at org.apache.druid.indexing.overlord.ForkingTaskRunner$1.call(ForkingTaskRunner.java:365) [druid-indexing-service-0.22.0.jar:0.22.0]
        at org.apache.druid.indexing.overlord.ForkingTaskRunner$1.call(ForkingTaskRunner.java:138) [druid-indexing-service-0.22.0.jar:0.22.0]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_275]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_275]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_275]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_275] 

When I roll back to version 0.20.2, the MiddleManager works fine.

What am I doing wrong? Thanks for any advice.

Could this be a permissions issue somewhere? E.g. the user running the MM process?
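
To rule that out, something along these lines would show who owns the target directory on HDFS and which user the MiddleManager actually runs as (the pod name is a placeholder, and this assumes an hdfs client is available):

# owner and permissions of the task-log directory on HDFS
hdfs dfs -ls -d /apps/druid/indexing-logs

# the user the MiddleManager container runs as
kubectl exec -it <middlemanager-pod> -- id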

It seems so, but why would this issue depend on the Druid version? :slight_smile:
And I found nothing about permissions in the release notes (Releases · apache/druid · GitHub) :frowning:
I tried granting full access rights on the mentioned HDFS directory to the druid user, but it doesn’t help.
Could you please explain how to debug this properly?
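
One detail I noticed: the stack trace goes through ChecksumFileSystem and the message says cwd=file:/opt/druid, so the schemeless path /apps/druid/indexing-logs seems to be resolved against the local filesystem instead of HDFS. Would something like this be the right way to check whether the Hadoop client config (core-site.xml with fs.defaultFS) is actually visible inside the pod? (The pod name and the conf directory are placeholders based on the standard distribution layout.)

# open a shell in the MiddleManager pod
kubectl exec -it <middlemanager-pod> -- sh

# is the Hadoop config where Druid expects it, and does it point at HDFS?
ls /opt/druid/conf/druid/cluster/_common/
grep -A1 'fs.defaultFS' /opt/druid/conf/druid/cluster/_common/core-site.xml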

The only thing I can think of is that it might be something about the pods … I’m afraid I’m just thinking out loud :smiley: