Broker and historicals are going down when querying with k6 using multiple VUs

We were load-testing queries with multiple k6 virtual users (VUs). With 2 VUs everything went well, but above 2 VUs the historicals (we have 2 historicals) go down with exit code 143 (SIGTERM) and then come back again; only one historical goes down at a time. The historical shows this error:
{"instant":{"epochSecond":1657263315,"nanoOfSecond":734000000},"thread":"ZKCoordinator--0","level":"ERROR","loggerName":"org.apache.druid.server.coordination.SegmentLoadDropHandler","message":"Failed to load segment for dataSource: {class=org.apache.druid.server.coordination.SegmentLoadDropHandler, exceptionType=class org.apache.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[pmdata_2022-07-08T06:00:00.000Z_2022-07-08T07:00:00.000Z_2022-07-08T06:00:02.307Z_28], segment=DataSegment{binaryVersion=9, id=pmdata_2022-07-08T06:00:00.000Z_2022-07-08T07:00:00.000Z_2022-07-08T06:00:02.307Z_28, loadSpec={type=>hdfs, path=>hdfs://apache-hadoop-namenode.nom-apps.svc.cluster.local:8020/druid/segments/pmdata/20220708T060000.000Z_20220708T070000.000Z/2022-07-08T06_00_02.307Z/28_543646dc-b738-48e7-85f2-bf0cb8fa7500_index.zip}

But this is not the segment we are trying to query.

The broker shows no errors in its logs. I have also increased the memory, but the issue still persists.
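For reference, the load we are generating looks roughly like this k6 script (a minimal sketch; the broker URL and query are placeholders rather than our exact setup, with 8082 being the broker's default port):

```javascript
import http from 'k6/http';
import { check } from 'k6';

// Run with more than 2 VUs, the point at which the historicals start dying
export const options = { vus: 4, duration: '60s' };

export default function () {
  // Placeholder broker URL and SQL query -- substitute your own
  const res = http.post(
    'http://broker:8082/druid/v2/sql',
    JSON.stringify({ query: 'SELECT COUNT(*) FROM pmdata' }),
    { headers: { 'Content-Type': 'application/json' } },
  );
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```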

All segments will be loaded onto the historicals; it doesn't matter which one you are querying.
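You can verify that from the coordinator, which reports the percentage of segments loaded per datasource (hostname here is a placeholder; 8081 is the coordinator's default port):

```bash
# 100.0 for a datasource means all of its segments are loaded on historicals
curl http://coordinator:8081/druid/coordinator/v1/loadstatus
```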

This error indicates that the historical cannot load one of the segments it needs as part of its warm-up. So it looks like you still have an issue with the connection to HDFS.
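A quick way to check, assuming the Hadoop client is available from the historical's pod (the path is the one from your error log):

```bash
# If this fails or hangs, the historical can't reach deep storage either
hdfs dfs -ls hdfs://apache-hadoop-namenode.nom-apps.svc.cluster.local:8020/druid/segments/pmdata/
```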

Do you have this same issue on the other historical, i.e. is it also unable to load segments? If not, maybe there is something specific to that one historical?

In both historicals, I can see this "Failed to load segment for dataSource" issue. Here is the output of hdfs dfsadmin -report:

Configured Capacity: 472379670528 (439.94 GB)
Present Capacity: 472140344885 (439.71 GB)
DFS Remaining: 273561526272 (254.77 GB)
DFS Used: 198578818613 (184.94 GB)
DFS Used%: 42.06%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

Can you make sure you have enough memory in the VM? Can you refer to this page and ensure that you set the configs correctly, especially the memory settings?
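As a rough illustration of how the historical's memory settings fit together (the sizes are placeholders, not recommendations, so tune them to the VM; the comments are annotations only, since Druid's jvm.config takes one bare JVM flag per line):

```properties
# jvm.config (historical) -- placeholder sizes
-Xms8g
-Xmx8g
# Direct memory must cover (numThreads + numMergeBuffers + 1) * buffer.sizeBytes
-XX:MaxDirectMemorySize=6g

# runtime.properties (historical)
druid.processing.numThreads=7
druid.processing.numMergeBuffers=2
druid.processing.buffer.sizeBytes=536870912
```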

Oh, another thought comes to mind: have you updated the local segment cache config so Druid knows it has lots of addressable disk space? Particularly druid.segmentCache.locations.
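For example, something like this in the historical's runtime.properties (the path and sizes are placeholders; make sure maxSize actually fits on the volume backing that path):

```properties
# Where segments are cached locally, and how much disk Druid may use there
druid.segmentCache.locations=[{"path":"/var/druid/segment-cache","maxSize":300000000000}]
# Total bytes this historical announces it can serve; keep <= cache capacity
druid.server.maxSize=300000000000
```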