Deep storage HDFS change problem

Hi druid users,

I have a problem changing deep storage (moving to a new HDFS cluster).

Test environment:

  • CentOS 6.6

  • Druid v0.9.0

  • Druid cluster nodes

      • server#1: overlord, coordinator

      • server#2: broker

      • server#3: middleManager

      • server#4: historical

      • server#5: MySQL (metadata storage)

  • HDFS clusters

Test sequence:

  • Copy all Druid segment data from the current HDFS to the new HDFS.

  • Change all configurations related to HDFS (Druid and others).

  • Restart all Druid nodes.

  • Check the status of all Druid nodes.

  • Shut down the old HDFS.

  • Add a new historical node with the new HDFS configuration.

  • Check segment rebalancing progress.

I expected that the coordinator would rebalance segments using the information in the new HDFS, and that after segments were loaded to the new location, it would update the metadata in the druid_segments table with the new segment information.

However, it does not use the new HDFS information; it uses the existing data in the druid_segments table's payload column for rebalancing.

As a result, segment rebalancing to the new historical node failed (the new historical node failed to load segments).

[new historical node error log sample]

In the log, 'hdfs://' is the old HDFS path.

2017-06-26T05:34:20,323 ERROR [ZkCoordinator-0] io.druid.server.coordination.ZkCoordinator - Failed to load segment for dataSource: {class=io.druid.server.coordination.ZkCoordinator, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Exception loading segment[syslog_2017-06-16T06:00:00.000Z_2017-06-16T07:00:00.000Z_2017-06-16T05:58:18.484Z_3], segment=DataSegment{size=345374710, shardSpec=LinearShardSpec{partitionNum=3}, metrics=[count], dimensions=[action, action_csct, action_rate, cause, cause_csct, conn_seq, dvc_type, event_code, ip, mac, model_name, scn], version='2017-06-16T05:58:18.484Z', loadSpec={type=hdfs, path=hdfs://}, interval=2017-06-16T06:00:00.000Z/2017-06-16T07:00:00.000Z, dataSource='syslog', binaryVersion='9'}}

io.druid.segment.loading.SegmentLoadingException: Exception loading segment[syslog_2017-06-16T06:00:00.000Z_2017-06-16T07:00:00.000Z_2017-06-16T05:58:18.484Z_3]

at io.druid.server.coordination.ZkCoordinator.loadSegment( ~[druid-server-0.9.0.jar:0.9.0]

at io.druid.server.coordination.ZkCoordinator.addSegment( [druid-server-0.9.0.jar:0.9.0]

at io.druid.server.coordination.SegmentChangeRequestLoad.go( [druid-server-0.9.0.jar:0.9.0]

at io.druid.server.coordination.ZkCoordinator$1.childEvent( [druid-server-0.9.0.jar:0.9.0]

at$5.apply( [curator-recipes-2.9.1.jar:?]

at$5.apply( [curator-recipes-2.9.1.jar:?]

at org.apache.curator.framework.listen.ListenerContainer$ [curator-framework-2.9.1.jar:?]

at$SameThreadExecutorService.execute( [guava-16.0.1.jar:?]

at org.apache.curator.framework.listen.ListenerContainer.forEach( [curator-framework-2.9.1.jar:?]

at [curator-recipes-2.9.1.jar:?]

at [curator-recipes-2.9.1.jar:?]

at$ [curator-recipes-2.9.1.jar:?]

at java.util.concurrent.Executors$ [?:1.7.0_141]

at [?:1.7.0_141]

at java.util.concurrent.Executors$ [?:1.7.0_141]

at [?:1.7.0_141]

at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.7.0_141]

at java.util.concurrent.ThreadPoolExecutor$ [?:1.7.0_141]

at [?:1.7.0_141]

Caused by: io.druid.segment.loading.SegmentLoadingException: Error loading [hdfs://]

at ~[?:?]

at ~[?:?]

at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles( ~[druid-server-0.9.0.jar:0.3.16]

at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment( ~[druid-server-0.9.0.jar:0.3.16]

at io.druid.server.coordination.ServerManager.loadSegment( ~[druid-server-0.9.0.jar:0.9.0]

at io.druid.server.coordination.ZkCoordinator.loadSegment( ~[druid-server-0.9.0.jar:0.9.0]

… 18 more

Caused by: Call From to failed on connection exception: Connection refused; For more details see:

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.7.0_141]

at sun.reflect.NativeConstructorAccessorImpl.newInstance( ~[?:1.7.0_141]

at sun.reflect.DelegatingConstructorAccessorImpl.newInstance( ~[?:1.7.0_141]


How can I change deep storage (HDFS) without segment rebalancing failing?

Is there a recommended way to change HDFS?

I'd appreciate any advice.

Thank you.

The coordinator does not automatically update the segment metadata.

If you are moving segments to a new HDFS location, you will also need to manually update the payload in the segment metadata entries in the druid_segments table.

You can do this by executing a SQL command, or you can try using the Druid insert-segment tool to create new segment metadata entries in a new table. Remember to take a backup of the metadata storage before making any changes, for added safety.
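The SQL route above can be sketched roughly as follows. This is only an illustration: the namenode prefixes, segment path, and the `rewrite_load_spec` helper are all hypothetical, and depending on your setup the payload may be stored as bytes rather than text, so adapt accordingly (and back up the metadata store first, as noted above):

```python
import json

def rewrite_load_spec(payload_json, old_prefix, new_prefix):
    """Rewrite the loadSpec path inside a druid_segments payload.

    Only the URI prefix (scheme + namenode address) is swapped; the rest
    of the segment path is preserved unchanged.
    """
    descriptor = json.loads(payload_json)
    path = descriptor["loadSpec"]["path"]
    if path.startswith(old_prefix):
        descriptor["loadSpec"]["path"] = new_prefix + path[len(old_prefix):]
    return json.dumps(descriptor)

# Usage sketch: SELECT each row's payload, rewrite it, then write it back
# with something like:
#   UPDATE druid_segments SET payload = %s WHERE id = %s
old = json.dumps({"loadSpec": {"type": "hdfs",
                               "path": "hdfs://old-nn:8020/druid/segments/syslog/index.zip"}})
new = rewrite_load_spec(old, "hdfs://old-nn:8020", "hdfs://new-nn:8020")
print(json.loads(new)["loadSpec"]["path"])
```

After updating every row (or building a new table with the insert-segment tool), restarting the coordinator should make it hand out the new locations.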