Hadoop-based Batch Ingestion Merge Segments fail

Hi,

My Hadoop-based batch ingestion task fails at the merge-segments step. The forked peon process exits with status 0, but the task status changes to FAILED. The relevant middle-manager log is below:

2017-12-12 11:49:21 [forking-task-runner-13] INFO TaskRunnerUtils:70 - Task [index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00] location changed to [TaskLocation{host='index112.antfact.com', port=8103}].

2017-12-12 11:49:21 [WorkerTaskMonitor] INFO WorkerTaskMonitor:70 - Updating task [index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00] announcement with location [TaskLocation{host='index112.antfact.com', port=8103}]

2017-12-12 11:49:21 [forking-task-runner-13] INFO TaskRunnerUtils:70 - Task [index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00] status changed to [RUNNING].

2017-12-12 11:49:21 [forking-task-runner-13] INFO ForkingTaskRunner:70 - Logging task index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00 output to: var/druid/task/index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00/log

2017-12-12 11:49:32 [forking-task-runner-13-[index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00]] INFO ForkingTaskRunner:70 - Process exited with status[0] for task: index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00

2017-12-12 11:49:32 [forking-task-runner-13] INFO HdfsTaskLogs:72 - Writing task log to: hdfs://hstore/druid/indexing-logs/index_hadoop_user_online_test_2017-12-12T11_49_21.850+08_00

2017-12-12 11:49:32 [forking-task-runner-13] INFO HdfsTaskLogs:72 - Wrote task log to: hdfs://hstore/druid/indexing-logs/index_hadoop_user_online_test_2017-12-12T11_49_21.850+08_00

2017-12-12 11:49:32 [forking-task-runner-13] INFO TaskRunnerUtils:70 - Task [index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00] status changed to [FAILED].

2017-12-12 11:49:32 [forking-task-runner-13] INFO ForkingTaskRunner:70 - Removing task directory: var/druid/task/index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00

2017-12-12 11:49:32 [WorkerTaskMonitor] INFO WorkerTaskMonitor:70 - Job's finished. Completed [index_hadoop_user_online_test_2017-12-12T11:49:21.850+08:00] with status [FAILED]

2017-12-12 11:49:32,175 Thread-2 ERROR Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Not started
	at io.druid.common.config.Log4jShutdown.addShutdownCallback(Log4jShutdown.java:45)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:273)
	at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:145)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:182)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:253)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
	at org.apache.hadoop.hdfs.LeaseRenewer.<clinit>(LeaseRenewer.java:72)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:699)
	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:859)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:853)
	at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2407)
	at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2424)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)