Druid ingestion SUCCESS but exception at the end

Hi guys,
I'm running an ingestion job with Druid + Hadoop 2.7.1 (the custom version is declared in the common config: druid.extensions.hadoopDependenciesDir=/druid/hadoop-dependencies).
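
For context, this is roughly how the custom Hadoop version is wired in: the common runtime property points at the directory holding the hadoop-client jars (e.g. populated with pull-deps), and the Hadoop index task selects the matching coordinates. A minimal sketch, with the paths and the coordinate line being illustrative rather than an exact copy of my setup:

# common.runtime.properties
druid.extensions.hadoopDependenciesDir=/druid/hadoop-dependencies

# Hadoop index task (fragment)
{
  "type": "index_hadoop",
  "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.1"],
  "spec": { ... }
}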

The job finishes successfully, but at the very end I get the following exception:


2017-01-13T09:44:48,064 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_impression_segments5_2017-01-13T09:42:05.410Z",
  "status" : "SUCCESS",
  "duration" : 159082
}


2017-01-13 09:44:48,136 Thread-2 ERROR Unable to register shutdown hook because JVM is shutting down. java.lang.IllegalStateException: Not started
	at io.druid.common.config.Log4jShutdown.addShutdownCallback(Log4jShutdown.java:45)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.addShutdownCallback(Log4jContextFactory.java:273)
	at org.apache.logging.log4j.core.LoggerContext.setUpShutdownHook(LoggerContext.java:256)
	at org.apache.logging.log4j.core.LoggerContext.start(LoggerContext.java:216)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:145)
	at org.apache.logging.log4j.core.impl.Log4jContextFactory.getContext(Log4jContextFactory.java:41)
	at org.apache.logging.log4j.LogManager.getContext(LogManager.java:182)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getContext(AbstractLoggerAdapter.java:103)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getContext(Log4jLoggerFactory.java:43)
	at org.apache.logging.log4j.spi.AbstractLoggerAdapter.getLogger(AbstractLoggerAdapter.java:42)
	at org.apache.logging.slf4j.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:29)
	at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:284)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155)
	at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:132)
	at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:273)
	at org.apache.hadoop.hdfs.LeaseRenewer.<clinit>(LeaseRenewer.java:72)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:699)
	at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:859)
	at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:853)
	at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2407)
	at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2424)
	at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

Does anybody have an idea of what the reason could be?

I’m not sure which files to attach to help with the context. The exception is so generic that I don’t know whether it’s related to any of the config files.

This looks like the thing that we fixed in https://github.com/druid-io/druid/pull/1387. If you haven’t upgraded past 0.8.3 yet, then it’s probably the same thing. The good news is that the log message is annoying but harmless, and should go away when you upgrade.
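
For anyone wondering why this only appears at the very end: the stack trace shows Hadoop's FileSystem shutdown hook closing the DFS client, which lazily class-loads LeaseRenewer and initialises logging, and log4j then tries to register its own shutdown callback after Druid's Log4jShutdown has already stopped. A minimal Java sketch of the same class of failure (not Druid's code, and the class name is just for illustration; the plain-JVM equivalent throws "Shutdown in progress" rather than "Not started"):

public class ShutdownHookDuringShutdown {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                // Shutdown is already in progress when this hook runs,
                // so registering a further hook is rejected.
                Runtime.getRuntime().addShutdownHook(new Thread(() -> {}));
            } catch (IllegalStateException e) {
                System.err.println("Too late to register a hook: " + e.getMessage());
            }
        }));
        System.out.println("Exiting; the hook above will run during JVM shutdown");
    }
}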

We are using 0.9.1.1.

In that case it’s probably something similar, but different. Could you please open a GitHub issue with the stack trace, a description of where you saw the error, and which extensions & deep storage you have loaded?

Done

I use Druid 0.10.0, but it has the same problem.

On Friday, January 13, 2017 at 10:40:27 PM UTC+8, Vadim Vararu wrote: