Query broker exception happened before the index_kafka task started (Druid version 0.11.0)

Hi,

I hit an exception in the query broker. The broker log is:

[15:36:17:197] [INFO] - io.druid.java.util.common.logger.Logger.info(Logger.java:72) - Created new InventoryCacheListener for /druid/segments/172.16.8.63:8105

[15:36:17:197] [INFO] - com.metamx.common.logger.Logger.info(Logger.java:69) - New Server[DruidServerMetadata{name='172.16.8.63:8105', hostAndPort='172.16.8.63:8105', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]

[15:36:18:414] [INFO] - io.druid.java.util.common.logger.Logger.info(Logger.java:72) - Created new InventoryCacheListener for /druid/segments/172.16.152.182:8105

[15:36:18:414] [INFO] - com.metamx.common.logger.Logger.info(Logger.java:69) - New Server[DruidServerMetadata{name='172.16.152.182:8105', hostAndPort='172.16.152.182:8105', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]

[15:36:18:648] [INFO] - io.druid.java.util.common.logger.Logger.info(Logger.java:72) - Created new InventoryCacheListener for /druid/segments/172.16.152.181:8105

[15:36:18:648] [INFO] - com.metamx.common.logger.Logger.info(Logger.java:69) - New Server[DruidServerMetadata{name='172.16.152.181:8105', hostAndPort='172.16.152.181:8105', hostAndTlsPort='null', maxSize=0, tier='_default_tier', type=indexer-executor, priority=0}]

[15:36:18:775] [INFO] - com.metamx.common.logger.Logger.info(Logger.java:69) - Generating: http://172.16.152.182:8105

[15:36:18:776] [WARN] - org.jboss.netty.logging.Log4JLogger.warn(Log4JLogger.java:77) - EXCEPTION, please implement org.jboss.netty.handler.codec.http.HttpContentDecompressor.exceptionCaught() for proper handling.

java.net.ConnectException: Connection refused: /172.16.152.182:8105

at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_131]

at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_131]

at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152) ~[netty-3.10.6.Final.jar:?]

at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105) [netty-3.10.6.Final.jar:?]

at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79) [netty-3.10.6.Final.jar:?]

at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [netty-3.10.6.Final.jar:?]

at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42) [netty-3.10.6.Final.jar:?]

at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.10.6.Final.jar:?]

at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.10.6.Final.jar:?]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_131]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_131]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

Then I checked the task log (indexer/v1/task/index_kafka_sfs_weed_volume_075d9d08a5210aa_jdhlabco/log) and found a WARN in it:

[15:36:18:756] [INFO] - com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory.getComponentProvider(GuiceComponentProviderFactory.java:146) - Binding io.druid.query.lookup.LookupListeningResource to GuiceInstantiatedComponentProvider
[15:36:18:758] [INFO] - com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory.getComponentProvider(GuiceComponentProviderFactory.java:146) - Binding io.druid.query.lookup.LookupIntrospectionResource to GuiceInstantiatedComponentProvider
[15:36:18:759] [INFO] - com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory.getComponentProvider(GuiceComponentProviderFactory.java:168) - Binding io.druid.server.StatusResource to GuiceManagedComponentProvider with the scope "Undefined"
[15:36:18:774] [**WARN**] - com.sun.jersey.spi.inject.Errors.processErrorMessages(Errors.java:173) - The following warnings have been detected with resource and/or provider classes:
  WARNING: A HTTP GET method, public void io.druid.server.http.SegmentListerResource.getSegments(long,long,long,javax.servlet.http.HttpServletRequest) throws java.io.IOException, MUST return a non-void type.
[15:36:18:783] [INFO] - org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:781) - Started o.e.j.s.ServletContextHandler@4a2e1e52{/,null,AVAILABLE}
[15:36:18:806] [INFO] - org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:278) - Started ServerConnector@1182d1df{HTTP/1.1,[http/1.1]}{0.0.0.0:8105}
[15:36:18:807] [INFO] - org.eclipse.jetty.server.Server.doStart(Server.java:414) - Started @5350ms
[15:36:18:808] [INFO] - io.druid.java.util.common.logger.Logger.info(Logger.java:72) - Invoking start method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.start()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@4b544732].
[15:36:18:819] [INFO] - io.druid.java.util.common.logger.Logger.info(Logger.java:72) - Announcing start time on [/druid/listeners/lookups/__default/http:172.16.152.182:8105]
[15:46:18:660] [INFO] - com.metamx.common.logger.Logger.info(Logger.java:69) - Submitting persist runnable for dataSource[sfs_weed_volume]
[15:46:18:663] [INFO] - com.metamx.common.logger.Logger.info(Logger.java:69) - Segment[sfs_weed_volume_2018-04-16T07:00:00.000Z_2018-04-16T08:00:00.000Z_2018-04-16T07:00:17.327Z_1], persisting


**We can see that the broker's query went out at [15:36:18:776], while the task's HTTP server only started at [15:36:18:806]. I think this is the reason for the exception.**
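For what it's worth, the timing looks like a plain TCP race, independent of any Druid internals. Below is a minimal, self-contained sketch (not Druid code; the localhost address and port 18105 are made up, standing in for the peon's 8105) that reproduces the same `java.net.ConnectException: Connection refused` when a client connects before the server has bound its port, and succeeds once a listener is up:

```java
import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Illustration of the suspected race: connecting before anything is
// listening on the port yields "Connection refused", just like the
// broker hitting the peon's port before Jetty has started.
public class ConnectBeforeListenDemo {
    private static final int PORT = 18105; // hypothetical port, stands in for the task's 8105

    public static void main(String[] args) throws Exception {
        // 1) No server is listening yet -> the connect attempt is refused.
        try (Socket early = new Socket()) {
            early.connect(new InetSocketAddress("127.0.0.1", PORT), 1000);
        } catch (ConnectException e) {
            System.out.println("Before the server starts: " + e.getMessage());
        }

        // 2) Start a listener (stands in for "Started ServerConnector ... {0.0.0.0:8105}").
        try (ServerSocket server = new ServerSocket(PORT)) {
            // 3) The same connect attempt now succeeds.
            try (Socket late = new Socket()) {
                late.connect(new InetSocketAddress("127.0.0.1", PORT), 1000);
                System.out.println("After the server starts: connected = " + late.isConnected());
            }
        }
    }
}
```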
**Am I right? Is this a bug? Can anyone tell me why this exception happened?**
**Thanks.**