c.m.tranquility.beam.ClusteredBeam - Turns out we decided not to actually make beams for identifier

Hello,

this is my first post, about a problem with my Druid cluster.

After I added two nodes (broker + middleManager + historical), I found that when I restart my Tranquility process, it does not load the JSON config and start receiving data through the HTTP server right away. My JSON config sets segmentGranularity to "hour", and the tasks only start automatically once the clock reaches the next whole hour. I have no idea what causes this.
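For context, my config follows the standard Tranquility server layout, roughly like the sketch below (modeled on the sample server.json from the Tranquility distribution; the parser and dimension sections are omitted, and windowPeriod and the other values are placeholders rather than my real settings):

```
{
  "dataSources" : [
    {
      "spec" : {
        "dataSchema" : {
          "dataSource" : "yyb_dmq_install",
          "granularitySpec" : {
            "type" : "uniform",
            "segmentGranularity" : "hour",
            "queryGranularity" : "none"
          }
        },
        "tuningConfig" : {
          "type" : "realtime",
          "windowPeriod" : "PT10M",
          "intermediatePersistPeriod" : "PT10M",
          "maxRowsInMemory" : "100000"
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1"
      }
    }
  ],
  "properties" : {
    "zookeeper.connect" : "zk-host:2181",
    "druid.selectors.indexing.serviceName" : "druid:overlord",
    "http.port" : "8200"
  }
}
```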

This is the Tranquility log:

```
May 03, 2018 5:17:20 PM com.twitter.finagle.loadbalancer.LoadBalancerFactory$StackModule$$anonfun$5 apply
INFO: druidTask!druid:overlord!index_realtime_lbs_dmq_test_2018-05-03T08:00:00.000Z_0_0: name resolution is negative (local dtab: Dtab())
2018-05-03 17:48:58,795 [ClusteredBeam-ZkFuturePool-2e656dd4-7e36-4a23-8c21-1874f766ea0f] INFO c.m.tranquility.beam.ClusteredBeam - Global latestCloseTime[2018-05-03T00:00:00.000Z] for identifier[druid:overlord/yyb_dmq_install] has moved past timestamp[2018-05-03T00:00:00.000Z], not creating merged beam
2018-05-03 17:48:58,796 [ClusteredBeam-ZkFuturePool-2e656dd4-7e36-4a23-8c21-1874f766ea0f] INFO c.m.tranquility.beam.ClusteredBeam - Turns out we decided not to actually make beams for identifier[druid:overlord/yyb_dmq_install] timestamp[2018-05-03T00:00:00.000Z]. Returning None.
2018-05-03 17:48:58,799 [ClusteredBeam-ZkFuturePool-2e656dd4-7e36-4a23-8c21-1874f766ea0f] INFO c.m.tranquility.beam.ClusteredBeam - Global latestCloseTime[2018-05-03T00:00:00.000Z] for identifier[druid:overlord/yyb_dmq_install] has moved past timestamp[2018-05-03T00:00:00.000Z], not creating merged beam
2018-05-03 17:48:58,799 [ClusteredBeam-ZkFuturePool-2e656dd4-7e36-4a23-8c21-1874f766ea0f] INFO c.m.tranquility.beam.ClusteredBeam - Turns out we decided not to actually make beams for identifier[druid:overlord/yyb_dmq_install] timestamp[2018-05-03T00:00:00.000Z]. Returning None.
2018-05-03 17:48:58,802 [ClusteredBeam-ZkFuturePool-2e656dd4-7e36-4a23-8c21-1874f766ea0f] INFO c.m.tranquility.beam.ClusteredBeam - Global latestCloseTime[2018-05-03T00:00:00.000Z] for identifier[druid:overlord/yyb_dmq_install] has moved past timestamp[2018-05-03T00:00:00.000Z], not creating merged beam
2018-05-03 17:48:58,802 [ClusteredBeam-ZkFuturePool-2e656dd4-7e36-4a23-8c21-1874f766ea0f] INFO c.m.tranquility.beam.ClusteredBeam - Turns out we decided not to actually make beams for identifier[druid:overlord/yyb_dmq_install] timestamp[2018-05-03T00:00:00.000Z]. Returning None.
2018-05-03 17:57:56,469 [qtp1363804914-243] WARN org.eclipse.jetty.http.HttpParser - badMessage: 400 for HttpChannelOverHttp@353e1e45{r=0,c=false,a=IDLE,uri=-}
2018-05-03 18:00:00,000 [ClusteredBeam-ZkFuturePool-cf7c9475-c2b0-4bc4-aae8-4e96b1d8637e] INFO c.m.tranquility.beam.ClusteredBeam - Creating new merged beam for identifier[druid:overlord/lbs_dmq_test] timestamp[2018-05-03T10:00:00.000Z] (target = 1, actual = 0)
2018-05-03 18:00:00,002 [ClusteredBeam-ZkFuturePool-cf7c9475-c2b0-4bc4-aae8-4e96b1d8637e] INFO com.metamx.common.scala.control$ - Creating druid indexing task (service = druid:overlord): {
  "type" : "index_realtime",
  "id" : "index_realtime_lbs_dmq_test_2018-05-03T10:00:00.000Z_0_0",
  "resource" : {
    "availabilityGroup" : "lbs_dmq_test-2018-05-03T10:00:00.000Z-0000",
    "requiredCapacity" : 1
  },
```
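From what I could find, Tranquility keeps per-dataSource beam state in ZooKeeper (under its beam base path, by default something like /tranquility/beams if I read the docs right), and the "not creating merged beam" lines mean incoming events carry timestamps at or before the stored latestCloseTime, so they are dropped rather than spawning a new task. If I understand the ClusteredBeam source correctly, the stored metadata is JSON roughly like this (the key names are my assumption from reading the source; the values only illustrate my situation):

```
{
  "latestTime" : "2018-05-03T00:00:00.000Z",
  "latestCloseTime" : "2018-05-03T00:00:00.000Z",
  "beams" : {
    "2018-05-03T10:00:00.000Z" : [
      { "...task descriptor, elided..." : "..." }
    ]
  }
}
```

That would explain why nothing happens until the next whole hour: only events whose hour is strictly newer than latestCloseTime create a new beam, which matches the "Creating new merged beam ... timestamp[2018-05-03T10:00:00.000Z]" line at 18:00. Is that expected behavior after a restart, or is my beam state stale?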

I think this problem is related to the two new nodes, but I have no evidence. On both new historical nodes the logs show this error:

```
2018-05-03T20:44:25,161 INFO [ZkCoordinator] io.druid.server.coordination.SegmentLoadDropHandler - Loading segment yyb_dmq_install_2018-04-24T14:00:00.000Z_2018-04-24T15:00:00.000Z_2018-04-24T14:00:19.586Z
2018-05-03T20:44:25,163 ERROR [ZkCoordinator] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Failed to load segment in current location /usr/local/app/imply/var/druid/segment-cache, try next location if any: {class=io.druid.segment.loading.SegmentLoaderLocalCacheManager, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Error loading [hdfs://hadoop-slave03:9820/druid/segments/yyb_dmq_install/20180424T140000.000Z_20180424T150000.000Z/2018-04-24T14_00_19.586Z/0_index.zip], location=/usr/local/app/imply/var/druid/segment-cache}
io.druid.segment.loading.SegmentLoadingException: Error loading [hdfs://hadoop-slave03:9820/druid/segments/yyb_dmq_install/20180424T140000.000Z_20180424T150000.000Z/2018-04-24T14_00_19.586Z/0_index.zip]
    at io.druid.storage.hdfs.HdfsDataSegmentPuller.getSegmentFiles(HdfsDataSegmentPuller.java:281) ~[?:?]
    at io.druid.storage.hdfs.HdfsLoadSpec.loadSegment(HdfsLoadSpec.java:62) ~[?:?]
    at io.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocation(SegmentLoaderLocalCacheManager.java:205) ~[druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:193) ~[druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.segment.loading.SegmentLoaderLocalCacheManager.loadSegmentWithRetry(SegmentLoaderLocalCacheManager.java:151) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:133) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:108) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.server.SegmentManager.getAdapter(SegmentManager.java:196) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.server.SegmentManager.loadSegment(SegmentManager.java:157) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.server.coordination.SegmentLoadDropHandler.loadSegment(SegmentLoadDropHandler.java:257) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.server.coordination.SegmentLoadDropHandler.addSegment(SegmentLoadDropHandler.java:303) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:47) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at io.druid.server.coordination.ZkCoordinator$1.childEvent(ZkCoordinator.java:118) [druid-server-0.12.0-iap3.jar:0.12.0-iap3]
    at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:520) [curator-recipes-4.0.0.jar:4.0.0]
    at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:514) [curator-recipes-4.0.0.jar:4.0.0]
    at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:93) [curator-framework-4.0.0.jar:4.0.0]
    at org.apache.curator.shaded.com.google.common.util.concurrent.MoreExecutors$DirectExecutorService.execute(MoreExecutors.java:296) [curator-client-4.0.0.jar:?]
    at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:85) [curator-framework-4.0.0.jar:4.0.0]
    at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:512) [curator-recipes-4.0.0.jar:4.0.0]
    at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) [curator-recipes-4.0.0.jar:4.0.0]
    at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:771) [curator-recipes-4.0.0.jar:4.0.0]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_161]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_161]
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_161]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_161]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_161]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_161]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
Caused by: org.apache.hadoop.ipc.RemoteException: Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1962)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1411)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3040)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1145)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:940)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
    at org.apache.hadoop.ipc.Client.call(Client.java:1475) ~[?:?]
    at org.apache.hadoop.ipc.Client.call(Client.java:1412) ~[?:?]
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[?:?]
    at com.sun.proxy.$Proxy62.getFileInfo(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) ~[?:?]
    at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_161]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_161]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191) ~[?:?]
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]
    at com.sun.proxy.$Proxy63.getFileInfo(Unknown Source) ~[?:?]
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2108) ~[?:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[?:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[?:?]
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[?:?]
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[?:?]
    at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:1439) ~[?:?]
    at io.druid.storage.hdfs.HdfsDataSegmentPuller.getSegmentFiles(HdfsDataSegmentPuller.java:180) ~[?:?]
    … 27 more
2018-05-03T20:44:25,170 INFO [ZkCoordinator] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[var/druid/segment-cache/yyb_dmq_install/2018-04-24T14:00:00.000Z_2018-04-24T15:00:00.000Z/2018-04-24T14:00:19.586Z/0]

```
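If I read this right, the loadSpec stored in the segment metadata points at one concrete NameNode, and the "Operation category READ is not supported in state standby" error means hadoop-slave03 is currently the standby NameNode of an HA pair, so reads through it fail. I assume the stored loadSpec looks roughly like this (a sketch; only the "path" value is taken from the log above):

```
{
  "loadSpec" : {
    "type" : "hdfs",
    "path" : "hdfs://hadoop-slave03:9820/druid/segments/yyb_dmq_install/20180424T140000.000Z_20180424T150000.000Z/2018-04-24T14_00_19.586Z/0_index.zip"
  }
}
```

If that is the cause, would pointing druid.storage.storageDirectory at the HA nameservice (hdfs://<nameservice>/druid/segments) instead of a single NameNode host fix this for new segments? And is there a recommended way to repair the paths of segments that were already written?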

Please help me solve this. Thanks in advance!