Where is my data part 2

Hello,

For a stack that loves to throw errors all over the place, I get zero insight into why I can't ingest data. Maybe it's the timestamp, but as far as I can tell I'm using the correct format:

I use UTC and servers are in sync with NTP.

{'timestamp': '2019-03-25T00:00:55.077985', 'impressions': 1, 'cpi': 0.1, 'dim1': '1', 'dim2': '2', 'dim3': '3'}

{"result":{"received":1,"sent":0}}

But why? In my Python code I use datetime.datetime.utcnow().isoformat().

This is how I ingest:

import requests
import json
import datetime
from pytz import timezone  # imported but not used below

tranquility_host = "http://test:8200/v1/post/foo"
headers = {'Content-Type': 'application/json'}

for i in range(10):
    # current UTC time as a naive ISO-8601 string, e.g. 2019-03-25T00:00:55.077985
    utc_dt = datetime.datetime.utcnow().isoformat()
    data = {"timestamp": utc_dt, "impressions": 1, "cpi": 0.1, "dim1": "1", "dim2": "2", "dim3": "3"}
    print(data)
    r = requests.post(tranquility_host, headers=headers, data=json.dumps(data))
    print(r.text)

{"result":{"received":1,"sent":0}}
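For what it's worth, I can also produce an explicitly timezone-aware timestamp (same instant, but ending in +00:00 instead of being naive) with the snippet below; I have not verified whether it makes any difference to the "auto" timestampSpec:

import datetime

# Timezone-aware UTC timestamp, e.g. 2019-03-25T00:00:55.077985+00:00
aware_dt = datetime.datetime.now(datetime.timezone.utc).isoformat()
print(aware_dt)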

Overlord conf:

druid.zk.service.host=127.0.0.1
druid.service=druid/overlord
druid.plaintextPort=8090
druid.host=<%=@ipaddress%>:8090
druid.indexer.queue.startDelay=PT5S
druid.indexer.runner.type=remote
druid.indexer.storage.type=metadata
druid.extensions.loadList=["mysql-metadata-storage"]
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://<%=@mysql_host%>/druid
druid.metadata.storage.connector.user=<%=@mysql_username%>
druid.metadata.storage.connector.password=<%=@mysql_password%>

Tranquility conf:

{
  "dataSources" : [
    {
      "spec" : {
        "dataSchema" : {
          "dataSource" : "foo",
          "metricsSpec" : [
            { "type" : "count", "name" : "impressions" },
            { "type" : "doubleSum", "fieldName" : "cpi", "name" : "cpi" }
          ],
          "granularitySpec" : {
            "segmentGranularity" : "hour",
            "queryGranularity" : "none",
            "type" : "uniform"
          },
          "parser" : {
            "type" : "string",
            "parseSpec" : {
              "format" : "json",
              "timestampSpec" : { "column" : "timestamp", "format" : "auto" },
              "dimensionsSpec" : {
                "dimensions" : ["dim1", "dim2", "dim3"]
              }
            }
          }
        },
        "tuningConfig" : {
          "type" : "realtime",
          "windowPeriod" : "PT10M",
          "intermediatePersistPeriod" : "PT10M",
          "maxRowsInMemory" : "100000"
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1"
      }
    }
  ],
  "properties" : {
    "zookeeper.connect" : "localhost"
  }
}

Tranquility log after posting:

2019-03-25 00:00:28,603 [main] INFO org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost sessionTimeout=20000 watcher=org.apache.curator.ConnectionState@415a3f6a

2019-03-25 00:00:28,628 [main] INFO c.m.c.l.Lifecycle$AnnotationBasedHandler - Invoking start method[public void com.metamx.common.scala.net.curator.Disco.start()] on object[com.metamx.common.scala.net.curator.Disco@22d8f4ed].

2019-03-25 00:00:28,632 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)

2019-03-25 00:00:28,642 [main] INFO c.m.c.l.Lifecycle$AnnotationBasedHandler - Invoking start method[public void com.metamx.tranquility.tranquilizer.Tranquilizer.start()] on object[Tranquilizer(com.metamx.tranquility.beam.TransformingBeam@315c081)].

2019-03-25 00:00:28,650 [main] INFO org.eclipse.jetty.server.Server - jetty-9.2.5.v20141112

2019-03-25 00:00:28,656 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session

2019-03-25 00:00:28,666 [main-SendThread(localhost:2181)] INFO org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x10000075d97017b, negotiated timeout = 20000

2019-03-25 00:00:28,684 [main-EventThread] INFO o.a.c.f.state.ConnectionStateManager - State change: CONNECTED

2019-03-25 00:00:28,742 [main] INFO o.e.jetty.server.ServerConnector - Started ServerConnector@20b8c9d3{HTTP/1.1}{0.0.0.0:8200}

2019-03-25 00:00:28,743 [main] INFO org.eclipse.jetty.server.Server - Started @6627ms

2019-03-25 00:00:55,977 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.tranquility.beam.ClusteredBeam - Creating new merged beam for identifier[druid:overlord/foo] timestamp[2019-03-25T00:00:00.000Z] (target = 1, actual = 0)

2019-03-25 00:00:56,220 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO com.metamx.common.scala.control$ - Creating druid indexing task (service = druid:overlord): {

  "type" : "index_realtime",
  "id" : "index_realtime_foo_2019-03-25T00:00:00.000Z_0_0",
  "resource" : {
    "availabilityGroup" : "foo-2019-03-25T00:00:00.000Z-0000",
    "requiredCapacity" : 1
  },
  "spec" : {
    "dataSchema" : {
      "dataSource" : "foo",
      "parser" : {
        "type" : "map",
        "parseSpec" : {
          "format" : "json",
          "timestampSpec" : {
            "column" : "timestamp",
            "format" : "millis",
            "missingValue" : null
          },
          "dimensionsSpec" : {
            "dimensions" : [ "dim1", "dim2", "dim3" ],
            "spatialDimensions" : [ ]
          }
        }
      },
      "metricsSpec" : [ {
        "type" : "count",
        "name" : "impressions"
      }, {
        "type" : "doubleSum",
        "name" : "cpi",
        "fieldName" : "cpi"
      } ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "HOUR",
        "queryGranularity" : {
          "type" : "none"
        }
      }
    },
    "ioConfig" : {
      "type" : "realtime",
      "plumber" : null,
      "firehose" : {
        "type" : "clipped",
        "interval" : "2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z",
        "delegate" : {
          "type" : "timed",
          "shutoffTime" : "2019-03-25T01:15:00.000Z",
          "delegate" : {
            "type" : "receiver",
            "serviceName" : "firehose:druid:overlord:foo-000-0000-0000",
            "bufferSize" : 100000
          }
        }
      }
    },
    "tuningConfig" : {
      "shardSpec" : {
        "type" : "linear",
        "partitionNum" : 0
      },
      "rejectionPolicy" : {
        "type" : "none"
      },
      "buildV9Directly" : false,
      "maxPendingPersists" : 0,
      "intermediatePersistPeriod" : "PT10M",
      "windowPeriod" : "PT10M",
      "type" : "realtime",
      "maxRowsInMemory" : "100000"
    }
  }
}

2019-03-25 00:00:56,706 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.c.s.net.finagle.DiscoResolver - Updating instances for service[druid:overlord] to Set(ServiceInstance{name=‘druid:overlord’, id=‘229fed8e-ba74-431c-a94b-288f708ae81b’, address=‘172.31.37.177’, port=8090, sslPort=-1, payload=null, registrationTimeUTC=1553470881278, serviceType=DYNAMIC, uriSpec=null})

2019-03-25 00:00:56,876 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.t.finagle.FinagleRegistry - Created client for service: disco!druid:overlord

2019-03-25 00:00:57,375 [finagle/netty3-1] INFO com.metamx.common.scala.control$ - Created druid indexing task with id: index_realtime_foo_2019-03-25T00:00:00.000Z_0_0 (service = druid:overlord)

2019-03-25 00:00:57,403 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] WARN org.apache.curator.utils.ZKPaths - The version of ZooKeeper being used doesn’t support Container nodes. CreateMode.PERSISTENT will be used instead.

2019-03-25 00:00:57,409 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.c.s.net.finagle.DiscoResolver - Updating instances for service[firehose:druid:overlord:foo-000-0000-0000] to Set()

2019-03-25 00:00:57,410 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.t.finagle.FinagleRegistry - Created client for service: disco!firehose:druid:overlord:foo-000-0000-0000

2019-03-25 00:00:57,438 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.tranquility.beam.ClusteredBeam - Created beam: {“interval”:“2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z”,“partition”:0,“tasks”:[{“id”:“index_realtime_foo_2019-03-25T00:00:00.000Z_0_0”,“firehoseId”:“foo-000-0000-0000”}],“timestamp”:“2019-03-25T00:00:00.000Z”}

2019-03-25 00:00:57,440 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.metamx.tranquility.druid.DruidBeam - Closing Druid beam for datasource[foo] interval[2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z] (tasks = index_realtime_foo_2019-03-25T00:00:00.000Z_0_0)

2019-03-25 00:00:57,441 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.t.finagle.FinagleRegistry - Closing client for service: disco!firehose:druid:overlord:foo-000-0000-0000

2019-03-25 00:00:57,457 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.c.s.net.finagle.DiscoResolver - No longer monitoring service[firehose:druid:overlord:foo-000-0000-0000]

2019-03-25 00:00:57,479 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.tranquility.beam.ClusteredBeam - Writing new beam data to[/tranquility/beams/druid:overlord/foo/data]: {“latestTime”:“2019-03-25T00:00:00.000Z”,“latestCloseTime”:“2019-03-24T23:00:00.000Z”,“beams”:{“2019-03-25T00:00:00.000Z”:[{“interval”:“2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z”,“partition”:0,“tasks”:[{“id”:“index_realtime_foo_2019-03-25T00:00:00.000Z_0_0”,“firehoseId”:“foo-000-0000-0000”}],“timestamp”:“2019-03-25T00:00:00.000Z”}]}}

2019-03-25 00:00:57,493 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.tranquility.beam.ClusteredBeam - Adding beams for identifier[druid:overlord/foo] timestamp[2019-03-25T00:00:00.000Z]: List(Map(interval -> 2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z, partition -> 0, tasks -> ArraySeq(Map(id -> index_realtime_foo_2019-03-25T00:00:00.000Z_0_0, firehoseId -> foo-000-0000-0000)), timestamp -> 2019-03-25T00:00:00.000Z))

2019-03-25 00:00:57,531 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.c.s.net.finagle.DiscoResolver - Updating instances for service[firehose:druid:overlord:foo-000-0000-0000] to Set()

2019-03-25 00:00:57,532 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.t.finagle.FinagleRegistry - Created client for service: disco!firehose:druid:overlord:foo-000-0000-0000

2019-03-25 00:01:10,791 [finagle/netty3-1] INFO c.m.tranquility.druid.TaskClient - Task index_realtime_foo_2019-03-25T00:00:00.000Z_0_0 status changed from TaskRunning -> TaskFailed

2019-03-25 00:01:10,795 [finagle/netty3-1] WARN c.m.tranquility.druid.TaskClient - Emitting alert: [anomaly] Loss of Druid redundancy: foo

{
  "dataSource" : "foo",
  "task" : "index_realtime_foo_2019-03-25T00:00:00.000Z_0_0",
  "status" : "TaskFailed"
}

2019-03-25 00:01:10,816 [finagle/netty3-1] INFO c.metamx.emitter.core.LoggingEmitter - Event [{“feed”:“alerts”,“timestamp”:“2019-03-25T00:01:10.811Z”,“service”:“tranquility”,“host”:“localhost”,“severity”:“anomaly”,“description”:“Loss of Druid redundancy: foo”,“data”:{“dataSource”:“foo”,“task”:“index_realtime_foo_2019-03-25T00:00:00.000Z_0_0”,“status”:“TaskFailed”}}]

2019-03-25 00:01:10,825 [finagle/netty3-1] WARN c.m.tranquility.beam.ClusteredBeam - Emitting alert: [anomaly] Beam defunct: druid:overlord/foo

{
  "eventCount" : 1,
  "timestamp" : "2019-03-25T00:00:00.000Z",
  "beam" : "MergingPartitioningBeam(DruidBeam(interval = 2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z, partition = 0, tasks = [index_realtime_foo_2019-03-25T00:00:00.000Z_0_0/foo-000-0000-0000]))"
}

com.metamx.tranquility.beam.DefunctBeamException: Tasks are all gone: index_realtime_foo_2019-03-25T00:00:00.000Z_0_0

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6$$anonfun$apply$6.apply(DruidBeam.scala:115) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6$$anonfun$apply$6.apply(DruidBeam.scala:115) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at scala.Option.getOrElse(Option.scala:121) ~[org.scala-lang.scala-library-2.11.7.jar:na]

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6.apply(DruidBeam.scala:112) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6.apply(DruidBeam.scala:109) ~[io.druid.tranquility-core-0.8.2.jar:0.8.2]

at com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:950) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Try$.apply(Try.scala:13) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Future$.apply(Future.scala:97) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:950) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:949) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:112) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise$Transformer.k(Promise.scala:112) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise$Transformer.apply(Promise.scala:122) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise$Transformer.apply(Promise.scala:103) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise$$anon$1.run(Promise.scala:366) ~[com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise.runq(Promise.scala:350) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise.updateIfEmpty(Promise.scala:721) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise.update(Promise.scala:694) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.util.Promise.setValue(Promise.scala:670) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:111) [com.twitter.util-core_2.11-6.30.0.jar:6.30.0]

at com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55) [com.twitter.finagle-core_2.11-6.31.0.jar:6.31.0]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:194) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142) [io.netty.netty-3.10.5.Final.jar:na]

at com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:78) [com.twitter.finagle-core_2.11-6.31.0.jar:6.31.0]

at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142) [io.netty.netty-3.10.5.Final.jar:na]

at com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35) [com.twitter.finagle-core_2.11-6.31.0.jar:6.31.0]

at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [io.netty.netty-3.10.5.Final.jar:na]

at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [io.netty.netty-3.10.5.Final.jar:na]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_191]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_191]

at java.lang.Thread.run(Thread.java:748) [na:1.8.0_191]

2019-03-25 00:01:10,830 [finagle/netty3-1] INFO c.metamx.emitter.core.LoggingEmitter - Event [{“feed”:“alerts”,“timestamp”:“2019-03-25T00:01:10.829Z”,“service”:“tranquility”,“host”:“localhost”,“severity”:“anomaly”,“description”:“Beam defunct: druid:overlord/foo”,“data”:{“exceptionType”:“com.metamx.tranquility.beam.DefunctBeamException”,“exceptionStackTrace”:“com.metamx.tranquility.beam.DefunctBeamException: Tasks are all gone: index_realtime_foo_2019-03-25T00:00:00.000Z_0_0\n\tat com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6$$anonfun$apply$6.apply(DruidBeam.scala:115)\n\tat com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6$$anonfun$apply$6.apply(DruidBeam.scala:115)\n\tat scala.Option.getOrElse(Option.scala:121)\n\tat com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6.apply(DruidBeam.scala:112)\n\tat com.metamx.tranquility.druid.DruidBeam$$anonfun$sendAll$2$$anonfun$6.apply(DruidBeam.scala:109)\n\tat com.twitter.util.Future$$anonfun$map$1$$anonfun$apply$6.apply(Future.scala:950)\n\tat com.twitter.util.Try$.apply(Try.scala:13)\n\tat com.twitter.util.Future$.apply(Future.scala:97)\n\tat com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:950)\n\tat com.twitter.util.Future$$anonfun$map$1.apply(Future.scala:949)\n\tat com.twitter.util.Promise$Transformer.liftedTree1$1(Promise.scala:112)\n\tat com.twitter.util.Promise$Transformer.k(Promise.scala:112)\n\tat com.twitter.util.Promise$Transformer.apply(Promise.scala:122)\n\tat com.twitter.util.Promise$Transformer.apply(Promise.scala:103)\n\tat com.twitter.util.Promise$$anon$1.run(Promise.scala:366)\n\tat com.twitter.concurrent.LocalScheduler$Activation.run(Scheduler.scala:178)\n\tat com.twitter.concurrent.LocalScheduler$Activation.submit(Scheduler.scala:136)\n\tat com.twitter.concurrent.LocalScheduler.submit(Scheduler.scala:207)\n\tat com.twitter.concurrent.Scheduler$.submit(Scheduler.scala:92)\n\tat com.twitter.util.Promise.runq(Promise.scala:350)\n\tat com.twitter.util.Promise.updateIfEmpty(Promise.scala:721)\n\tat com.twitter.util.Promise.update(Promise.scala:694)\n\tat com.twitter.util.Promise.setValue(Promise.scala:670)\n\tat com.twitter.concurrent.AsyncQueue.offer(AsyncQueue.scala:111)\n\tat com.twitter.finagle.netty3.transport.ChannelTransport.handleUpstream(ChannelTransport.scala:55)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n\tat org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:194)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat 
org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)\n\tat org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)\n\tat org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)\n\tat org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)\n\tat org.jboss.netty.handler.codec.http.HttpClientCodec.handleUpstream(HttpClientCodec.java:92)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)\n\tat com.twitter.finagle.netty3.channel.ChannelStatsHandler.messageReceived(ChannelStatsHandler.scala:78)\n\tat org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)\n\tat org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)\n\tat com.twitter.finagle.netty3.channel.ChannelRequestStatsHandler.messageReceived(ChannelRequestStatsHandler.scala:35)\n\tat org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)\n\tat org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)\n\tat org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)\n\tat org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)\n\tat org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)\n\tat org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)\n\tat org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)\n\tat org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\n”,“timestamp”:“2019-03-25T00:00:00.000Z”,“eventCount”:1,“beam”:“MergingPartitioningBeam(DruidBeam(interval = 2019-03-25T00:00:00.000Z/2019-03-25T01:00:00.000Z, partition = 0, tasks = [index_realtime_foo_2019-03-25T00:00:00.000Z_0_0/foo-000-0000-0000]))”,“exceptionMessage”:“Tasks are all gone: index_realtime_foo_2019-03-25T00:00:00.000Z_0_0”}}]

2019-03-25 00:01:10,845 [ClusteredBeam-ZkFuturePool-853dd197-d5f5-4061-8e08-dfd66f41e603] INFO c.m.tranquility.beam.ClusteredBeam - Writing new beam data to[/tranquility/beams/druid:overlord/foo/data]: {"latestTime":"2019-03-25T00:00:00.000Z","latestCloseTime":"2019-03-25T00:00:00.000Z","beams":{}}

Using Druid 8. We had a process that started and stopped working randomly under "load". I noticed this and tried to adjust some settings, but the error messages stayed the same regardless of what I changed in the ingestion.json I was using, so I concluded that the settings were probably not applied until the entire cluster was restarted. In any case, a co-worker was fairly convinced that "real time must work because it is an advertised feature". I told him, "All I ever see is people posting to the ML, and rarely does anyone post back that they got it working. #changemymind"

Druid, are you there? Is Tranquility even working? I never had an issue when using Kafka with the real-time node, yet you say it's now deprecated. Is Stack Overflow a better place to post? Please advise.

I mean, I see the below in the MiddleManager logs:

2019-03-26T08:03:43,286 INFO [WorkerTaskManager-CompletedTasksCleaner] org.apache.druid.indexing.worker.WorkerTaskManager - Deleting completed task[index_realtime_foo_2019-03-26T08:00:00.000Z_0_0] information, overlord task status[FAILED].

Again, I get no errors, only log lines related to timestamps, and I submit data using UTC:

2019-03-26 08:00:55,752 [ClusteredBeam-ZkFuturePool-7b917816-2e4e-4689-aca8-c5b5c4f3c2ea] INFO c.m.tranquility.beam.ClusteredBeam - Writing new beam data to[/tranquility/beams/druid:overlord/foo/data]: {"latestTime":"2019-03-26T08:00:00.000Z","latestCloseTime":"2019-03-26T08:00:00.000Z","beams":{}}

2019-03-26 08:04:28,891 [ClusteredBeam-ZkFuturePool-7b917816-2e4e-4689-aca8-c5b5c4f3c2ea] INFO c.m.tranquility.beam.ClusteredBeam - Global latestCloseTime[2019-03-26T08:00:00.000Z] for identifier[druid:overlord/foo] has moved past timestamp[2019-03-26T08:00:00.000Z], not creating merged beam

2019-03-26 08:04:28,895 [ClusteredBeam-ZkFuturePool-7b917816-2e4e-4689-aca8-c5b5c4f3c2ea] INFO c.m.tranquility.beam.ClusteredBeam - Turns out we decided not to actually make beams for identifier[druid:overlord/foo] timestamp[2019-03-26T08:00:00.000Z]. Returning None.

David, if you are comfortable using Kafka to ingest into Druid, have you considered using Druid's Kafka Indexing Service (http://druid.io/docs/latest/development/extensions-core/kafka-ingestion.html)? I think most folks have migrated to this method of real-time ingestion for Druid.
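For what it's worth, a minimal supervisor spec for the "foo" datasource, submitted much like the events were POSTed above, might look roughly like the sketch below. This is only a sketch: OVERLORD_HOST, KAFKA_BROKER, and the topic name "foo" are placeholders/assumptions, and the schema simply mirrors the Tranquility config above rather than anything verified against this cluster.

import json
import requests

# Sketch of a minimal Kafka supervisor spec for the "foo" datasource.
# KAFKA_BROKER:9092 and the topic name "foo" are placeholder assumptions.
supervisor_spec = {
    "type": "kafka",
    "dataSchema": {
        "dataSource": "foo",
        "parser": {
            "type": "string",
            "parseSpec": {
                "format": "json",
                "timestampSpec": {"column": "timestamp", "format": "auto"},
                "dimensionsSpec": {"dimensions": ["dim1", "dim2", "dim3"]},
            },
        },
        "metricsSpec": [
            {"type": "count", "name": "impressions"},
            {"type": "doubleSum", "fieldName": "cpi", "name": "cpi"},
        ],
        "granularitySpec": {
            "type": "uniform",
            "segmentGranularity": "hour",
            "queryGranularity": "none",
        },
    },
    "tuningConfig": {"type": "kafka", "maxRowsInMemory": 100000},
    "ioConfig": {
        "topic": "foo",
        "consumerProperties": {"bootstrap.servers": "KAFKA_BROKER:9092"},
        "taskCount": 1,
        "replicas": 1,
        "taskDuration": "PT1H",
    },
}

# Submit the supervisor spec to the Overlord (port 8090 per the conf above).
r = requests.post(
    "http://OVERLORD_HOST:8090/druid/indexer/v1/supervisor",
    headers={"Content-Type": "application/json"},
    data=json.dumps(supervisor_spec),
)
print(r.text)

Once accepted, the Overlord's supervisor creates and manages the Kafka indexing tasks itself.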
–T