Tranquility immediately drops all data it receives

I have a Druid server and a Kafka server, and I'm trying to pull data from Kafka using Tranquility.

Tranquility log:

```
2017-11-23 06:18:47,355 [KafkaConsumer-CommitThread] INFO c.m.tranquility.kafka.KafkaConsumer - Flushed {og.test={receivedCount=3, sentCount=0, droppedCount=3, unparseableCount=0}} pending messages in 3ms and committed offsets in 7ms.
2017-11-23 06:19:02,361 [KafkaConsumer-CommitThread] INFO c.m.tranquility.kafka.KafkaConsumer - Flushed {og.test={receivedCount=5, sentCount=0, droppedCount=5, unparseableCount=0}} pending messages in 0ms and committed offsets in 4ms.
2017-11-23 06:19:17,366 [KafkaConsumer-CommitThread] INFO c.m.tranquility.kafka.KafkaConsumer - Flushed {og.test={receivedCount=2, sentCount=0, droppedCount=2, unparseableCount=0}} pending messages in 1ms and committed offsets in 2ms.
```

Tranquility config JSON:

```json
{
  "dataSources" : [
    {
      "spec" : {
        "dataSchema" : {
          "granularitySpec" : {
            "queryGranularity" : "none",
            "type" : "uniform",
            "segmentGranularity" : "hour"
          },
          "dataSource" : "ogtest2",
          "parser" : {
            "type" : "string",
            "parseSpec" : {
              "timestampSpec" : {
                "format" : "auto",
                "column" : "time"
              },
              "format" : "json",
              "dimensionsSpec" : {
                "dimensions" : [
                  "user",
                  "url"
                ]
              }
            }
          },
          "metricsSpec" : [
            {
              "type" : "count",
              "name" : "count"
            },
            {
              "type" : "doubleSum",
              "name" : "added",
              "fieldName" : "latencyMs"
            }
          ]
        },
        "tuningConfig" : {
          "type" : "realtime",
          "intermediatePersistPeriod" : "PT10M",
          "windowPeriod" : "PT10M",
          "maxRowsInMemory" : 75000
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1",
        "topicPattern" : "og.test",
        "topicPattern.priority" : "1"
      }
    }
  ],
  "properties" : {
    "zookeeper.connect" : "10.107.95.193:2181",
    "zookeeper.timeout" : "PT20S",
    "druid.selectors.indexing.serviceName" : "druid/overlord",
    "druid.discovery.curator.path" : "/druid/discovery",
    "kafka.zookeeper.connect" : "10.107.95.193:2181",
    "kafka.group.id" : "tranquility-kafka",
    "consumer.numThreads" : "2",
    "commit.periodMillis" : "15000",
    "reportDropsAsExceptions" : "false"
  }
}
```

Data I sent:

```json
{"time": "2017-11-20T10:00:00Z", "url": "/foo/bar", "user": "alice", "latencyMs": 32}
```
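For anyone reproducing this, here is a minimal sketch of building a test event that matches the `parseSpec` above but stamped with the current time instead of a hard-coded one. The helper name `make_event` is my own; nothing here is Tranquility API:

```python
import json
from datetime import datetime, timezone

def make_event(user, url, latency_ms):
    """Build one JSON event matching the parseSpec above (time, url, user, latencyMs),
    stamped with the current UTC time in ISO-8601 format."""
    return json.dumps({
        "time": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "url": url,
        "user": user,
        "latencyMs": latency_ms,
    })

print(make_event("alice", "/foo/bar", 32))
```

You could pipe lines like this into `kafka-console-producer.sh --topic og.test` (with your own broker address) to feed the `og.test` topic.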

I didn't change the coordinator rules, so only the default rule is in place.

I solved it. There were two problems.

  1. My test data was too old, so Tranquility was simply dropping it.
    I changed the test data to use recent timestamps and the drops stopped.

I still wonder where the config that controls dropping old data is.

  2. My Kafka ZooKeeper and Druid ZooKeeper were on different servers, but I mixed up the two properties in my realtime.spec.

"zookeeper.connect" is for the Druid ZooKeeper, and "kafka.zookeeper.connect" is for the Kafka ZooKeeper.

I changed "zookeeper.connect" to my Druid ZooKeeper address and everything is working well now!
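For clarity, the corrected pairing of the two properties looks like this (the host names below are placeholders for my two ZooKeeper ensembles, not real addresses):

```json
"properties" : {
  "zookeeper.connect" : "druid-zk-host:2181",
  "kafka.zookeeper.connect" : "kafka-zk-host:2181"
}
```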

Thanks to these threads:

https://groups.google.com/forum/#!topic/druid-user/WEmfTF-AUXM

https://groups.google.com/forum/#!topic/druid-user/DKGsyq_HW2o

On Thursday, November 23, 2017 at 3:27:09 PM UTC+9, 오지연 wrote:

Hi,

The configuration responsible for this is "windowPeriod" : "PT10M". Any record whose time/timestamp field is more than 10 minutes old is thrown away.
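The acceptance rule above can be sketched as a simple check: keep an event only if its timestamp is within the window period of the current time. This is a simplified illustration (Tranquility's actual check also accounts for segment granularity), and the function name `within_window` is my own:

```python
from datetime import datetime, timedelta, timezone

def within_window(event_time, now, window=timedelta(minutes=10)):
    """Accept an event only if its timestamp is within windowPeriod of now."""
    return abs(now - event_time) <= window

# Roughly when the log above was written:
now = datetime(2017, 11, 23, 6, 19, tzinfo=timezone.utc)

old = datetime(2017, 11, 20, 10, 0, tzinfo=timezone.utc)  # the test event's timestamp
fresh = now - timedelta(minutes=3)                        # a recent timestamp

print(within_window(old, now))    # dropped: almost three days outside the window
print(within_window(fresh, now))  # sent: inside the 10-minute window
```

This is why the log shows `droppedCount` equal to `receivedCount`: every event carried the stale `2017-11-20T10:00:00Z` timestamp.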