My real-time task is always running; once I kill it, the data can't be queried

I'm new to Druid. I ingest data from Kafka via Tranquility, and the granularitySpec in my JSON config is:
"granularitySpec" : {
  "type" : "uniform",
  "segmentGranularity" : "HOUR",
  "queryGranularity" : "none"
}

At first, the first task started; after one hour it was still running, and another new task had started.

Will more and more tasks be started as time passes? What is the way to solve this?

The coordinator console shows the task status as:

{"task":"index_realtime_market-forexdata01_2017-08-31T06:00:00.000Z_0_0","status":{"id":"index_realtime_market-forexdata01_2017-08-31T06:00:00.000Z_0_0","status":"RUNNING","duration":-1}}

On Thursday, August 31, 2017 at 4:28:53 PM UTC+8, afa 阿发 wrote:

The tasks should eventually exit on their own, although there is a period where two copies will be running. See https://github.com/druid-io/tranquility/blob/master/docs/trouble.md and https://github.com/druid-io/tranquility/blob/master/docs/overview.md for details.

Hi Gian,
Actually, the tasks have already been alive for two days and they still hold the RUNNING status. I just don't know when they will stop, or whether there is something wrong with my configuration.

Here is my configuration, please take a look:

{
  "dataSources" : {
    "market-forexdata01" : {
      "spec" : {
        "dataSchema" : {
          "dataSource" : "market-forexdata01",
          "parser" : {
            "type" : "string",
            "parseSpec" : {
              "timestampSpec" : {
                "column" : "updateTime",
                "format" : "millis"
              },
              "dimensionsSpec" : {
                "dimensions" : ["id", "prodCode", "prodName", "fOpenReserve", "fcloseReserve", "lastClosePx", "openPx", "highPx", "lowPx", "lastPx", "bussinessAmount", "bussinessTotalPx", "pxChange", "pxChangeRate", "preclosePx"]
              },
              "format" : "json"
            }
          },
          "granularitySpec" : {
            "type" : "uniform",
            "segmentGranularity" : "HOUR",
            "queryGranularity" : "none"
          },
          "metricsSpec" : [
            {
              "type" : "count",
              "name" : "count"
            },
            {
              "name" : "week52High",
              "type" : "doubleMax",
              "fieldName" : "highPx"
            },
            {
              "name" : "week52Low",
              "type" : "doubleMin",
              "fieldName" : "lowPx"
            }
          ]
        },
        "ioConfig" : {
          "type" : "realtime"
        },
        "tuningConfig" : {
          "type" : "realtime",
          "maxRowsInMemory" : "100000",
          "intermediatePersistPeriod" : "PT20M",
          "windowPeriod" : "PT10M"
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1",
        "topicPattern" : "forexdata01"
      }
    }
  },
  "properties" : {
    "zookeeper.connect" : "192.168.1.38:2181",
    "druid.discovery.curator.path" : "/druid/discovery",
    "druid.selectors.indexing.serviceName" : "druid/overlord",
    "commit.periodMillis" : "15000",
    "consumer.numThreads" : "2",
    "kafka.zookeeper.connect" : "192.168.1.38:2181",
    "kafka.group.id" : "tranquility-kafka",
    "reportDropsAsExceptions" : "false"
  }
}

On Friday, September 1, 2017 at 2:41:34 AM UTC+8, Gian Merlino wrote:

The config looks ok at first glance. It might be a problem with handoff. Check out the troubleshooting tips on https://github.com/druid-io/tranquility/blob/master/docs/trouble.md and see if those help.

I have the same issue… Did you solve this problem?

The normal, expected use cases have the following overall constraints: intermediatePersistPeriod ≤ windowPeriod < segmentGranularity and queryGranularity ≤ segmentGranularity
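For what it's worth, the tuningConfig posted above sets intermediatePersistPeriod to PT20M while windowPeriod is PT10M, which breaks the first constraint (intermediatePersistPeriod ≤ windowPeriod). A minimal sketch of a tuningConfig that would satisfy all of the constraints with HOUR segmentGranularity could look like the following; the specific period values are only illustrative, not required settings:

"tuningConfig" : {
  "type" : "realtime",
  "maxRowsInMemory" : "100000",
  "intermediatePersistPeriod" : "PT10M",
  "windowPeriod" : "PT10M"
}

Here intermediatePersistPeriod (PT10M) ≤ windowPeriod (PT10M) < segmentGranularity (HOUR), and queryGranularity ("none") ≤ segmentGranularity, so the expected-use-case constraints above are all met.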

On Thursday, August 31, 2017 at 5:28:53 PM UTC+9, afa 阿发 wrote: