When taskDuration times out, it seems Druid cleans the old data and ingests the Kafka data again?

Hi all, my Kafka supervisor config is below:
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "sdp-to-aps-0930",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": {
          "column": "created",
          "format": "auto"
        },
        "dimensionsSpec": {
          "dimensions": [
            "ITEM_NUM",
            "is_effective",
            "GOODS_CHANNEL",
            "GOODS_NUM",
            "GOODS_NAME"
          ]
        }
      }
    },
    "metricsSpec": [],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "MONTH",
      "queryGranularity": {
        "type": "none"
      },
      "rollup": false,
      "intervals": null
    },
    "transformSpec": {
      "filter": null,
      "transforms": []
    }
  },
  "ioConfig": {
    "topic": "erp-join-pi-0228",
    "replicas": 1,
    "taskCount": 2,
    "taskDuration": "PT1800S",
    "consumerProperties": {
      "bootstrap.servers": "ip1,ip2"
    },
    "startDelay": "PT5S",
    "period": "PT30S",
    "useEarliestOffset": true,
    "completionTimeout": "PT1800S",
    "lateMessageRejectionPeriod": null,
    "earlyMessageRejectionPeriod": null,
    "skipOffsetGaps": false
  },
  "context": null
}
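For completeness, this is roughly how I submit the spec to the Overlord. A minimal sketch using Python and `requests`; the Overlord host/port and the spec file name are placeholders for my environment:

```python
# Minimal sketch: submit the Kafka supervisor spec shown above to the Overlord.
# "overlord-host:8090" and the file name are placeholders, not my real values.
import json
import requests

with open("kafka-supervisor-sdp-to-aps-0930.json") as f:  # the spec shown above
    spec = json.load(f)

resp = requests.post(
    "http://overlord-host:8090/druid/indexer/v1/supervisor",
    json=spec,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"id": "sdp-to-aps-0930"}
```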

**When the task starts and taskDuration times out, a new task is created, and then the TaskLock is deleted, as shown in the log:**

2019-02-28T20:34:57,599 INFO [Curator-PathChildrenCache-3] io.druid.indexing.overlord.MetadataTaskStorage - Deleting TaskLock with id[901258]: TaskLock{type=EXCLUSIVE, groupId=index_kafka_sdp-to-aps-0930, dataSource=sdp-to-aps-0930, interval=2013-08-01T00:00:00.000Z/2013-09-01T00:00:00.000Z, version=2019-02-28T12:04:51.011Z, priority=75, revoked=false}

From then on, I query Druid and get nothing; then all the tasks are deleted, and then Druid finds existing pending segments??

Location{host='null', port=-1, tlsPort=-1}]

2019-02-28T20:34:59,033 INFO [Curator-PathChildrenCache-3] io.druid.indexing.overlord.RemoteTaskRunner - Worker[IBG-Sales-Druid-cwP02:8091] wrote RUNNING status for task [index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop] on [TaskLocation{host='IBG-Sales-Druid-cwP02', port=8100, tlsPort=-1}]

2019-02-28T20:34:59,033 INFO [Curator-PathChildrenCache-3] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop] location changed to [TaskLocation{host='IBG-Sales-Druid-cwP02', port=8100, tlsPort=-1}].

2019-02-28T20:35:03,950 INFO [qtp440902120-79] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop]: SegmentAllocateAction{dataSource='sdp-to-aps-0930', timestamp=2014-08-01T10:55:39.000Z, queryGranularity=NoneGranularity, preferredSegmentGranularity={type=period, period=P1M, timeZone=UTC, origin=null}, sequenceName='index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_0', previousSegmentId='null', skipSegmentLineageCheck='true'}

2019-02-28T20:35:03,953 INFO [qtp440902120-79] io.druid.indexing.overlord.TaskLockbox - Added task[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop] to TaskLock[index_kafka_sdp-to-aps-0930]

2019-02-28T20:35:03,953 INFO [qtp440902120-79] io.druid.indexing.overlord.MetadataTaskStorage - Adding lock on interval[2014-08-01T00:00:00.000Z/2014-09-01T00:00:00.000Z] version[2019-02-28T12:35:03.953Z] for task: index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop

2019-02-28T20:35:04,101 INFO [qtp440902120-79] io.druid.metadata.IndexerSQLMetadataStorageCoordinator - Found existing pending segment [sdp-to-aps-0930_2014-08-01T00:00:00.000Z_2014-09-01T00:00:00.000Z_2018-09-30T02:24:41.760Z_114] for sequence[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_0] (previous = ) in DB

2019-02-28T20:35:04,309 INFO [qtp440902120-101] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop]: SegmentAllocateAction{dataSource='sdp-to-aps-0930', timestamp=2014-09-01T09:15:03.000Z, queryGranularity=NoneGranularity, preferredSegmentGranularity={type=period, period=P1M, timeZone=UTC, origin=null}, sequenceName='index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_0', previousSegmentId='sdp-to-aps-0930_2014-08-01T00:00:00.000Z_2014-09-01T00:00:00.000Z_2018-09-30T02:24:41.760Z_114', skipSegmentLineageCheck='true'}

2019-02-28T20:35:04,311 INFO [qtp440902120-101] io.druid.indexing.overlord.TaskLockbox - Added task[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop] to TaskLock[index_kafka_sdp-to-aps-0930]

2019-02-28T20:35:04,311 INFO [qtp440902120-101] io.druid.indexing.overlord.MetadataTaskStorage - Adding lock on interval[2014-09-01T00:00:00.000Z/2014-10-01T00:00:00.000Z] version[2019-02-28T12:35:04.311Z] for task: index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop

2019-02-28T20:35:04,443 INFO [qtp440902120-101] io.druid.metadata.IndexerSQLMetadataStorageCoordinator - Found existing pending segment [sdp-to-aps-0930_2014-09-01T00:00:00.000Z_2014-10-01T00:00:00.000Z_2018-09-30T02:24:41.973Z_116] for sequence[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_0] (previous = [sdp-to-aps-0930_2014-08-01T00:00:00.000Z_2014-09-01T00:00:00.000Z_2018-09-30T02:24:41.760Z_114]) in DB

2019-02-28T20:35:04,568 INFO [qtp440902120-103] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop]: SegmentAllocateAction{dataSource='sdp-to-aps-0930', timestamp=2014-10-08T02:26:38.000Z, queryGranularity=NoneGranularity, preferredSegmentGranularity={type=period, period=P1M, timeZone=UTC, origin=null}, sequenceName='index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_0', previousSegmentId='sdp-to-aps-0930_2014-09-01T00:00:00.000Z_2014-10-01T00:00:00.000Z_2018-09-30T02:24:41.973Z_116', skipSegmentLineageCheck='true'}

2019-02-28T20:35:04,571 INFO [qtp440902120-103] io.druid.indexing.overlord.TaskLockbox - Added task[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop] to TaskLock[index_kafka_sdp-to-aps-0930]

2019-02-28T20:35:04,571 INFO [qtp440902120-103] io.druid.indexing.overlord.MetadataTaskStorage - Adding lock on interval[2014-10-01T00:00:00.000Z/2014-11-01T00:00:00.000Z] version[2019-02-28T12:35:04.571Z] for task: index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_kgljgcop

2019-02-28T20:35:04,704 INFO [qtp440902120-103] io.druid.metadata.IndexerSQLMetadataStorageCoordinator - Found existing pending segment [sdp-to-aps-0930_2014-10-01T00:00:00.000Z_2014-11-01T00:00:00.000Z_2018-09-30T02:24:42.570Z_115] for sequence[index_kafka_sdp-to-aps-0930_d1c4b2dbc5ca97f_0] (previous = [sdp-to-aps-0930_2014-09-01T00:00:00.000Z_2014-10-01T00:00:00.000Z_2018-09-30T02:24:41.973Z_116]) in DB

Then I find data being ingested into Druid again. Every time taskDuration times out, Druid does this again. Is this a Kafka supervisor bug?
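In case it helps, this is roughly how I check what the supervisor is doing after each taskDuration rollover. A rough sketch; the Overlord address is a placeholder, and the metadata table name assumes the default `druid_` prefix:

```python
# Rough sketch of the checks I run after each taskDuration rollover.
# "overlord-host:8090" is a placeholder for the actual Overlord address.
import requests

# 1) Supervisor status: the payload should include the running tasks and their
#    current Kafka offsets, so I can compare whether the new tasks resume from
#    where the old ones stopped or fall back to the earliest offsets.
status = requests.get(
    "http://overlord-host:8090/druid/indexer/v1/supervisor/sdp-to-aps-0930/status"
).json()
print(status["payload"])

# 2) Pending segments in the metadata store (default "druid_" table prefix);
#    this is where the "Found existing pending segment" log entries come from.
#    The segment id embeds the interval, as seen in the log above.
#
#    SELECT id FROM druid_pendingSegments WHERE dataSource = 'sdp-to-aps-0930';
```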

If anyone needs the complete log file, please let me know in any way. Thanks in advance!