All tasks are killed right after they are created

No matter what kind of task (index_kafka, compact, etc.), it is killed shortly after creation.
Overlord logs:

```
2019-06-20T07:15:10,919 WARN [KafkaSupervisor-alpha_station_txack_record-Worker-0] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Clearing task group [0] information as no valid tasks left the group
2019-06-20T07:15:25,124 ERROR [KafkaSupervisor-alpha_station_txack_record-Worker-0] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Problem while getting checkpoints for task [index_kafka_alpha_station_txack_record_2c608478981da35_cfcibngo], killing the task
java.util.concurrent.ExecutionException: org.apache.druid.indexing.common.IndexTaskClient$TaskNotRunnableException: Aborting request because task [index_kafka_alpha_station_txack_record_2c608478981da35_cfcibngo] is not runnable
	at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[?:1.8.0_212]
	at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[?:1.8.0_212]
	at org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor.verifyAndMergeCheckpoints(SeekableStreamSupervisor.java:1444) ~[druid-indexing-service-0.14.0-incubating.jar:0.14.0-incubating]
	at org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor.lambda$verifyAndMergeCheckpoints$16(SeekableStreamSupervisor.java:1402) ~[druid-indexing-service-0.14.0-incubating.jar:0.14.0-incubating]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_212]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_212]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
Caused by: org.apache.druid.indexing.common.IndexTaskClient$TaskNotRunnableException: Aborting request because task [index_kafka_alpha_station_txack_record_2c608478981da35_cfcibngo] is not runnable
	at org.apache.druid.indexing.common.IndexTaskClient.submitRequest(IndexTaskClient.java:322) ~[druid-indexing-service-0.14.0-incubating.jar:0.14.0-incubating]
	at org.apache.druid.indexing.common.IndexTaskClient.submitRequestWithEmptyContent(IndexTaskClient.java:220) ~[druid-indexing-service-0.14.0-incubating.jar:0.14.0-incubating]
	at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskClient.getCheckpoints(SeekableStreamIndexTaskClient.java:253) ~[druid-indexing-service-0.14.0-incubating.jar:0.14.0-incubating]
	at org.apache.druid.indexing.seekablestream.SeekableStreamIndexTaskClient.lambda$getCheckpointsAsync$0(SeekableStreamIndexTaskClient.java:276) ~[druid-indexing-service-0.14.0-incubating.jar:0.14.0-incubating]
	... 4 more
2019-06-20T07:15:25,132 WARN [KafkaSupervisor-alpha_station_txack_record-Worker-0] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Clearing task group [0] information as no valid tasks left the group
2019-06-20T07:15:25,147 WARN [KafkaSupervisor-alpha_station_txack_record-Reporting-0] org.apache.druid.indexing.kafka.supervisor.KafkaSupervisor - Lag metric: Kafka partitions [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] do not match task partitions
2019-06-20T07:15:25,228 WARN [KafkaSupervisor-alpha_station_txack_record-Worker-0] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Ignoring task [index_kafka_alpha_station_txack_record_2c608478981da35_koghokai], as probably it is not started running yet
2019-06-20T07:15:31,638 WARN [KafkaSupervisor-alpha_station_txack_record] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Task [index_kafka_alpha_station_txack_record_2c608478981da35_koghokai] failed to return start time, killing task
2019-06-20T07:15:31,648 WARN [KafkaSupervisor-alpha_station_txack_record] org.apache.druid.indexing.seekablestream.supervisor.SeekableStreamSupervisor - Task [index_kafka_alpha_station_txack_record_2c608478981da35_koghokai] failed to return start time, killing task
```

And here is my supervisor spec (note: `metricsSpec` is left empty since rollup is disabled):

```json
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "alpha_station_txack",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "timestampSpec": {
          "column": "@timestamp",
          "format": "iso"
        },
        "dimensionsSpec": {
          "dimensions": [
            "msgId",
            "gwEUI",
            "devEUI",
            "downTx"
          ]
        }
      }
    },
    "metricsSpec": [],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "DAY",
      "queryGranularity": "NONE",
      "rollup": false
    }
  },
  "tuningConfig": {
    "type": "kafka",
    "logParseExceptions": true,
    "maxRowsInMemory": 500000,
    "intermediateHandoffPeriod": "P1D"
  },
  "ioConfig": {
    "topic": "alpha-station-txack",
    "replicas": 1,
    "taskDuration": "PT12H",
    "completionTimeout": "PT30M",
    "consumerProperties": {
      "bootstrap.servers": "192.168.102.18:9092",
      "group.id": "alpha-druid"
    }
  }
}
```

On Thursday, June 20, 2019 at 3:18:15 PM UTC+8, 林落 wrote:

Hi,
Let’s take a step back.

Are you able to ingest at least the sample Wikipedia data into your Druid cluster?

Can you share your overlord and middleManager runtime.properties and jvm.config files?

Thank you.

–siva

Problem solved. I hadn't set the Peon's MaxDirectMemorySize.
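For anyone hitting the same issue, a minimal sketch of how this can be set in the middleManager's `runtime.properties` so that spawned Peons get a direct-memory limit. The sizes below are illustrative, not a recommendation; Druid generally expects `MaxDirectMemorySize` to be at least `druid.processing.buffer.sizeBytes * (druid.processing.numThreads + druid.processing.numMergeBuffers + 1)`:

```properties
# middleManager runtime.properties (sketch): JVM options passed to each Peon.
# Sizes are examples only; tune for your hardware.
# 2g direct memory comfortably covers 256MB * (2 threads + 2 merge buffers + 1) ≈ 1.28GB.
druid.indexer.runner.javaOptsArray=["-server","-Xms1g","-Xmx1g","-XX:MaxDirectMemorySize=2g","-Duser.timezone=UTC","-Dfile.encoding=UTF-8"]
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=256000000
druid.indexer.fork.property.druid.processing.numThreads=2
druid.indexer.fork.property.druid.processing.numMergeBuffers=2
```

If the direct-memory limit is too small, Peons fail during startup, which matches the supervisor's "failed to return start time, killing task" warnings above.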

On Saturday, June 22, 2019 at 3:33:52 AM UTC+8, Siva Mannem wrote: