Why doesn't rejectionPolicy switch to serverTime?

Hi guys!
I have a problem with a Kafka indexing task. I am using the push method to index data into Druid, specifically Tranquility Kafka (http://druid.io/docs/0.9.0/tutorials/tutorial-kafka.html and https://github.com/druid-io/tranquility/blob/master/docs/kafka.md).
Part of my Kafka spec file (kafka.json) is:

},
"tuningConfig": {
  "maxRowsInMemory": "75000",
  "type": "realtime",
  "windowPeriod": "PT1M",
  "rejectionPolicy": {
    "type": "serverTime"
  },
  "intermediatePersistPeriod": "PT1M"
}
},
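
For context, the overall layout of my kafka.json follows the example from the Tranquility Kafka docs linked above. Roughly something like this (the ZooKeeper strings, topic pattern and the omitted dataSchema are placeholders here, not my real values):

{
  "dataSources" : [
    {
      "spec" : {
        "dataSchema" : { ... },
        "tuningConfig" : {
          "type" : "realtime",
          "maxRowsInMemory" : "75000",
          "windowPeriod" : "PT1M",
          "intermediatePersistPeriod" : "PT1M",
          "rejectionPolicy" : { "type" : "serverTime" }
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1",
        "topicPattern" : "my-topic"
      }
    }
  ],
  "properties" : {
    "zookeeper.connect" : "zk-host:2181",
    "kafka.zookeeper.connect" : "zk-host:2181",
    "kafka.group.id" : "tranquility-kafka"
  }
}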

When the task is running, I can see this in the Tranquility log:

"ioConfig" : {
  "type" : "realtime",
  "plumber" : null,
  "firehose" : {
    "type" : "clipped",
    "interval" : "2017-09-07T14:18:00.000Z/2017-09-07T14:19:00.000Z",
    "delegate" : {
      "type" : "timed",
      "shutoffTime" : "2017-09-07T14:21:00.000Z",
      "delegate" : {
        "type" : "receiver",
        "serviceName" : "firehose:overlord:spark-ef-suricata-18-0000-0000",
        "bufferSize" : 100000
      }
    }
  }
},
"tuningConfig" : {
  "shardSpec" : {
    "type" : "linear",
    "partitionNum" : 0
  },
  "rejectionPolicy" : {
    "type" : "none"
  },
  "buildV9Directly" : false,

I am forcing "rejectionPolicy" to "serverTime", but it doesn't switch.
The firehose type is "clipped", but shouldn't it be "kafka-0.8"? I am indexing in real time (this stream never stops).
When it is time for handoff (segmentGranularity + windowPeriod + druidBeam.firehoseGracePeriod), I can see this in the peon log:
“”""
INFO [spark-ef-suricata-overseer-0] io.druid.segment.realtime.plumber.RealtimePlumber - Skipping persist and merge for entry
… min timestamp required in this run. Segment will be picked up in a future run.
INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.CoordinatorBasedSegmentHandoffNotifier - Adding SegmentHandoffCallback for dataSource[spark-ef-suricata] Segment[SegmentDescriptor{interval=2017-09-07T11:03:00.000Z/2017-09-07T11:04:00.000Z, version=‘2017-09-07T11:03:19.067Z’, partitionNumber=0}]
2017-09-07T11:10:00,725 INFO [task-runner-0-priority-0] io.druid.segment.realtime.plumber.RealtimePlumber - Cannot shut down yet! Sinks remaining: spark-ef-suricata_2017-09-07T11:03:00.000Z_2017-09-07T11:04:00.000Z_2017-09-07T11:03:19.067Z
INFO [coordinator_handoff_scheduled_0] io.druid.segment.realtime.plumber.CoordinatorBasedSegmentHandoffNotifier - Still waiting for Handoff for Segments : [[SegmentDescriptor{interval=2017-09-06T09:00:00.000Z/2017-09-06T10:00:00.000Z, version=‘2017-09-06T09:00:39.094Z’, partitionNumber=0}]]
“”""
I have this problem only with this task; the other tasks are working well! HDFS is fine, I can see the segments and logs there.
Is it possible that this task needs a different config (tuningConfig…) in kafka.json?
My stream is like this:
INFO [KafkaConsumer-CommitThread] c.m.t.k.KafkaConsumer [Logger.java:70] Flushed {spark-ef-suricata={receivedCount=10271, sentCount=10271, failedCount=0}} pending messages in 0ms and committed offsets in 2ms.

A single event looks like this:

 {"dns_rcode": "NOERROR", "endpoint": "000-001-001-001", "event_type": "dns", "dns_rrtype": "CNAME", "dns_rrname": "www-abc-es.sslproxy.gigya.com", "timestamp": "2017-09-07T17:54:45.139Z", "geo_ip_dst": {"lat": 37.386, "lon": -122.0838}, "proto": "UDP", "in_iface": "eth1", "port_dst": 53, "dns_type": "answer", "dns_ttl": 0, "geo_ip_src": {"lat": "", "lon": ""}, "ipv4_src": "10.200.252.1", "port_src": 25333, "flow_id": 139943327273648, "dns_id": 12610, "probe_ip": "10.200.2.100", "dns_rdata": "c-gigya-abc-es-74568764.eu-west-1.elb.amazonaws.com", "ipv4_dst": "8.8.8.8"}

I have attached my config files and the kafka.json task spec.

Please, I need help! This is a production environment.

thanks a lot!!!

common.runtime.properties (1.1 KB)

middlemanager.runtime.properties (1.77 KB)

overlord.runtime.properties (405 Bytes)

kafka.json (6.33 KB)

Any ideas?

2017-09-11 17:03:29,550 WARN [ClusteredBeam-ZkFuturePool-d0fa1525-e0c7-415a-b4b3-191c0de8049b] c.m.t.d.DruidBeamMaker [?:?] DruidTuning key[rejectionPolicy] for task[index_realtime_spark-ef-suricata_2017-09-11T16:00:00.000Z_0_0] overridden from[Map(type -> serverTime)] to[Map(type -> none)].

Well… the problem was -XX:MaxDirectMemorySize in the peon config. When the peon tasks do their merge and persist, they need some more memory… thanks everybody!
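
For anyone hitting the same thing: the peon JVM flags live in middlemanager.runtime.properties under druid.indexer.runner.javaOpts. Something along these lines (the heap and direct-memory sizes here are only an example; size them for your own hardware and processing buffers):

druid.indexer.runner.javaOpts=-server -Xmx2g -XX:MaxDirectMemorySize=4g -Duser.timezone=UTC -Dfile.encoding=UTF-8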