com.metamx.common.ISE: Segments not covered by locks for task

Hi guys,

Using Druid 0.9.1.

Facing one issue:

I am running an indexing task on a local overlord, but it throws an exception while inserting the segment into the SQL metadata store. I can see that the index file was pushed to deep storage (S3).

2016-07-27T23:36:39,359 INFO [qtp1384913450-26] io.druid.indexing.common.actions.LocalTaskActionClient - Performing action for task[index_test_cluster_1_2016-07-27T23:36:26.550Z]:

2016-07-27T23:36:39,362 ERROR [qtp1384913450-26] com.sun.jersey.spi.container.ContainerResponse - The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
com.metamx.common.ISE: Segments not covered by locks for task: index_test_cluster_1_2016-07-27T23:36:26.550Z
    at io.druid.indexing.common.actions.TaskActionToolbox.verifyTaskLocks(TaskActionToolbox.java:76) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.common.actions.SegmentTransactionalInsertAction.perform(SegmentTransactionalInsertAction.java:101) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.common.actions.SegmentInsertAction.perform(SegmentInsertAction.java:74) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.common.actions.SegmentInsertAction.perform(SegmentInsertAction.java:41) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.common.actions.LocalTaskActionClient.submit(LocalTaskActionClient.java:64) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.overlord.http.OverlordResource$3.apply(OverlordResource.java:326) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.overlord.http.OverlordResource$3.apply(OverlordResource.java:315) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.overlord.http.OverlordResource.asLeaderWith(OverlordResource.java:658) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.overlord.http.OverlordResource.doAction(OverlordResource.java:312) ~[druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]

Can anyone please help me figure out this problem?

Is it because the task is failing to submit the segment to the SQL metadata store?

2016-07-27T23:20:27,141 INFO [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Submitting action for task[index_test_cluster_2016-07-27T23:20:16.653Z] to overlord[http://xxx.xxx.xxx.xxx:48080/druid/indexer/v1/action]: SegmentInsertAction{segments=[DataSegment{size=3892370, shardSpec=NoneShardSpec, metrics=[numz], dimensions=[ts, lres, mDist, cc, csz, dmk, crev, snrt, cown, ntDist, sagem, sagef, hsp, ucid], version='2016-07-28T00:01:00.000Z', loadSpec={type=s3_zip, bucket=bucket, key=cluster-segement/test_cluster/2016-07-28T00:01:00.000Z_2016-07-28T00:02:00.000Z/2016-07-28T00:01:00.000Z/0/index.zip}, interval=2016-07-28T00:01:00.000Z/2016-07-28T00:02:00.000Z, dataSource='test_cluster', binaryVersion='9'}]}

2016-07-27T23:20:27,190 WARN [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Exception submitting action for task[index_test_cluster_2016-07-27T23:20:16.653Z]
java.io.IOException: Scary HTTP status returned: 500 Server Error. Check your overlord[xxx.xxx.xxx.xxx:48080] logs for exceptions.
    at io.druid.indexing.common.actions.RemoteTaskActionClient.submit(RemoteTaskActionClient.java:123) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.common.TaskToolbox.publishSegments(TaskToolbox.java:224) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:280) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.1-SNAPSHOT.jar:0.9.1-SNAPSHOT]
    at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_101]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_101]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_101]
    at java.lang.Thread.run(Thread.java:745) [?:1.7.0_101]

2016-07-27T23:20:27,195 INFO [task-runner-0-priority-0] io.druid.indexing.common.actions.RemoteTaskActionClient - Will try again in [PT4.709S].

Hi Jitesh,
If a task tries to insert a segment that is not covered by its task lock, it is expected to fail.
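
Conceptually, the check the overlord performs boils down to requiring every published segment's interval to be contained in the interval of a lock held by the task. The sketch below is only an illustration of that idea using Joda-Time intervals, not the actual Druid source:

import org.joda.time.Interval;
import java.util.List;

public class LockCoverageSketch
{
  // Illustration only: a publish is accepted only if every segment interval is
  // contained in some lock interval held by the task; otherwise it is rejected
  // (in Druid this surfaces as "Segments not covered by locks for task").
  public static boolean segmentsCoveredByLocks(List<Interval> segmentIntervals, List<Interval> lockIntervals)
  {
    for (Interval segmentInterval : segmentIntervals) {
      boolean covered = false;
      for (Interval lockInterval : lockIntervals) {
        if (lockInterval.contains(segmentInterval)) {
          covered = true;
          break;
        }
      }
      if (!covered) {
        return false;
      }
    }
    return true;
  }
}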

Check the overlord logs for more information on why the lock was not present, or why it was released before the segment was published.

Usually this is caused by ZK disconnects, which make the overlord assume the task has FAILED.

Try searching the overlord logs for the task ID for more details.

Cheers,

Nishant

It was a problem with the SQL metadata store. Now it's working fine.
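
For anyone else hitting this, it is worth double-checking that the overlord can actually reach the metadata store. As a reference, here is a minimal sketch of the relevant common.runtime.properties entries, assuming MySQL as the metadata store with the MySQL metadata-storage extension loaded; the host, database name, and credentials are placeholders for your own setup:

druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://metadata-host:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=your-password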

Thank you Nishant!!!