Historical errors when ingesting

I’m able to trigger and complete an ingestion successfully. The coordinator reports:

2018-06-19T13:21:11,742 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_downstream_events_redirects_stitched_2018-06-19T13:12:26.013Z",
  "status" : "SUCCESS",
  "duration" : 520849
}

However, the historical logs the following errors:

2018-06-19T13:21:40,568 DEBUG [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Unable to make parent file[/var/druid/segment-cache/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z/2018-06-19T13:12:26.018Z/0]
2018-06-19T13:21:40,568 ERROR [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Failed to load segment in current location /var/druid/segment-cache, try next location if any: {class=io.druid.segment.loading.SegmentLoaderLocalCacheManager, exceptionType=class io.druid.segment.loading.SegmentLoadingException, exceptionMessage=Unable to create marker file for [/var/druid/segment-cache/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z/2018-06-19T13:12:26.018Z/0], location=/var/druid/segment-cache}
io.druid.segment.loading.SegmentLoadingException: Unable to create marker file for [/var/druid/segment-cache/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z/2018-06-19T13:12:26.018Z/0]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:192) ~[druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.loadSegmentWithRetry(SegmentLoaderLocalCacheManager.java:154) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegmentFiles(SegmentLoaderLocalCacheManager.java:130) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.getSegment(SegmentLoaderLocalCacheManager.java:105) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.server.SegmentManager.getAdapter(SegmentManager.java:197) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.server.SegmentManager.loadSegment(SegmentManager.java:158) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.server.coordination.ZkCoordinator.loadSegment(ZkCoordinator.java:323) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.server.coordination.ZkCoordinator.addSegment(ZkCoordinator.java:368) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.server.coordination.SegmentChangeRequestLoad.go(SegmentChangeRequestLoad.java:44) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at io.druid.server.coordination.ZkCoordinator$1.childEvent(ZkCoordinator.java:158) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:522) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:516) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:93) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:84) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:513) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at org.apache.curator.framework.recipes.cache.PathChildrenCache$9.run(PathChildrenCache.java:773) [druid-services-0.10.1-selfcontained.jar:0.10.1]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_171]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_171]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_171]
Caused by: java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[?:1.8.0_171]
at java.io.File.createNewFile(File.java:1012) ~[?:1.8.0_171]
at io.druid.segment.loading.SegmentLoaderLocalCacheManager.loadInLocationWithStartMarker(SegmentLoaderLocalCacheManager.java:187) ~[druid-services-0.10.1-selfcontained.jar:0.10.1]
... 24 more
2018-06-19T13:21:40,569 INFO [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/var/druid/segment-cache/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z/2018-06-19T13:12:26.018Z/0]
2018-06-19T13:21:40,569 INFO [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/var/druid/segment-cache/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z/2018-06-19T13:12:26.018Z]
2018-06-19T13:21:40,569 INFO [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/var/druid/segment-cache/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z]
2018-06-19T13:21:40,569 INFO [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Deleting directory[/var/druid/segment-cache/downstream_events_redirects_stitched]
2018-06-19T13:21:40,569 INFO [ZkCoordinator-0] io.druid.segment.loading.SegmentLoaderLocalCacheManager - Asked to cleanup something[DataSegment{size=3248190, shardSpec=NoneShardSpec, metrics=[count, sumConversionTotalTransactionValue, minConversionTotalTransactionValue, maxConversionTotalTransactionValue, sumConversionTotalTicketValue, minConversionTotalTicketValue, maxConversionTotalTicketValue], dimensions=[serviceName, trackingSourceId, eventType, conversionTotalTransactionValue, conversionTotalTransactionValueCurrency, conversionTotalTicketValueCurrency, conversionTotalTicketValue, market, language, cabinClass, businessDomain, hardware, platform, achievedDepth], version='2018-06-19T13:12:26.018Z', loadSpec={type=s3_zip, bucket=XXXXX, key=druid/segments/downstream_events_redirects_stitched/2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z/2018-06-19T13:12:26.018Z/0/index.zip, S3Schema=s3n}, interval=2018-06-10T00:00:00.000Z/2018-06-11T00:00:00.000Z, dataSource='downstream_events_redirects_stitched', binaryVersion='9'}] that didn't exist. Skipping.
2018-06-19T13:21:40,569 WARN [ZkCoordinator-0] io.druid.server.coordination.BatchDataSegmentAnnouncer - No path to unannounce segment[downstream_events_redirects_stitched_2018-06-10T00:00:00.000Z_2018-06-11T00:00:00.000Z_2018-06-19T13:12:26.018Z]

Can anyone suggest what the issue may be, and how I could go about fixing it?

It turns out the issue was that my /var/druid/segment-cache directory did not exist. Manually creating it solved the problem.
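
In hindsight the stack trace says as much: the DEBUG line is mkdirs() quietly returning false because the cache path could not be created, and the ERROR is the marker-file createNewFile() then failing with "No such file or directory". Here is a minimal sketch of that failure mode with plain java.io, not Druid's actual code; the path and the marker filename below are just placeholders:

import java.io.File;
import java.io.IOException;

public class MissingCacheDirDemo {
    public static void main(String[] args) throws IOException {
        // Placeholder segment path under the cache root; not Druid's real layout.
        File segmentDir = new File("/var/druid/segment-cache/my_datasource/2018-06-10/v1/0");

        // mkdirs() does not throw; it just returns false when the path cannot be
        // created (for example, the druid user may not be allowed to create
        // directories under /var/druid). This corresponds to the DEBUG
        // "Unable to make parent file[...]" line.
        if (!segmentDir.mkdirs()) {
            System.err.println("Could not create " + segmentDir);
        }

        // Creating the download marker inside the missing directory then throws
        // java.io.IOException: No such file or directory, which is the ERROR the
        // historical logs before cleaning up and giving up on the segment.
        File marker = new File(segmentDir, "downloadStartMarker"); // placeholder name
        marker.createNewFile();
    }
}

Also make sure the directory is owned by (or at least writable by) the user the historical runs as, otherwise mkdirs() will keep failing in the same silent way.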

Yeah, janek is right: you need to make sure the directory exists. Historical nodes have to persist segments from deep storage into the segment cache and then load the index mapping into memory.
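
If you want to catch this before a segment load fails, a small pre-flight check like the sketch below will flag a missing or unwritable cache location. The path is only an example; substitute whatever you have configured in druid.segmentCache.locations in the historical's runtime.properties, and run it as the same user the historical runs as:

import java.io.File;
import java.util.Arrays;
import java.util.List;

public class SegmentCachePreflight {
    public static void main(String[] args) {
        // Example locations; these should mirror druid.segmentCache.locations.
        List<String> locations = Arrays.asList("/var/druid/segment-cache");

        for (String loc : locations) {
            File dir = new File(loc);
            if (!dir.isDirectory()) {
                System.err.println(loc + ": missing (create it and chown it to the druid user)");
            } else if (!dir.canWrite()) {
                System.err.println(loc + ": exists but is not writable by this user");
            } else {
                System.out.println(loc + ": OK");
            }
        }
    }
}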

On Wednesday, June 20, 2018 at 12:18:55 AM UTC+8, janek.lasoc…@skyscanner.net wrote: