Is there a way to update metadata tables?

I ran the HadoopIndexer on a large dataset. The MR job ran fine, and I can see the index files in HDFS.
But the session I started the task from closed while the MR job was running, so the last two steps of HadoopDruidIndexerJob (https://github.com/druid-io/druid/blob/20fdb627d99b3cf0d29c85270f801868d8c4d7c8/indexing-hadoop/src/main/java/io/druid/indexer/HadoopDruidIndexerJob.java) never ran: metadataStorageUpdaterJob and IndexGeneratorJob.getPublishedSegments.

Is it possible to run just the metadataStorageUpdaterJob now? I am trying to avoid rerunning the MR job; it ran for 6 hours.

regards,

Harish.

Make sure you configure “metadataUpdateSpec” in your Hadoop indexing task file and include the mysql extension.
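
For reference, here is a minimal sketch of what that block looks like in a command-line HadoopIndexer spec of that era; the connection values are placeholders, not anything from your setup:

"ioConfig" : {
  "type" : "hadoop",
  "inputSpec" : { ... },
  "segmentOutputPath" : "hdfs://namenode:9000/druid/segments",
  "metadataUpdateSpec" : {
    "type" : "mysql",
    "connectURI" : "jdbc:mysql://metadata-host:3306/druid",
    "user" : "druid",
    "password" : "...",
    "segmentTable" : "druid_segments"
  }
}

The mysql-metadata-storage extension also has to be loaded (e.g. via druid.extensions.coordinates), otherwise the "mysql" type will not resolve.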

On Thursday, August 13, 2015 at 1:06:58 PM UTC+8, har…@sparklinedata.com wrote:

You may be able to hack up the code to do this, assuming that the intermediate files weren’t deleted yet. But there’s no official option to resume a partially completed job.
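
If you do try it, the piece to reconstruct is small. Below is a rough, unofficial sketch against the 0.8.x code linked above; the class name is made up, and the handler wiring is a placeholder (inside Druid the MetadataStorageUpdaterJobHandler is Guice-injected from the metadata-storage extension), so treat it as a starting point rather than finished code:

import java.io.File;

import io.druid.indexer.HadoopDruidIndexerConfig;
import io.druid.indexer.MetadataStorageUpdaterJob;
import io.druid.indexer.MetadataStorageUpdaterJobHandler;

public class PublishSegmentsOnly
{
  public static void main(String[] args)
  {
    // Load the same spec file the original HadoopIndexer run used.
    HadoopDruidIndexerConfig config =
        HadoopDruidIndexerConfig.fromFile(new File(args[0]));

    // Placeholder: in Druid this handler comes from the metadata-storage
    // extension (e.g. the mysql one) via Guice; you would have to
    // construct it yourself here.
    MetadataStorageUpdaterJobHandler handler = null; // TODO: wire up

    // Runs only the metadata-update step: it reads the segment descriptor
    // files the finished MR job left under the job's workingPath and
    // inserts one row per segment into the segments table.
    new MetadataStorageUpdaterJob(config, handler).run();
  }
}

This also shows why the intermediate files matter: the publish step only inserts whatever descriptors it finds in the working path, nothing else.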