Indexing Service doesn't recognize / use the correct partitionSpec

I wrote a new extension that supports multi-dimension partitioning. To test it end-to-end, I am using the wikipedia example.

The extension jar is included in the classpath for coordinator, historical, overlord and broker.

Here is the task given to the indexing service - https://gist.github.com/praveev/fed9065782db0d555237#file-what-i-give-json-L61 - and here is what I see printed in the overlord console logs: https://gist.github.com/praveev/89e280b7440173f9bc63#file-what-i-see-json-L61

The indexing service somehow doesn't recognize my plugin and defaults to using the hashed partitionSpec. No error is printed out.

I first made sure that the partitionSpec class is in the jar:

jar -tvf druid-plugins-0.0.2.10.jar | grep -i PartitionsSpec
  1979 Fri Mar 04 05:37:02 PST 2016 io/druid/query/multiDim/partition/MultiDimensionPartitionsSpec.class
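To rule out a classpath problem on the running services (as opposed to a problem with the jar itself), one quick check is to load the class from a JVM started with the same classpath the service uses. This is just a minimal sketch; the class name comes from the jar listing above, and the ClasspathCheck wrapper is hypothetical.

public class ClasspathCheck
{
  public static void main(String[] args) throws Exception
  {
    // Throws ClassNotFoundException if the jar never made it onto the classpath.
    Class<?> clazz = Class.forName(
        "io.druid.query.multiDim.partition.MultiDimensionPartitionsSpec"
    );
    // Prints the jar the class was actually loaded from.
    System.out.println(clazz.getProtectionDomain().getCodeSource().getLocation());
  }
}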

From IntelliJ, I ran the Hadoop Druid Indexer job with this spec - https://gist.github.com/praveev/269d9f6bf1ce16b6f50c#file-ingestion-test-json-L52 - and it ingested just fine. I had to tweak the code a bit to make this work, since I didn't want it to write to metadata storage; hence the null metadataUpdateSpec.

This confirms that my Druid plugin is hooked up fine, but I checked my DruidModule anyway:

import com.fasterxml.jackson.databind.Module;
import com.fasterxml.jackson.databind.module.SimpleModule;
import com.google.common.collect.ImmutableList;
import com.google.inject.Binder;
import io.druid.initialization.DruidModule;
import io.druid.query.multiDim.partition.MultiDimensionPartitionsSpec;

import java.util.List;

public class DruidPluginsModule implements DruidModule
{
  @Override
  public List<? extends Module> getJacksonModules()
  {
    // Register the custom partitionsSpec and shardSpec subtypes so Jackson
    // can resolve them by their "type" names during deserialization.
    // (import for MultiDimensionShardSpec omitted here)
    return ImmutableList.of(
        new SimpleModule().registerSubtypes(
            MultiDimensionPartitionsSpec.class,
            MultiDimensionShardSpec.class
        )
    );
  }

  @Override
  public void configure(Binder binder)
  {
    // No Guice bindings needed; this module only contributes Jackson subtypes.
  }
}
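For what it's worth, the Jackson wiring itself can be sanity-checked outside the indexing service with a small deserialization round trip. This is only a sketch: it assumes the custom spec implements io.druid.indexer.partitions.PartitionsSpec, that its registered type name is "multiDim", and that it can be built from a minimal JSON body - adjust those to match the extension.

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.module.SimpleModule;
import io.druid.indexer.partitions.PartitionsSpec;
import io.druid.query.multiDim.partition.MultiDimensionPartitionsSpec;

public class PartitionsSpecSmokeTest
{
  public static void main(String[] args) throws Exception
  {
    ObjectMapper mapper = new ObjectMapper();
    mapper.registerModule(
        new SimpleModule().registerSubtypes(MultiDimensionPartitionsSpec.class)
    );

    // "multiDim" is an assumed type name; use whatever name the extension
    // registers, and add any fields the spec's constructor requires.
    PartitionsSpec spec = mapper.readValue(
        "{\"type\": \"multiDim\"}",
        PartitionsSpec.class
    );

    // If the subtype is registered, this prints the custom class rather than
    // falling back to the hashed default.
    System.out.println(spec.getClass().getName());
  }
}

If a round trip like this works in isolation, the next thing to check is whether the jar actually made it onto the service's classpath.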

Any ideas why the indexing service doesn’t recognize my new partitionSpec type?

Thanks

Thanks @himanshu. It simply had to do with the fact that my shell wasn't expanding ~ into the full path when including the jar on the classpath.