Connecting Spark 1.6.0 with Druid 0.10.0 using Sparkline jars

Hi,

Spark 1.6.0 (Windows 10)

Druid 0.10.0 (Ubuntu)

I am trying to connect Druid with Spark using the Sparkline jars (the spark-accelerator jar), following this guide:

https://github.com/SparklineData/spark-druid-olap/wiki/Quick-Start-Guide#setup-and-start-the-thriftserver

I created the base fact table successfully with this query:

```sql
CREATE TABLE sparkDruidBase(
  time_1 string, url_1 string, user_1 string,
  latencyMs_1 integer)
USING com.databricks.spark.csv
OPTIONS (path "C:\Users\Helical\Desktop\sparkdruid\DruidIngestionTesting",
  header "false", delimiter "|");
```

and then created the Druid-backed table that maps to the base table:

```sql
CREATE TABLE IF NOT EXISTS sparkDruidtable
USING org.sparklinedata.druid
OPTIONS (sourceDataframe "sparkDruidBase",
  timeDimensionColumn "time_1",
  druidDatasource "sparkDruidDataSource",
  druidHost "138.197.132.51",
  druidPort "8082",
  zkQualifyDiscoveryNames "true",
  columnMapping '{ "latencyMs_1":"latencyMs_1" }',
  numProcessingThreadsPerHistorical '1',
  starSchema '{ "factTable" : "sparkDruidtable", "relations" : [] }');
```

But when I run a query against sparkDruidtable, it fetches the data from the local CSV file given in the base table's path. Even after I added new records to the Druid datasource, queries still return data from the local CSV file instead of from Druid.
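In case it helps with diagnosis: the physical plan should show whether the accelerator rewrites the query to Druid at all. If the mapping were working, I would expect a Druid relation in the plan rather than the CSV relation (table name as defined above):

```sql
-- Show the physical plan for a query on the Druid-backed table.
-- A working rewrite should reference a Druid scan, not the CSV source.
EXPLAIN SELECT COUNT(*) FROM sparkDruidtable;
```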

I started the Thrift server with this command:

```shell
spark-submit --verbose --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --hiveconf hive.server2.thrift.bind.host=localhost --hiveconf hive.server2.thrift.port=20000 --packages com.sparklinedata:accelerator_2.10:0.2.1,com.databricks:spark-csv_2.10:1.1.0,SparklineData:spark-datetime:0.0.2 --jars E:\spark-1.6.0-bin-hadoop2.6\lib\accelerator_2.10-0.2.1.jar
```
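For reference, the queries I run against the Thrift server (via the beeline client that ships with Spark, connected at jdbc:hive2://localhost:20000 to match the port above) look like this:

```sql
-- A minimal aggregation against the Druid-backed table; with a working
-- mapping this should be answered by Druid, not by the local CSV file.
SELECT user_1, COUNT(*) AS cnt
FROM sparkDruidtable
GROUP BY user_1;
```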

Please help.

Regards,

Abhishek