No datasource in metadata table

Hello Everyone,

I’m using Druid with a PostgreSQL metadata store and want to access the data from Superset in the long run.

After ingesting the quickstart tutorial’s wikipedia dataset, the druid_datasource metadata table remains empty… hence (in my understanding) Superset does not register the datasource, so I cannot access anything.
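
For the record, this is how I check it (assuming the metadata database and user are both named druid; adjust to your setup):

    psql -U druid -d druid -c 'SELECT * FROM druid_datasource;'

It comes back with 0 rows.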

Segments are present, I can query the data using curl, and the results are identical to the quickstart tutorial’s.
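
For example, a minimal native timeBoundary query like this returns the expected result (assuming the quickstart’s broker on the default port 8082):

    curl -X POST 'http://localhost:8082/druid/v2/?pretty' \
      -H 'Content-Type: application/json' \
      -d '{"queryType":"timeBoundary","dataSource":"wikipedia"}'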

What should populate the druid_datasource table (if anything)? Any ideas are welcome.

Thanks

Hi Sandor. I have posted the same question to the community as I have experienced the same situation with MySQL as the metadata DB.

I haven’t received a clear answer as to why some data source ingestions create an entry in druid_datasource and others do not.

When I check the UI on the coordinator nodes, I see all the data sources I expected to see in the MySQL table. Because of this, I am assuming Druid is behaving and functioning properly.
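
The same list the coordinator console shows can also be pulled from its HTTP API (assuming the default coordinator port 8081):

    curl http://localhost:8081/druid/coordinator/v1/datasources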

From Siva Mannem, replying to my previous posts:

"Hi Chris,
If you are ingesting data into Druid via Kafka streaming and, let’s say, you paused the ingestion for some time and resumed it later,

then it is the druid_dataSource table entries in the MySQL metadata store that help the ingestion pick up from where it left off.

Maybe the name of the table is a bit confusing. That table is not meant to store your list of data sources.

Hope this helps."

But the implication of

" it is the druid_dataSource table entries in mysql metadata which helps to pick up the ingestion from where it left off "

is that the druid_dataSource table should have something in it…
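
If I read the Druid schema correctly, that table holds one commit-metadata record per datasource (columns along the lines of dataSource, created_date, commit_metadata_payload, commit_metadata_sha1), so a check like this should show whether anything was ever committed (again assuming a database and user named druid):

    psql -U druid -d druid -c 'SELECT dataSource, created_date FROM druid_datasource;'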

Or maybe it is only populated if something is sent in through Kafka?! In that case Superset’s interfacing is pretty bad (if I’m not mistaken and that is what it checks).

And my question is totally unrelated to Kafka, as I use a firehose to load the tutorial dataset.

Ah, thanks Chris,

I misunderstood you…

So this is a Superset question, as the table is not for that purpose.

My data ingestion is with Tranquility. I am not seeing a Druid datasource in the MySQL table when I’ve set up an ingestion spec for Tranquility and successfully ingested data.

I think perhaps my ingestion from Kafka has created entries in the datasource table. I’ve been testing most recently with Tranquility, and that’s where I became concerned that I was not seeing datasource entries in the MySQL table. When I query the historicals and the coordinators, and also look at deep storage, it appears Druid is working as it should.

I happened to be looking at the source to answer this question for myself today, and I am pretty sure that the only code paths that read or write druid_datasources are the Kafka (or Kinesis in 0.14) indexing service supervisors and the materialized view service. They’re not even otherwise exposed via an HTTP API or anything.
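
As a side note, if you want to see the offsets a Kafka supervisor has committed without reading the metadata table directly, the Overlord’s supervisor API reports them, something like this (assuming the default Overlord port 8090 and a supervisor named wikipedia):

    curl http://localhost:8090/druid/indexer/v1/supervisor/wikipedia/status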

Great info, David.