Here's the error message I'm getting:
"error": "Could not resolve type id 'mysql' as a subtype of org.apache.druid.metadata.SQLFirehoseDatabaseConnector: known type ids = (for POJO property 'database')\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 43, column: 19] (through reference chain: org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexSupervisorTask["spec"]->org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexIngestionSpec["ioConfig"]->org.apache.druid.indexing.common.task.batch.parallel.ParallelIndexIOConfig["firehose"]->org.apache.druid.segment.realtime.firehose.SqlFirehoseFactory["database"])"
I've installed the MySQL extension as per the documentation. Please note that I'm new to Druid.
I wrote a rambling blog post for a Raspberry Pi cluster where I talk through what I did to get it working for Deep Storage – really it just expands on the docs, but it might help you… it may be good just to check that everything has been installed OK?
You can also use the status APIs to check what extensions are loaded on each node. That can be helpful to just know that things are all running OK on each of your Druid processes:
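For example, something like this (a sketch assuming the default single-server quickstart ports – the Coordinator usually answers on 8081; adjust the port for each process you want to check):

```shell
# Query a Druid process's status endpoint; the response includes a
# "modules" list showing which extensions were loaded at startup.
curl http://localhost:8081/status
```

If `mysql-metadata-storage` doesn't appear in the modules list for the process that runs your task, the extension isn't actually loaded there.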
Hi, this is my common.runtime.properties configuration. Please note that I'm using a single-server deployment. My main objective is to transfer data from my MySQL database to Druid. When I uncomment the SQL lines in the configuration provided below, Druid shows a Java runtime exception.
# For Derby server on your Druid Coordinator (only viable in a cluster with a single Coordinator, no fail-over):
# For MySQL (make sure to include the MySQL JDBC driver on the classpath):
# # druid.metadata.mysql.driver.driverClassName=org.mariadb.jdbc.Driver
Ah OK: so the metadata database can be left as it is in the default quickstart configuration files. That means the metadata database will be at the default, which is a local Derby database. You only need to change that if you want to use something other than Derby for your metadata database, which is definitely what you need to do when you ramp up from a single-node quickstart.
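If and when you do switch the metadata store over to MySQL, the relevant properties in common.runtime.properties look roughly like this – the host, database name, and credentials here are placeholders for whatever your setup uses:

```properties
# Switch the metadata store from Derby to MySQL
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://localhost:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=yourpassword
```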
I believe the first step is to get the extension:
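In case it helps, the pull-deps tool can fetch the extension into your extensions directory – run it from the Druid install root; the coordinates below are for the standard MySQL metadata-storage extension:

```shell
# Download the mysql-metadata-storage extension into extensions/
java -classpath "lib/*" org.apache.druid.cli.Main tools pull-deps \
  -c "org.apache.druid.extensions:mysql-metadata-storage"
```

Note that, because of licensing, the MySQL JDBC driver jar isn't bundled with the extension – you need to download it separately and put it on the classpath (e.g. in the extension's directory).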
You then need to make sure that you include the MySQL extension in your common runtime properties configuration:
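That is, something like this in common.runtime.properties (keep any other extensions you already load in the list):

```properties
druid.extensions.loadList=["mysql-metadata-storage"]
```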
Just a side note: take care to amend the right configuration files for the quickstart – depending on which quickstart configuration you use to start Druid, the specific files are in conf/druid/single-server/. It looks like you're already doing that, though.
When you restart Druid, you can check that the extension is loaded OK in the console or by using the Status API.
There’s an example of the ingestion specification that you’d put together here:
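For reference, here's a minimal sketch of the SQL firehose portion of a native batch ioConfig – the database name, table, and credentials are placeholders, not something from your setup:

```json
"ioConfig": {
  "type": "index_parallel",
  "firehose": {
    "type": "sql",
    "database": {
      "type": "mysql",
      "connectorConfig": {
        "connectURI": "jdbc:mysql://localhost:3306/mydb",
        "user": "admin",
        "password": "secret"
      }
    },
    "sqls": ["SELECT * FROM mytable"]
  }
}
```

The `"type": "mysql"` connector only resolves if the MySQL extension is loaded on the process running the task – which is exactly what the "Could not resolve type id 'mysql'" error earlier in this thread is complaining about.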
I've got it to work. SQL ingestion works for both MySQL and PostgreSQL. I'm currently thinking of writing an MS SQL Server extension that can also be used as a method for MSSQL data ingestion. The source code for the MySQL metadata connector is available on GitHub, so I'm thinking about modifying the code to work with MSSQL as well.