SQL Server Integration in Druid Metadata Storage

Hi guys,

Is there an option to use any other database as the metadata store?

We have seen Druid configured with PostgreSQL, MySQL, etc.

Can we configure Druid with SQL Server?

Thanks.

Jitesh

There is nothing that would forbid using MSSQL or Oracle. The problem is that we lack a metadata plugin for those metadata stores.

I am trying to add an extension like the PostgreSQL one, but it is not working.

I created a separate Extension-MSServer-Metadata-Storage JAR.

Is there anything we need to add to the JAR, or separately to the main pom.xml, to register the module during Druid initialization if we want to add SQL Server metadata support?

Thanks Jitesh

Hi Jitesh, you can use the PostgreSQL extension as an example for building a metadata storage extension, and also follow the instructions in: http://druid.io/docs/0.7.0/Modules.html

Make sure you create the DruidModule file in META-INF/services.

There are a few poms that also need to be updated. You can search for "postgres" in the code to find all the places where your metadata store needs to be added.
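For anyone following along: the registration file Fangjin mentions is a plain text file whose name is the fully qualified interface name, with one implementing class per line (the `#` comment syntax is part of the standard Java ServiceLoader format). A minimal sketch, with a hypothetical class name:

```
# File path inside the extension jar:
#   META-INF/services/io.druid.initialization.DruidModule
# Contents: one fully qualified implementation class per line.
# The class name below is a hypothetical example.
io.druid.metadata.storage.sqlserver.SQLServerMetadataStorageModule
```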

Thank you very much Fangjin.

I will apply this to the JAR and then let you know whether it runs.

Hi Fangjin,

The META-INF/MANIFEST.MF file is there, but there is no META-INF/services folder in my workspace. Can you please tell me more about the initialization area?

Thanks Jitesh

Jitesh, the DruidModule file is found here for postgresql:

https://github.com/druid-io/druid/blob/master/extensions/postgresql-metadata-storage/src/main/resources/META-INF/services/io.druid.initialization.DruidModule

You’ll need to create something similar for your extension.

Please read the extensions document and let me know what specific questions you have.
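For readers hitting this thread later: once such an extension is built, pointing Druid at it in runtime.properties would look roughly like this. The Maven coordinate, the "sqlserver" type key, and the JDBC URI below are all hypothetical; they depend on what your extension actually binds.

```
# Load the custom extension (coordinate is hypothetical)
druid.extensions.coordinates=["com.example:sqlserver-metadata-storage:0.1.0"]

# Select the metadata storage type registered by the module
# (the "sqlserver" key is whatever name the extension binds)
druid.metadata.storage.type=sqlserver
druid.metadata.storage.connector.connectURI=jdbc:sqlserver://host:1433;databaseName=druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=...
```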

Thanks Fangjin.

It works. Now I can use MS SQL Server as the metadata storage.

Great to hear Jitesh! You guys should consider contributing the plugin back to the Druid community :slight_smile:

Thanks Fangjin.

I will contribute it to the Druid community without fail.

I have a problem with the static-S3 firehose; the exception below is thrown. Please help me solve this.

I am doing indexer auto-scaling with the static-S3 firehose, and it throws the exception below.

The firehose reads events from S3, so the connections are critical. The exception appears after one spill completes:

2015-03-23T10:24:51,288 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: [make zip] has completed. Elapsed time: [8,641] millis

2015-03-23T10:24:51,288 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: [make dimension columns] has completed. Elapsed time: [285,785] millis

2015-03-23T10:24:51,288 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: Starting [make metric columns]

2015-03-23T10:24:51,288 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: Starting [make column[numz]]

2015-03-23T10:24:59,518 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: [make column[numz]] has completed. Elapsed time: [8,229] millis

2015-03-23T10:24:59,518 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: [make metric columns] has completed. Elapsed time: [8,229] millis

2015-03-23T10:24:59,518 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: Starting [building index.drd]

2015-03-23T10:24:59,520 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0]: [building index.drd] has completed. Elapsed time: [1] millis

2015-03-23T10:24:59,521 INFO [task-runner-0] io.druid.segment.LoggingProgressIndicator - [/tmp/persistent/task/index_ec2_test_cluster_2015-03-23T10:16:19.241Z/work/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z_0/ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z/spill0] complete. Elapsed time: [313,023] millis

2015-03-23T10:24:59,521 INFO [task-runner-0] io.druid.segment.ReferenceCountingSegment - Closing ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z

2015-03-23T10:24:59,522 INFO [task-runner-0] io.druid.segment.ReferenceCountingSegment - Closing ec2_test_cluster_2015-03-19T00:09:00.000Z_2015-03-19T00:10:00.000Z_2015-03-23T10:16:19.242Z, numReferences: 0

2015-03-23T10:24:59,981 ERROR [task-runner-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[IndexTask{id=index_ec2_test_cluster_2015-03-23T10:16:19.241Z, type=index, dataSource=ec2_test_cluster}]

java.lang.IllegalStateException: java.net.SocketException: Connection reset

at org.apache.commons.io.LineIterator.hasNext(LineIterator.java:107) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at io.druid.data.input.impl.FileIteratingFirehose.hasMore(FileIteratingFirehose.java:34) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at io.druid.indexing.common.task.IndexTask.generateSegment(IndexTask.java:343) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at io.druid.indexing.common.task.IndexTask.run(IndexTask.java:185) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:235) [druid-services-0.7.0-selfcontained.jar:0.7.0]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:214) [druid-services-0.7.0-selfcontained.jar:0.7.0]

at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_75]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_75]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_75]

at java.lang.Thread.run(Thread.java:745) [?:1.7.0_75]

Caused by: java.net.SocketException: Connection reset

at java.net.SocketInputStream.read(SocketInputStream.java:196) ~[?:1.7.0_75]

at java.net.SocketInputStream.read(SocketInputStream.java:122) ~[?:1.7.0_75]

at sun.security.ssl.InputRecord.readFully(InputRecord.java:442) ~[?:1.7.0_75]

at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:554) ~[?:1.7.0_75]

at sun.security.ssl.InputRecord.read(InputRecord.java:509) ~[?:1.7.0_75]

at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:934) ~[?:1.7.0_75]

at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:891) ~[?:1.7.0_75]

at sun.security.ssl.AppInputStream.read(AppInputStream.java:102) ~[?:1.7.0_75]

at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:204) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:182) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at org.jets3t.service.io.InterruptableInputStream.read(InterruptableInputStream.java:78) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at org.jets3t.service.impl.rest.httpclient.HttpMethodReleaseInputStream.read(HttpMethodReleaseInputStream.java:146) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283) ~[?:1.7.0_75]

at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325) ~[?:1.7.0_75]

at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177) ~[?:1.7.0_75]

at java.io.InputStreamReader.read(InputStreamReader.java:184) ~[?:1.7.0_75]

at java.io.BufferedReader.fill(BufferedReader.java:154) ~[?:1.7.0_75]

at java.io.BufferedReader.readLine(BufferedReader.java:317) ~[?:1.7.0_75]

at java.io.BufferedReader.readLine(BufferedReader.java:382) ~[?:1.7.0_75]

at org.apache.commons.io.LineIterator.hasNext(LineIterator.java:96) ~[druid-services-0.7.0-selfcontained.jar:0.7.0]

… 9 more

Thanks.

Jitesh

Hi Jitesh, I believe this is the same conversation as https://groups.google.com/forum/#!topic/druid-user/PMOL5leSf3k, so let’s have the conversation there :slight_smile: