Batch ingestion of data in Parquet format

I have data in Parquet format. I want to batch ingest that data using the Hadoop Indexer. Is it possible to do that, or can Druid be extended to ingest data in Apache Parquet format?

Hi Saksham,

Druid does not support ingesting data from Parquet out of the box right now. However, it is possible to implement a custom InputRowParser and a Hadoop InputFormat to ingest data directly from Parquet.
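
To make that a bit more concrete, here is a rough sketch of what such a parser might look like, assuming the Parquet files are read on the Hadoop side as Avro GenericRecords via parquet-avro's AvroParquetInputFormat. The class name and the timestampColumn/dimensions constructor arguments are just illustrative, and the InputRowParser/MapBasedInputRow signatures follow the current io.druid APIs, so you may need to adjust for your Druid version:

```java
import io.druid.data.input.InputRow;
import io.druid.data.input.MapBasedInputRow;
import io.druid.data.input.impl.InputRowParser;
import io.druid.data.input.impl.ParseSpec;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParquetAvroInputRowParser implements InputRowParser<GenericRecord> {
  private final ParseSpec parseSpec;
  private final String timestampColumn;   // illustrative: field holding the event time in millis
  private final List<String> dimensions;  // illustrative: dimension column names

  public ParquetAvroInputRowParser(ParseSpec parseSpec, String timestampColumn, List<String> dimensions) {
    this.parseSpec = parseSpec;
    this.timestampColumn = timestampColumn;
    this.dimensions = dimensions;
  }

  @Override
  public InputRow parse(GenericRecord record) {
    // Flatten the Avro record into a plain map that MapBasedInputRow understands.
    Map<String, Object> event = new LinkedHashMap<>();
    for (Schema.Field field : record.getSchema().getFields()) {
      event.put(field.name(), record.get(field.name()));
    }
    long timestampMillis = ((Number) record.get(timestampColumn)).longValue();
    return new MapBasedInputRow(timestampMillis, dimensions, event);
  }

  @Override
  public ParseSpec getParseSpec() {
    return parseSpec;
  }

  @Override
  public InputRowParser<GenericRecord> withParseSpec(ParseSpec parseSpec) {
    return new ParquetAvroInputRowParser(parseSpec, timestampColumn, dimensions);
  }
}
```

In a real extension you would also add Jackson annotations so the parser can be configured from the ingestion spec, and point the Hadoop indexer's inputSpec at your custom InputFormat (if your Druid version's static inputSpec supports overriding inputFormat) so that GenericRecords get handed to the parser.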

You can also find a more detailed discussion on this here:

https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!searchin/druid-user/parquet/druid-user/-a1BPJQ_9HY/cWGCZ7uDrCIJ