Hadoop batch ingestion using Parquet format for multi-value dimensions

I have a Parquet file in which one of the dimensions is an array. While the other dimensions are ingested successfully, for this dimension I am getting a result like this:

"result" : [ {
  "Count" : 8.0,
  "Dimension Value" : "{\"array_element\": 217}"
} ]
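What I expected was for each array element to be surfaced as its own dimension value, along the lines of:

"result" : [ {
  "Count" : 8.0,
  "Dimension Value" : "217"
} ]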

Does Druid not support Hadoop batch ingestion of multi-value dimensions when the data is in Parquet format? After adding some debug logs, I found that the class of the dimension value is org.apache.avro.generic.GenericData$Record.
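For context, the relevant part of my ingestion spec looks roughly like this (the paths and field names are placeholders, "my_array_dim" being the array dimension in question, and the exact inputFormat class name may differ across Druid versions):

"ioConfig" : {
  "type" : "hadoop",
  "inputSpec" : {
    "type" : "static",
    "inputFormat" : "io.druid.data.input.parquet.DruidParquetInputFormat",
    "paths" : "hdfs://path/to/data.parquet"
  }
},
"parser" : {
  "type" : "parquet",
  "parseSpec" : {
    "format" : "timeAndDims",
    "timestampSpec" : { "column" : "timestamp", "format" : "auto" },
    "dimensionsSpec" : {
      "dimensions" : [ "some_dim", "my_array_dim" ]
    }
  }
}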