Avro file ingestion

Hi -

What should the ingestion config look like if I want to ingest Avro data and, on top of that, append to an existing datasource? What I have currently written is below - but I am getting an exception.

I also tried the Hadoop indexing task, but then I keep getting a "Filetype s3n not supported" exception.
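For reference, the ioConfig of the Hadoop task I tried was along these lines (bucket and path are placeholders):

"ioConfig" : {
    "type" : "hadoop",
    "inputSpec" : {
        "type" : "static",
        "inputFormat" : "io.druid.data.input.avro.AvroValueInputFormat",
        "paths" : "s3n://<bucket>/<path>/*.avro"
    }
}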

Can someone please tell me what is wrong with the config below?

"ioConfig" : {
    "type" : "index",
    "firehose" : {
        "type": "local",

        "inputFormat": "io.druid.data.input.avro.AvroValueInputFormat",
        "baseDir" : "<directory>",
        "filter" : "<someFile>.avro"
    },
    "appendToExisting" : true
}

"tuningConfig" : {
     "type" : "index",
     "partitionsSpec" : {
         "type" : "hashed",
         "targetPartitionSize" : 5000000
     },
     "jobProperties" : {
         "avro.schema.input.value.path" : "<path>/<sameSchemaFile_Used_To_Convert_Parquet_To_Avro>.avsc"
     }
}
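
For completeness, the parser section of my dataSchema is roughly the following (timestampSpec and dimensionsSpec trimmed) - this is the part the stack trace below points at:

"parser" : {
    "type" : "string",
    "parseSpec" : {
        "format" : "avro",
        "timestampSpec" : { ... },
        "dimensionsSpec" : { ... }
    }
}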

The exception I am getting is -

java.lang.UnsupportedOperationException: makeParser not supported
at io.druid.data.input.avro.AvroParseSpec.makeParser(AvroParseSpec.java:64) ~[?:?]
at io.druid.data.input.impl.StringInputRowParser.initializeParser(StringInputRowParser.java:135) ~[druid-api-0.12.0.jar:0.12.0]
at io.druid.data.input.impl.StringInputRowParser.startFileFromBeginning(StringInputRowParser.java:141) ~[druid-api-0.12.0.jar:0.12.0]
at io.druid.data.input.impl.FileIteratingFirehose.getNextLineIterator(FileIteratingFirehose.java:91) ~[druid-api-0.12.0.jar:0.12.0]
at io.druid.data.input.impl.FileIteratingFirehose.hasMore(FileIteratingFirehose.java:67) ~[druid-api-0.12.0.jar:0.12.0]
at io.druid.indexing.common.task.IndexTask.generateAndPublishSegments(IndexTask.java:660) ~[druid-indexing-service-0.12.0.jar:0.12.0]