I am trying to get binary Avro data ingested into Druid using Tranquility. It looks like this can only be done using a schema repo. I need to embed the schema in the Avro files for other Hadoop ecosystem tools anyway (Hive, Spark, etc.). Maybe there is a way to make those use a schema repo as well, but at this point that is a luxury for us.
Looking at the code (https://github.com/druid-io/druid/blob/master/extensions-core/avro-extensions/src/main/java/io/druid/data/input/avro/SchemaRepoBasedAvroBytesDecoder.java), the schema repo appears to be a mandatory requirement, and there is no way to specify the schema inline in kafka.json. Is that correct, or are there alternatives?
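For context, the avroBytesDecoder spec I'm referring to is the schema_repo one from the Avro extension docs, which looks roughly like this (topic and URL are placeholders for my setup):

```json
"avroBytesDecoder" : {
  "type" : "schema_repo",
  "subjectAndIdConverter" : {
    "type" : "avro_1124",
    "topic" : "my_topic"
  },
  "schemaRepository" : {
    "type" : "avro_1124_rest_client",
    "url" : "http://my-schema-repo:8080"
  }
}
```

What I'd ideally like is a decoder where the reader schema can be pasted directly into the spec, along the lines of this sketch (the "schema_inline" type and "schema" field here are hypothetical; I don't see anything like it in the code linked above):

```json
"avroBytesDecoder" : {
  "type" : "schema_inline",
  "schema" : {
    "namespace" : "io.example",
    "name" : "Event",
    "type" : "record",
    "fields" : [
      { "name" : "timestamp", "type" : "long" },
      { "name" : "page", "type" : "string" }
    ]
  }
}
```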