How to deal with a nested Parquet schema

Hey,

we are using Hadoop indexing only, and we have this Parquet schema to index:

```
message base {
  required int64 timestamp;
  required binary cid (ENUM);
  required binary dvc (ENUM);
  required binary geo (ENUM);
  required group kvs (MAP) {
    repeated group kv (MAP_KEY_VALUE) {
      required binary key (ENUM);
      required binary value (ENUM);
    }
  }
}
```

As you can see, it uses a Parquet MAP type for the dimensions that are rather “dynamic”, i.e. each client is free to define its own dimensions; that's why they are stored in a MAP.
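
To make that concrete, a single record might look like this once deserialized (the kvs keys here are invented; every client can send different ones):

```
{
  "timestamp": 1467331200000,
  "cid": "client_a",
  "dvc": "mobile",
  "geo": "us",
  "kvs": {
    "campaign": "summer_sale",
    "ab_test": "variant_b"
  }
}
```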

I was reading through druid-parquet-extensions, and it doesn't look like dimensions nested inside a MAP can be ingested.

So it seems I would pretty much have to write my own extension for that purpose. Or do you know of some other way to do it?
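
For reference, this is roughly what I was hoping would work: a flattenSpec in the parser that pulls known keys out of the map. This is just a sketch on my side; the key names are invented, and I'm assuming a version of druid-parquet-extensions whose parquet parseSpec supports flattenSpec at all:

```
"parser": {
  "type": "parquet",
  "parseSpec": {
    "format": "parquet",
    "flattenSpec": {
      "useFieldDiscovery": true,
      "fields": [
        { "type": "path", "name": "campaign", "expr": "$.kvs.campaign" },
        { "type": "path", "name": "ab_test", "expr": "$.kvs.ab_test" }
      ]
    },
    "timestampSpec": { "column": "timestamp", "format": "millis" },
    "dimensionsSpec": { "dimensions": ["cid", "dvc", "geo", "campaign", "ab_test"] }
  }
}
```

Even if that worked, though, listing the keys explicitly defeats the point of them being dynamic, so it wouldn't fully solve our problem.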