Limitations of Parquet file ingestion


I am using Druid 0.14.2, and per the documentation (the **Reindexing and Delta Ingestion with Hadoop Batch Ingestion** section), we need to use the **multi** inputSpec to do delta ingestion for Parquet files, i.e. to add new data to an existing interval. The indexing task reads all existing segments plus the newly specified files and re-indexes them together for that interval.
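For context, a delta-ingestion `ioConfig` along these lines is what I mean (the datasource name, interval, and paths below are just placeholders for illustration, and the Parquet input format class is my understanding of what's needed for Parquet on this version, so please correct me if it differs):

```json
{
  "ioConfig": {
    "type": "hadoop",
    "inputSpec": {
      "type": "multi",
      "children": [
        {
          "type": "dataSource",
          "ingestionSpec": {
            "dataSource": "my_datasource",
            "intervals": ["2019-01-01/2019-01-02"]
          }
        },
        {
          "type": "static",
          "inputFormat": "org.apache.druid.data.input.parquet.DruidParquetInputFormat",
          "paths": "hdfs://namenode/new/parquet/files/"
        }
      ]
    }
  }
}
```

With this spec, every delta load for the interval re-reads and rewrites all of the interval's existing segments, which is the overhead I am hoping to avoid.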

Is there a way Druid can ingest Parquet files in parallel, i.e. something similar to native batch ingestion, which uses the appendToExisting flag to allow multiple indexing tasks to write to the same interval?
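To make the comparison concrete, this is the kind of native batch `ioConfig` I have in mind (the firehose type and path are illustrative; I realize the native batch firehoses on 0.14.2 may not read Parquet, which is essentially my question):

```json
{
  "ioConfig": {
    "type": "index",
    "firehose": {
      "type": "local",
      "baseDir": "/data/new",
      "filter": "*.json"
    },
    "appendToExisting": true
  }
}
```

With `appendToExisting: true`, each task appends new segments to the interval without re-indexing the data already there, which is the behavior I would like for Parquet input.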


Vinay Patil