Running append-mode Hadoop ingestion in parallel

Hi,
Here is my scenario:
Files are continuously landing in Dir1. These files contain rows with varying (recent) dates.
I want to continuously append these rows to the datacube DC1 at day granularity.
Here is the solution I am working on:
A cron job runs every hour; it picks up the unique dates present in those files and runs an ingestion task in append mode for every date.
E.g., if file1 contains 5 rows for 22nd August and 5 rows for 24th August, then:
2017-08-22.csv will feed the JSON spec J1, run in append mode for segment interval 2017-08-22
2017-08-24.csv will feed the JSON spec J2, run in append mode for segment interval 2017-08-24
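
To make the cron step concrete, here is a rough sketch of the grouping it performs; the directory paths and the date-column position are illustrative, not my actual setup:

import csv
import os
from collections import defaultdict

INPUT_DIR = "/data/Dir1"        # assumption: directory where files keep landing
STAGING_DIR = "/data/staging"   # assumption: per-date CSVs are written here
DATE_COLUMN = 0                 # assumption: the row's date is in the first column

def split_rows_by_date():
    """Group all rows by their day, writing one CSV per date found."""
    rows_by_date = defaultdict(list)
    for name in os.listdir(INPUT_DIR):
        with open(os.path.join(INPUT_DIR, name)) as f:
            for row in csv.reader(f):
                if row:
                    rows_by_date[row[DATE_COLUMN][:10]].append(row)
    for date, rows in rows_by_date.items():
        with open(os.path.join(STAGING_DIR, date + ".csv"), "w", newline="") as f:
            csv.writer(f).writerows(rows)
    return sorted(rows_by_date)   # the unique dates, one append task each

Each of these dates then gets its own append-mode task whose ioConfig looks like this: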
"ioConfig" : {
“type” : “hadoop”,
“inputSpec” : {
“type” : “multi”,
“children”: [
{
“type” : “dataSource”,
“ingestionSpec” : {
“dataSource”: “DB1”,
“intervals”: ["<INPUT_DATE_WORKED_UPON>"]
}
},
{
“type” : “static”,
“paths” : “<PATH_TO_INPUT_FILE>/”
}
]
}
}
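
For reference, here is a minimal sketch of how each per-date task is templated and submitted to the overlord's task endpoint; the overlord URL, staging path, and helper names are assumptions for illustration:

import json
from datetime import datetime, timedelta

import requests

OVERLORD_URL = "http://overlord:8090/druid/indexer/v1/task"  # assumption: default overlord port

def next_day(date):
    return (datetime.strptime(date, "%Y-%m-%d") + timedelta(days=1)).strftime("%Y-%m-%d")

def submit_append_task(task_template, date):
    """Fill in one date's interval and input path, then POST the index_hadoop task."""
    task = json.loads(json.dumps(task_template))        # cheap deep copy of the template
    input_spec = task["spec"]["ioConfig"]["inputSpec"]
    input_spec["children"][0]["ingestionSpec"]["intervals"] = [date + "/" + next_day(date)]
    input_spec["children"][1]["paths"] = "/data/staging/" + date + ".csv"  # assumption
    resp = requests.post(OVERLORD_URL, json=task)
    resp.raise_for_status()
    return resp.json()["task"]                          # the overlord returns the task id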

Problem:
A segment for the given date is not necessarily present; for every new day there will be a date whose segment has not been created yet.
So when I use the above config for such a date, I get a "segment does not exist" error when control reaches the "children" block in ioConfig.
Solution:
Check whether a segment exists for that date in the MySQL metadata table druid_segments (where used = 1), as sketched below.
If it exists, the above config is fine.
If it does not exist, the above config will not work, because the first child in ioConfig refers to a segment that has not yet been created. In that case, plain Hadoop batch ingestion (a "static" inputSpec without the "multi" type) is used instead.
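
The existence check itself is a simple query against the metadata store. A sketch, assuming the default druid_segments layout (dataSource, start, end, used columns with ISO-8601 interval strings) and a pymysql connection whose parameters are placeholders:

from datetime import datetime, timedelta

import pymysql

def segment_exists(conn, datasource, date):
    """True if any used segment of `datasource` overlaps the one-day interval for `date`."""
    day_start = date + "T00:00:00.000Z"
    nxt = (datetime.strptime(date, "%Y-%m-%d") + timedelta(days=1)).strftime("%Y-%m-%d")
    day_end = nxt + "T00:00:00.000Z"
    sql = ("SELECT COUNT(*) FROM druid_segments "
           "WHERE dataSource = %s AND used = 1 AND start < %s AND `end` > %s")
    with conn.cursor() as cur:
        cur.execute(sql, (datasource, day_end, day_start))
        return cur.fetchone()[0] > 0

# usage, with placeholder credentials:
# conn = pymysql.connect(host="metadata-host", user="druid", password="...", db="druid")
# segment_exists(conn, "DC1", "2017-08-22")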
Again a problem:
I want to run these ingestions in parallel, and the files contain various dates.
So if thread1 handles 22nd and 23rd August, and thread2 handles 23rd and 24th August, and both check whether a segment interval for 23rd August exists in the MySQL table, both will get a "0" answer the first time, and both will try to reindex that date from scratch with their respective records.
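
To make the race concrete, here is a minimal stand-in for the unsynchronized check-then-act; the sleep represents the gap between the metadata check and the task submission:

import threading
import time

created_segments = set()   # stand-in for used=1 rows in druid_segments
actions = []

def ingest_date(worker, date):
    exists = date in created_segments      # the metadata check
    time.sleep(0.1)                        # window before the task is submitted
    if exists:
        actions.append((worker, date, "delta append via multi inputSpec"))
    else:
        actions.append((worker, date, "plain batch index from scratch"))
        created_segments.add(date)

t1 = threading.Thread(target=ingest_date, args=("thread1", "2017-08-23"))
t2 = threading.Thread(target=ingest_date, args=("thread2", "2017-08-23"))
t1.start(); t2.start(); t1.join(); t2.join()
print(actions)   # both workers report "plain batch index from scratch"

Both workers index 23rd August from scratch with only their own records, which is exactly the situation I need to avoid.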

How can I overcome this problem?