When I do the following things, what I expect is … but instead … happens.
- Step 1: Create a new datasource with 1 replica and 2 tasks to load data from a Kafka topic (which has 2 partitions)
- Step 2: My 2 MiddleManagers have 3 CPUs each and worker capacity = 10; there are 2 Broker nodes and 1 node for the other services
- Step 3: When a segment gets full, a data drop is seen for a few records. I am ingesting 100k records per minute.
That means … whenever a segment flush happens, I assume the task is still pushing data to the old segment, and hence records are dropped until the new segment becomes active.
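For reference, this is a minimal sketch of the Kafka supervisor spec fields involved; the datasource name, topic, broker address, and segment thresholds below are placeholders, not values from my cluster:

```json
{
  "type": "kafka",
  "spec": {
    "dataSchema": { "dataSource": "my_datasource" },
    "ioConfig": {
      "topic": "my_topic",
      "consumerProperties": { "bootstrap.servers": "kafka:9092" },
      "taskCount": 2,
      "replicas": 1,
      "taskDuration": "PT1H"
    },
    "tuningConfig": {
      "type": "kafka",
      "maxRowsPerSegment": 5000000,
      "maxRowsInMemory": 150000
    }
  }
}
```

As I understand it, segment handoff normally overlaps with ongoing ingestion, so drops may instead come from parse errors or from messages falling outside the rejection windows; the task logs and the supervisor status report unparseable-event counts, which could confirm this.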
Please … can you help me find whether I am missing any configuration? Also, when I increase the replica count, my data from deep storage is not loaded,
and it does not show in the Segments tab as pending to load, so that segment is dropped completely.
Things I've tried
- My experiment 1
Increased the number of MiddleManagers and increased the replica count
- My experiment 2
Tried to enable segment loading via a curl POST to the Coordinator
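If the intent was to raise retention/replication so segments load back from deep storage, the Coordinator rules endpoint takes a JSON rule list like the following sketch (the tier name and replicant count here are assumptions):

```json
[
  {
    "type": "loadForever",
    "tieredReplicants": { "_default_tier": 2 }
  }
]
```

This can be submitted with `curl -X POST -H 'Content-Type: application/json' http://<coordinator>:8081/druid/coordinator/v1/rules/<datasource> -d @rules.json`. Segments only load if the Historicals have spare capacity, which can be checked via `GET /druid/coordinator/v1/loadstatus`.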
- My experiment 3