I’m new to Druid and am loading about 1.5M rows (around 1 GB) of data from a CSV file, with timestamps spanning a year or so.
Ingestion seems very slow - it takes around 25 seconds to create the segment for each day.
Is this normal? I’m only running a simple setup at this point - 1 coordinator, 1 historical node, 1 indexing service node.
Is there a better way to load the data? Maybe splitting the CSV into a separate file per day? I also noticed that the task appears to parse the entire file twice for every segment it creates.
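For reference, my ingestion spec looks roughly like this (paths, column names, and the interval are simplified placeholders, but the task type and granularity settings are what I’m actually using):

```json
{
  "type": "index",
  "spec": {
    "dataSchema": {
      "dataSource": "my_datasource",
      "parser": {
        "type": "string",
        "parseSpec": {
          "format": "csv",
          "timestampSpec": { "column": "timestamp", "format": "auto" },
          "columns": ["timestamp", "dim1", "dim2", "value"],
          "dimensionsSpec": { "dimensions": ["dim1", "dim2"] }
        }
      },
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "NONE",
        "intervals": ["2015-01-01/2016-01-01"]
      }
    },
    "ioConfig": {
      "type": "index",
      "firehose": {
        "type": "local",
        "baseDir": "/data",
        "filter": "data.csv"
      }
    }
  }
}
```

With `segmentGranularity` set to `DAY` over a year-long interval, the task builds several hundred segments, which is where the per-segment cost adds up.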
Thanks in advance,