I’ve set up a Druid cluster and started some ingestion jobs using S3.
I was able to import one day of raw data (35 GB), but now when I try to load more data, Druid throws an out-of-memory exception.
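For context, my understanding is that a data node’s direct memory has to cover roughly (druid.processing.numThreads + druid.processing.numMergeBuffers + 1) × druid.processing.buffer.sizeBytes, so these are the knobs I’ve been looking at (the values below are illustrative, not my actual settings):

```
# runtime.properties — processing buffers (sizes are placeholders)
druid.processing.buffer.sizeBytes=1073741824
druid.processing.numThreads=8
druid.processing.numMergeBuffers=4
```

With those numbers the JVM would need something like -XX:MaxDirectMemorySize=13g ((8 + 4 + 1) × 1 GiB) on top of the heap (-Xmx). Is this the right place to look for the OOM?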
I’ve noticed the following things:
- The raw data, after indexing, comes to around 2.2 GB in deep storage, yet the segment cache is 5.1 GB. How is that possible?
- Druid is only using my 8 GB EBS root volume and not the 3.2 TB of SSD that I provisioned for the data server (see the config sketch below).
Am I missing some configuration?
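In case it helps, this is the sort of thing I’d expect to need in the Historical’s runtime.properties to point the segment cache at the SSD (the mount path and sizes are placeholders for my setup):

```
# runtime.properties on the data server (path and sizes are placeholders)
druid.segmentCache.locations=[{"path":"/mnt/ssd/druid/segment-cache","maxSize":3200000000000}]
# advertised capacity of this server; should not exceed the cache size above
druid.server.maxSize=3200000000000
```

If druid.segmentCache.locations isn’t set, does Druid just put the cache somewhere under the install directory on the root volume? That would explain why only the EBS volume is filling up.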