Hi,
I am using Druid version 24.0.0 with the multi-stage query (MSQ) engine to ingest data into Druid from an AWS S3 bucket.
When I run the query, I get the following error:
Size of the broadcast tables exceed the memory reserved for them (memory reserved for broadcast tables = 108970406 bytes)
Is there any way I can increase this broadcast memory in the configuration? One of the tables joined in the query, a dimension table, is large (128 MB).
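For anyone hitting the same limit: the memory reserved for broadcast tables in MSQ is derived from the heap available to the worker tasks, so giving the tasks a larger heap is one way to raise it. A minimal sketch of a MiddleManager `runtime.properties` change (the property name is a standard Druid task setting; the specific heap sizes are assumptions to tune for your hardware):

```
# MiddleManager runtime.properties (sketch — adjust -Xmx/-Xms for your cluster)
druid.indexer.runner.javaOptsArray=["-server","-Xms4g","-Xmx4g","-XX:MaxDirectMemorySize=8g"]
```

Task services need a restart for the new JVM options to take effect on newly launched tasks.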
We were able to progress with the query after making the suggested changes, but now we have been stuck on this console screen for a very long time: “Query complete, waiting for segments to be loaded… (stop waiting)”
Hi @D_K. It looks like everything ran successfully; the segments are now waiting to be loaded onto the historicals.
If this is taking too long, check your coordinator’s health, since the coordinator is what assigns segments to historicals.
This symptom usually appears when the coordinator’s duty cycles become too slow, or when the historicals have run out of disk space.
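You can watch the load progress directly through the Coordinator API rather than the console spinner. These are standard Druid Coordinator endpoints; the host and port below are placeholders for your deployment:

```
# Percentage of each datasource's segments loaded onto historicals
curl http://COORDINATOR_HOST:8081/druid/coordinator/v1/loadstatus

# Segments currently queued for load/drop on each historical
curl http://COORDINATOR_HOST:8081/druid/coordinator/v1/loadqueue
```

If `loadstatus` stays below 100% and `loadqueue` is not draining, that points to the coordinator or historical disk issues described above.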