BroadcastTablesTooLarge

Hi,
I am using Druid version 24.0.0 and the multi-stage query (MSQ) engine to ingest data into Druid from an AWS S3 bucket.
When I run the query, I get the following error:

Size of the broadcast tables exceed the memory reserved for them (memory reserved for broadcast tables = 108970406 bytes)

Is there any way I can increase this broadcast memory via configuration? One of the tables joined in the query, a dimension table, is fairly large (128 MB).

It is basically (37.5% × 30%) of the peon heap, i.e. about 11.25% of it.
So I would suggest increasing the peon heap size by 30-40%, and it should work.
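
For a quick sanity check, here is that arithmetic as a small Python sketch. The exact fractions are internal and can change between releases, so treat this as an estimate; the peon heap itself is set via `-Xmx` in `druid.indexer.runner.javaOptsArray` on the MiddleManager.

```python
# Back-of-the-envelope math for the broadcast-table reservation.
# Assumes the reservation is (37.5% usable-memory fraction) * (30% broadcast
# fraction) of the peon heap; the exact fractions may differ per release.

USABLE_FRACTION = 0.375
BROADCAST_FRACTION = 0.30

def broadcast_reservation(peon_heap_bytes: int) -> int:
    """Memory reserved for broadcast tables for a given peon heap."""
    return int(peon_heap_bytes * USABLE_FRACTION * BROADCAST_FRACTION)

def required_heap(broadcast_table_bytes: int) -> int:
    """Minimum peon heap needed to fit a broadcast table of this size."""
    return int(broadcast_table_bytes / (USABLE_FRACTION * BROADCAST_FRACTION))

# The error reports a 108,970,406-byte reservation; back out the heap it implies.
implied_heap = required_heap(108_970_406)
print(f"implied current heap: {implied_heap / 1024**2:.0f} MB")

# A 128 MB dimension table needs more heap than that, so a 30-40% bump
# clears the threshold with comfortable headroom.
print(f"heap needed for a 128 MB table: {required_heap(128 * 1024**2) / 1024**2:.0f} MB")
```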

Also, there is a bug in the 24.0 release where lookup memory is not taken into account correctly.

The fix will be part of the 25.0 release, which should be out in about two weeks.

Thank you very much, Karan. We will try increasing the peon memory.

Hi @KaranKumar

We were able to make progress with the query after making the suggested changes, but now the console has been stuck on this screen for a very long time: “Query complete, waiting for segments to be loaded… (stop waiting)”

I can see that the segments are written to S3 in the designated path. Is there anything else to be done?

Thanks in advance.

Hi @D_K. It looks like everything ran successfully; the segments are now waiting to be loaded onto the Historicals.

If this is taking too long, please check your Coordinator’s health, since the Coordinator is what assigns segments to Historicals.
When you see this symptom, it is usually because the Coordinator’s duty cycles have become too slow or the Historicals have run out of disk space. You can also poll the load status directly, as in the sketch below.
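
If you would rather not watch the console, the Coordinator exposes a load-status API you can poll. A minimal sketch, assuming the Coordinator is reachable at coordinator-host:8081 and the target datasource is named my_datasource (both are placeholders to adjust):

```python
# Poll the Coordinator's load-status API until the target datasource
# reports 100% of its used segments loaded on the Historicals.
# "coordinator-host:8081" and "my_datasource" are placeholders.
import time
import requests

COORDINATOR = "http://coordinator-host:8081"
DATASOURCE = "my_datasource"

while True:
    # Returns a map of datasource name -> percent of segments loaded.
    resp = requests.get(f"{COORDINATOR}/druid/coordinator/v1/loadstatus", timeout=10)
    resp.raise_for_status()
    pct = resp.json().get(DATASOURCE, 0.0)
    print(f"{DATASOURCE}: {pct:.1f}% loaded")
    if pct >= 100.0:
        break
    time.sleep(30)
```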
