Resource limit exceeded

I am new to Druid.

I am using Druid 0.11.0 and groupBy v2. I want to run a groupBy query over more than 500k rows, but when I do, the following error occurs:

{
  "error": "Resource limit exceeded",
  "errorMessage": "Not enough dictionary space to execute this query. Try increasing druid.query.groupBy.maxMergingDictionarySize or enable disk spilling by setting druid.query.groupBy.maxOnDiskStorage to a positive number.",
  "errorClass": "io.druid.query.ResourceLimitExceededException",
  "host": "ubuntu:8083"
}

Can anyone help me figure out what I should do?

Hey Salman,

You can refer to this page: http://druid.io/docs/latest/querying/groupbyquery.html#memory-tuning-and-resource-limits

Make sure to set those configuration parameters in both Broker and Historical (and Middle Manager, if you’re using a real-time node).
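
For example, a minimal sketch of those lines in each service’s runtime.properties (the values are illustrative placeholders, not recommendations; tune them to your data and heap):

# max on-heap merging dictionary space per query, in bytes (default 100000000)
druid.query.groupBy.maxMergingDictionarySize=100000000
# max disk spill per query, in bytes (default 0, i.e. spilling disabled)
druid.query.groupBy.maxOnDiskStorage=1073741824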

Suhas

I have set this configuration in the Broker’s runtime.properties:

druid.query.groupBy.maxMergingDictionarySize=900000000

druid.query.groupBy.maxOnDiskStorage=100000

I still get this error:

{
  "error": "Resource limit exceeded",
  "errorMessage": "Not enough dictionary space to execute this query. Try increasing druid.query.groupBy.maxMergingDictionarySize or enable disk spilling by setting druid.query.groupBy.maxOnDiskStorage to a positive number.",
  "errorClass": "io.druid.query.ResourceLimitExceededException",
  "host": "ubuntu:8083"
}

Hey Salman,

Did you set it on the Historical node too? The merging dictionary is on-heap memory, so make sure there’s enough heap allocated.
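
For illustration only, the heap is set per service in its jvm.config. A sketch with placeholder sizes (assumptions for a small test box, not values from this thread):

-server
-Xms8g
-Xmx8g
-XX:MaxDirectMemorySize=4096m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8

The merging dictionary has to fit inside -Xmx along with everything else on the heap, so a 900000000-byte dictionary limit needs a comfortably larger heap than that.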

Suhas

Yes, I did set these configurations on the Historical, but when I ran the query through Postman, Postman stopped responding and crashed every time I ran it. I am running the groupBy query on about 10 lakh (1 million) rows.

Can you please share your Broker and Historical configurations?

Suhas

I have changed the config many times and tried different settings, but it does not work.

broker_configuration.txt (582 Bytes)

historical_configuration.txt (476 Bytes)

Can you also share their respective jvm.config files? Please also share your server’s configuration (total memory and number of cores).
Also, right off the bat, I can tell you haven’t set the druid.query.groupBy.maxMergingDictionarySize parameter.

Suhas

I tried setting the druid.query.groupBy.maxMergingDictionarySize parameter, but it did not help.

jvm_config.txt (499 Bytes)


Hi Salman,

Have you restarted the Historicals and Brokers after changing the configurations?

Jihoon

Yes, I restarted the Historical and Broker after changing their configurations… but do I have to change the Historical & Broker configuration in conf-quickstart or in conf? I am still confused about this.

Hey Jihoon,
I have already tried these two options, but nothing worked for me:

“1. Increasing druid.processing.buffer.sizeBytes. You need to set it for all your Historicals (http://druid.io/docs/latest/configuration/historical.html) and Brokers (http://druid.io/docs/latest/configuration/broker.html). If you have realtime nodes, you need to set it for them as well (http://druid.io/docs/latest/configuration/realtime.html).

2. Increasing druid.query.groupBy.maxOnDiskStorage to enable disk spilling (http://druid.io/docs/latest/querying/groupbyquery.html).”
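
For reference, a hedged sketch of option 1 in runtime.properties form (placeholder sizes, not tuned values). The processing buffers come out of direct memory, so -XX:MaxDirectMemorySize must cover roughly (druid.processing.numThreads + druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes:

druid.processing.buffer.sizeBytes=536870912
druid.processing.numMergeBuffers=2
druid.processing.numThreads=7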

Hey Druid,
When I run the groupBy query it takes too much time. I have already increased druid.query.groupBy.maxOnDiskStorage and druid.processing.buffer.sizeBytes, but the query time is still very long. How can I lower the time it takes to query the data?

Hey Salman,

One of the easiest ways is to make sure each of your segments is at least 300-500 MB.
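
One way to get there (a sketch, assuming batch ingestion with the native index task; the number is illustrative) is to raise the per-segment row target in the ingestion spec’s tuningConfig, or use a coarser segmentGranularity in the granularitySpec so more rows land in each segment:

"tuningConfig": {
  "type": "index",
  "targetPartitionSize": 5000000
}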

Suhas

I have only 1 segment, with a size of 182 MB. It still takes too much time.

Hi Salman,

What are your Broker and Historical node server specs (total physical memory and number of cores)?

Are all the node processes running on a single server, or is each node on a different server?

Hi Salman, conf-quickstart contains the configurations for the quickstart guide (http://druid.io/docs/0.12.1/tutorials/quickstart.html).

If you set up a cluster for production or a PoC, use the conf directory.
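
To make the distinction concrete, a sketch of the stock distribution layout; which set is actually read depends on the config directory your start commands put on the classpath (the quickstart commands reference conf-quickstart):

conf-quickstart/druid/broker/runtime.properties
conf-quickstart/druid/historical/runtime.properties
conf/druid/broker/runtime.properties
conf/druid/historical/runtime.properties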