Load balancing, critical performance loss

Hey guys,

We have a big problem with segment load balancing between historical nodes.

When we query the data, the first run is fast, but then segment load balancing begins and the historical nodes spend a lot of time in “scanning segments”.

We can’t deal with this performance issue.

For example, a TopN query answers in 1500 ms on the first run, and then the time varies randomly between 2000 and 11000 ms…

We use r3.2xlarge instances for historicals (8 hardware cores and 7 workers on the historical).

Here is the jvm.config for historical nodes:

8 vCPU - 61 GB memory - 160 GB SSD

Any ideas or help with this?



Hey Ben,

The first step here is to figure out what performance bottleneck you’re running into. The first things to check are:

  1. Is it a hardware performance ceiling (CPU pegged or disk I/O too high are the most common)?

  2. Is it GCs in your JVMs?

  3. Is it that you’re not making good use of your existing resources (too few processing or http threads, too small processing buffers)?

Based on pinning that down, you can move on to fixing the root cause.
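For point 3, a rough sketch of the relevant historical settings — property names per the Druid historical configuration docs, but the values below are illustrative assumptions for an 8-core box, not recommendations:

```properties
# runtime.properties on the historical node (illustrative values)
# Roughly one processing thread per core is a common starting point.
druid.processing.numThreads=7
# Per-thread off-heap buffer used for query processing (512 MB here).
druid.processing.buffer.sizeBytes=536870912
# HTTP threads serving query requests from the broker.
druid.server.http.numThreads=25
```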

We finally found out what our problem was!
The historical nodes write to memcached before returning the result to the broker node, causing huge delays because the memcached instance was too small!

Can you confirm that the historical nodes write to the cache before returning the result to the broker node?



By default, historical nodes should not write to the cache unless you have explicitly configured them to do so.

See http://druid.io/docs/latest/configuration/historical.html
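A minimal sketch of what that explicit opt-in looks like, assuming the `druid.historical.cache.*` property names from the historical configuration docs:

```properties
# runtime.properties on the historical node
# Both default to false; populateCache=true is what makes historicals
# write query results to the cache backend (e.g. memcached).
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true
```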

So what’s recommended?
Is the broker the one that should populate the cache?
Because I read that the historicals should populate it, not the broker.
By the way, I just wanted to know why the historicals don’t return the result before writing to the cache.

Hi Ben,

Historical nodes only write results to cache after a query has completed. One quick way to verify that the cache is the problem is to set "useCache": "false" and "populateCache": "false" in your query context according to http://druid.io/docs/ (context is a key you can set in your query JSON).
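For example, a TopN query with caching disabled via the context key might look like this — the datasource, dimension, metric, and interval below are made up for illustration:

```json
{
  "queryType": "topN",
  "dataSource": "example_datasource",
  "dimension": "example_dim",
  "metric": "count",
  "threshold": 10,
  "granularity": "all",
  "intervals": ["2015-01-01/2015-02-01"],
  "aggregations": [{ "type": "count", "name": "count" }],
  "context": {
    "useCache": false,
    "populateCache": false
  }
}
```

If query times become stable with both flags off, the cache path is implicated.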

You can set either the historicals or the brokers to use the cache. There are different tradeoffs. http://druid.io/docs/

I would suggest understanding why the cache is the bottleneck though if that is the problem.

Hi Fangjin,

Cache was the problem in our case. The historical nodes were writing to a full memcached, so they waited for memcached to free some space before writing to it and returning the result to the broker.

We found out because no CPUs were busy on the historical nodes, and the metrics showed that the problem did not come from the historicals but from the cache.

So here’s my question: if the cache is enabled on a historical, will it, once the query completes, write to the cache and only then return the result to the broker?

Our configuration was: useCache: true and populateCache: true on the historicals; useCache: true and populateCache: false on the broker.
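In runtime.properties terms, that configuration would correspond to something like the following (property names assumed from the historical and broker configuration docs):

```properties
# Historical runtime.properties
druid.historical.cache.useCache=true
druid.historical.cache.populateCache=true

# Broker runtime.properties
druid.broker.cache.useCache=true
druid.broker.cache.populateCache=false
```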



Hi Ben, yes, if the cache is enabled on the historical, it will write to the cache and then return the result.

Hi Benjamin,

I believe you can also try writing to memcached asynchronously by setting

druid.historical.cache.numBackgroundThreads to a value greater than 0 (the default) in order to enable background caching.

I would suggest trying a value greater than or equal to your number of processing threads and seeing if it helps.
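Following that suggestion, and given the 7 workers mentioned earlier in the thread, the setting would look something like this (the value is illustrative):

```properties
# Historical runtime.properties
# Write cache entries from background threads instead of blocking the
# query path; >= the number of processing threads, per the advice above.
druid.historical.cache.numBackgroundThreads=7
```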



Thanks Nishant, we’ll try it ASAP!

By the way, why isn’t numBackgroundThreads documented anywhere in the docs?

Good question. The initial PR for this feature had docs (https://github.com/druid-io/druid/pull/936);
looks like they got lost somewhere in doc restructuring or merging.

Could you create a GitHub issue or PR for this?



The docs were removed because the feature is unstable and its use in production is not encouraged.