Multiple reads on the same segment decrease performance

Hey guys,

This is our question:

We have two different TopN queries working on the same segment.

When we execute these queries separately, they take respectively:

1- 1.6 s

2- 1.4 s

When we execute both at the same time, the times are as follows:

1- 2.6 s

2- 2.1 s

It seems like the time is mostly spent on the historical, not on the broker.
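For reference, this is roughly how we fire both queries at the same time and measure them (a simplified sketch; the broker address and the query file names are placeholders for our real setup):

```python
# Minimal sketch of how we time the two TopN queries when fired concurrently.
# "broker:8082" and the query file names are placeholders for our real setup.
import json
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BROKER_URL = "http://broker:8082/druid/v2/"  # default broker query endpoint

def run_query(path):
    """POST one query JSON to the broker and return its wall-clock time."""
    with open(path) as f:
        query = json.load(f)
    start = time.time()
    resp = requests.post(BROKER_URL, json=query)
    resp.raise_for_status()
    return path, time.time() - start

if __name__ == "__main__":
    # Run both queries at the same time, as in the numbers above.
    with ThreadPoolExecutor(max_workers=2) as pool:
        for path, elapsed in pool.map(run_query, ["topn_1.json", "topn_2.json"]):
            print("%s: %.1f s" % (path, elapsed))
```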

Historical node: r3.8xlarge

Druid version: 0.9.1.1

Any idea?

I will look into the metrics tomorrow; I just want to know if someone has an explanation.

Thanks,

Ben

The metrics don’t help.
Historical query time, CPU time, and segment-and-cache time all increase, and wait time is still near 0…

I really don’t get it. Need help on this.

Thanks,

Ben

Any ideas, guys?

What about the number of segments pending?

That is a really long segment scan time for one segment. Can you provide more information like the on-disk size of the segment, and approximate cardinality of the dimension you’re taking the topN on?

Also what aggregators are you using?

Hi Charles, I’m Julien and I’m working with Ben.

The segment size is 474 MB; it contains one day of web analytics sessions for one of our customers (6,721,354 rows).

For the first query, the dimension cardinality is 156; we have a hyperUnique aggregator and two longSum aggregators.

For the second query, the dimension cardinality is 194; we have a hyperUnique aggregator and two longSum aggregators.

We disabled the cache for our runs.
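For reference, the first query roughly looks like this (a simplified sketch; the datasource, dimension, and metric names are placeholders, not our real schema):

```python
# Rough shape of the first TopN query; all field names are placeholders.
topn_query = {
    "queryType": "topN",
    "dataSource": "sessions",
    "granularity": "all",
    "dimension": "some_dimension",   # cardinality ~156 for query 1, ~194 for query 2
    "metric": "unique_visitors",
    "threshold": 100,
    "intervals": ["2016-09-01/2016-09-02"],
    "aggregations": [
        {"type": "hyperUnique", "name": "unique_visitors", "fieldName": "visitor_hll"},
        {"type": "longSum", "name": "page_views", "fieldName": "page_views"},
        {"type": "longSum", "name": "events", "fieldName": "events"}
    ],
    # Cache disabled for these runs
    "context": {"useCache": False, "populateCache": False}
}
```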

Julien

Hi Fangjin, the number of segments pending is 0 during our runs.

Julien

Without the hyperUnique aggregator, the first query runs in 450 ms.

We can run this query 10 times in parallel without any slowdown (all 10 queries finish in 450 ms). It’s perfect!

With the hyperUnique aggregator, I think there is some kind of bottleneck; it is harder to launch many queries at the same time and stay near 1.6 s.

Any ideas?

Thanks

Julien

Do you have more GC overhead when you run more concurrent queries? HLL uses more memory than the “simple” aggregators, so it’s possible that you’re hitting a limit there.
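One rough way to check: compare the JVM’s cumulative GC time on the historical before and after the concurrent run, for example by sampling jstat (sketch only; the pid argument stands for the historical process’s pid):

```python
# Rough sketch: sample the historical JVM's cumulative GC time (GCT, in seconds)
# before and after the concurrent run using `jstat -gcutil`.
import subprocess
import sys

def gc_time_seconds(pid):
    """Return the GCT column (total GC time) reported by `jstat -gcutil <pid>`."""
    out = subprocess.check_output(["jstat", "-gcutil", str(pid)]).decode()
    header, values = out.strip().splitlines()[-2:]
    row = dict(zip(header.split(), values.split()))
    return float(row["GCT"])

if __name__ == "__main__":
    pid = sys.argv[1]  # pid of the historical JVM
    before = gc_time_seconds(pid)
    input("Run the concurrent queries, then press Enter...")
    after = gc_time_seconds(pid)
    print("GC time during the run: %.2f s" % (after - before))
```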