High query latency when sending many topN queries

I’m getting high latency when I send a large number of consecutive topN queries (about 1 million in total), with at most 4 queries in flight at once. Each query has a hyperUnique aggregation and granularity set to "all". I get the following in my Historical log:

```
2016-10-12T02:46:25,674 ERROR [qtp1970856042-48] com.sun.jersey.spi.container.ContainerResponse - The exception contained within MappableContainerException could not be mapped to a response, re-throwing to the HTTP container
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.PriorityQueue.<init>(PriorityQueue.java:168) ~[?:1.8.0_101]
    at io.druid.query.topn.TopNNumericResultBuilder.<init>(TopNNumericResultBuilder.java:110) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
    at io.druid.query.topn.NumericTopNMetricSpec.getResultBuilder(NumericTopNMetricSpec.java:128) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
    at io.druid.query.topn.TopNBinaryFn.apply(TopNBinaryFn.java:126) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
    at io.druid.query.topn.TopNBinaryFn.apply(TopNBinaryFn.java:39) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
    at io.druid.common.guava.CombiningSequence$CombiningYieldingAccumulator.accumulate(CombiningSequence.java:212) ~[druid-common-0.9.1.1.jar:0.9.1.1]
    at com.metamx.common.guava.BaseSequence.makeYielder(BaseSequence.java:104) ~[java-util-0.27.9.jar:?]
    at com.metamx.common.guava.BaseSequence.toYielder(BaseSequence.java:81) ~[java-util-0.27.9.jar:?]
    at io.druid.common.guava.CombiningSequence.toYielder(CombiningSequence.java:78) ~[druid-common-0.9.1.1.jar:0.9.1.1]
    at com.metamx.common.guava.MappedSequence.toYielder(MappedSequence.java:46) ~[java-util-0.27.9.jar:?]
    at io.druid.query.CPUTimeMetricQueryRunner$1.toYielder(CPUTimeMetricQueryRunner.java:93) ~[druid-processing-0.9.1.1.jar:0.9.1.1]
    at com.metamx.common.guava.Sequences$1.toYielder(Sequences.java:98) ~[java-util-0.27.9.jar:?]
    at io.druid.server.QueryResource.doPost(QueryResource.java:224) ~[druid-server-0.9.1.1.jar:0.9.1.1]
    at sun.reflect.GeneratedMethodAccessor54.invoke(Unknown Source) ~[?:?]
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_101]
    at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101]
    at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$ResponseOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:205) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) ~[jersey-server-1.19.jar:1.19]
    at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) ~[jersey-servlet-1.19.jar:1.19]
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) ~[jersey-servlet-1.19.jar:1.19]
    at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) ~[jersey-servlet-1.19.jar:1.19]
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) ~[javax.servlet-api-3.1.0.jar:3.1.0]
    at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:278) ~[guice-servlet-4.0-beta.jar:?]
```
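For reference, the queries have roughly the following shape. This is only a sketch: the datasource, dimension, field names, threshold, and interval below are invented for illustration and are not from the actual query in this thread.

```python
import json

# Hypothetical topN query of the kind described above: hyperUnique
# aggregator, granularity "all". All names and values here are assumptions.
top_n_query = {
    "queryType": "topN",
    "dataSource": "my_datasource",          # assumed name
    "granularity": "all",
    "dimension": "some_dimension",          # assumed name
    "metric": "unique_users",
    "threshold": 100,                       # assumed threshold
    "aggregations": [
        {"type": "hyperUnique", "name": "unique_users", "fieldName": "user_id"}
    ],
    "intervals": ["2016-07-14/2016-10-12"]  # ~90 days of daily segments
}

# A body like this would be POSTed to the Historical's /druid/v2 endpoint.
print(json.dumps(top_n_query, indent=2))
```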

I have daily segments and am querying 90 days of data. Each segment is about 300 MB. I am sending queries directly to the Historical, since I don’t have realtime nodes and only have a single Historical. Its config is the following:

```
-Xms2048m
-Xmx2048m
-XX:MaxDirectMemorySize=5120m
druid.processing.buffer.sizeBytes=314572800
druid.processing.numThreads=4
```

The machine has 8 GB of RAM and 8 GB of swap (SSD).
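To put those numbers together, here is a rough memory-budget sketch for this Historical, assuming Druid's usual rule of thumb that direct memory must cover at least (numThreads + 1) processing buffers. All figures come from the config above; the variable names are just local labels.

```python
# Values taken from the Historical config above.
heap_bytes = 2048 * 1024 * 1024        # -Xmx2048m
max_direct_bytes = 5120 * 1024 * 1024  # -XX:MaxDirectMemorySize=5120m
buffer_bytes = 314572800               # druid.processing.buffer.sizeBytes (300 MB)
num_threads = 4                        # druid.processing.numThreads

# Minimum direct memory the processing buffers can demand at once,
# per the (numThreads + 1) * buffer.sizeBytes rule of thumb.
min_direct_mb = (num_threads + 1) * buffer_bytes / 1024**2
print(min_direct_mb)   # 1500.0 MB -- fits under the 5120 MB cap

# Worst-case JVM footprint (heap + direct cap), before counting the OS
# page cache needed to memory-map ~90 daily segments (~90 * 300 MB ≈ 27 GB).
jvm_total_mb = (heap_bytes + max_direct_bytes) / 1024**2
print(jvm_total_mb)    # 7168.0 MB on an 8 GB machine
```

With the JVM alone able to claim ~7 GB of an 8 GB box that also runs a MiddleManager, there is very little headroom left, which is consistent with the GC-overhead OOM above.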

I am using Imply and have the MiddleManager on the same machine.

Thanks,

Mo

Hi Mo, can you explain the use case for sending 1M topN queries consecutively?