Terrible performance on production cluster: slow queries

Hi,

We have been running a Druid cluster in production for the last few months. It is a 5-node cluster (2 Historicals, 2 Brokers, 2 MiddleManagers, 1 Coordinator, and 1 Overlord. I know that 2 Coordinators and 2 Overlords are recommended, but I don't think that is causing any immediate problems. The Brokers are behind a load balancer.)

The Druid cluster is hosted on the IBM Cloud Kubernetes Service, where the pods are deployed across a couple of worker nodes. Some of the Druid processes may run on worker nodes that also host other applications. (I haven't seen any OOM issues on any of the Druid pods; it's just that the queries are extremely slow.)

Recently we noticed that our APIs, which internally fetch data from Druid, have become quite slow. After looking at the time spent by the problematic API in New Relic, we noticed that the connection to Druid was taking a lot of time (sometimes ridiculously high, 15+ seconds).

I tried the following things:

  1. I initially thought this could be due to having too few Brokers, so I tried scaling them first. That didn't have any effect.

  2. Since the Broker and Historical processes benefit greatly from CPU and RAM, I tried adjusting some of the parameters for these processes, such as druid.server.http.numThreads, druid.broker.http.numConnections, druid.processing.buffer.sizeBytes, druid.processing.numMergeBuffers, and druid.processing.numThreads, and adjusted -Xms, -Xmx, and -XX:MaxDirectMemorySize accordingly. That has not helped yet; I may not have set the best possible configs (see the sketch after this list).
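
To make the relationship between these settings concrete, here is a rough sketch of a Historical runtime.properties; the numbers are illustrative, not our actual values. As I understand the docs, -XX:MaxDirectMemorySize has to cover druid.processing.buffer.sizeBytes * (druid.processing.numThreads + druid.processing.numMergeBuffers + 1):

```properties
# Sketch of a Historical runtime.properties (values are illustrative, not our settings)

# HTTP server threads; the docs suggest this should be a bit higher than the sum of
# druid.broker.http.numConnections across all Brokers.
druid.server.http.numThreads=25

# Query processing
druid.processing.numThreads=7                 # commonly (cores - 1)
druid.processing.numMergeBuffers=2
druid.processing.buffer.sizeBytes=268435456   # 256 MB per processing buffer

# jvm.config (not runtime.properties):
#   -Xms4g -Xmx4g
#   -XX:MaxDirectMemorySize=3g
# Direct memory should cover buffer.sizeBytes * (numThreads + numMergeBuffers + 1)
#   = 256 MB * (7 + 2 + 1) = 2.5 GB, so 3g leaves some headroom.
```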

Cluster details:

- The datasource being queried is hardly 12 MB.

- The number of segments is 1634.

- Data exists from 5 February 2019 onwards.

- Each segment is only a few KB in size (I wonder if this could be a problem).

- Caching is enabled on the Historicals and disabled on the Brokers.

The segment granularity is 15 minutes, and the datasource has 20 dimensions: around 7-8 are high cardinality (UUIDs) and the rest are relatively low cardinality (things such as browserName, osType, etc.). We mostly query with a query granularity of DAY, but we kept the segment granularity at 15 minutes in case we decide to support lower granularities such as HOUR or 15 minutes. The only query type we use is groupBy.
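
To make this concrete, our queries look roughly like the sketch below; the datasource name and the exact dimension/aggregator list are placeholders rather than our real schema (browserName and osType are just the low-cardinality examples mentioned above):

```json
{
  "queryType": "groupBy",
  "dataSource": "events",
  "granularity": "day",
  "intervals": ["2019-02-05/2019-03-01"],
  "dimensions": ["browserName", "osType"],
  "aggregations": [
    { "type": "count", "name": "eventCount" }
  ]
}
```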

What I am unable to understand is: with such a small datasource, why would queries take more than a few milliseconds?

Questions:

  1. Is it because the 15-minute segment granularity, combined with the many dimensions (some with high cardinality), leads to these tiny segments?

  2. If there are many segments (one per 15-minute bucket), each only a few KB in size, and the query runs with granularity DAY, HOUR, or WEEK, does merging the per-segment results take a lot of time on the Historicals/Broker? How can this be sped up? Should any query over this data really take more than a few milliseconds?

I also enabled metrics in order to determine where the bottleneck could be. I am attaching the metrics for the Broker node (Metric-broker.txt; I copied only the metrics for one of the queries to avoid a large file). Metrics such as query/time and query/node/time don't look good. I think it takes a lot of time for the data to come from the Historicals to the Broker.
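
For anyone who wants to capture the same metrics, they can be enabled roughly along these lines in common.runtime.properties; this is a sketch rather than our exact settings, and the monitor class name differs across Druid versions:

```properties
# Emit query and JVM metrics (sketch; adjust class names for your Druid version)
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]
druid.emitter=logging
druid.emitter.logging.logLevel=info
```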

Can someone take a look at the metrics for one of the queries and help identify the issue?

**I am not able to figure out whether the issue is caused by the cluster itself or by the size of the data segments and the segment granularity being used.**

Thanks,

Prathamesh

Metric-broker.txt (225 KB)

Hey Prathamesh,

I feel like this has been addressed in some of your other threads, but just to respond to this one:

Your poor query performance is absolutely related to the size of your segments. Note that query granularity and segment granularity are **two unrelated concepts**: query granularity at ingestion time determines the minimum granularity of the time buckets available for aggregations, whereas segment granularity determines how your data is partitioned when it is written to disk. You can have a segment granularity of DAY and still query at minute, second, or millisecond query granularity. You should definitely look into re-ingesting (if you still have the source data) or re-indexing those segments with a broader segment granularity so that you get much larger segments (a rough example task spec is sketched at the end of this reply). General guidance is that segments should be between 300 and 700 MB in size, so if you're only working with 12 MB, this should be a single segment.

(Each segment can only be handled by a single processing thread, so by putting your data into a single segment you will only be using one thread per query, which may be sub-optimal right now. But presumably you will be ingesting significantly more data than this in the future, at which point you'll greatly benefit from having properly sized segments.)
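
As an illustration (not necessarily the exact spec for your Druid version), a compaction task along these lines would rewrite an interval of existing segments with a coarser segment granularity; the dataSource name and interval here are placeholders:

```json
{
  "type": "compact",
  "dataSource": "events",
  "interval": "2019-02-05/2020-01-01",
  "segmentGranularity": "YEAR"
}
```

The task would be POSTed to the Overlord at /druid/indexer/v1/task. Note that newer Druid versions expect the segment granularity under a granularitySpec rather than at the top level, Coordinator auto-compaction can do the same thing on a schedule, and re-ingesting the raw data with a coarser segmentGranularity in the ingestion spec's granularitySpec is the other option.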

Hey David,

Yes, you have already addressed this.

Just in case anyone finds this thread, here are the related discussions:
Druid internals: Connection pooling on brokers and historicals - https://groups.google.com/forum/#!topic/druid-user/sjhVuqZY_qU

Acceptable query time for data worth 6 months - https://groups.google.com/forum/#!topic/druid-user/znXXHgMh1EQ

concurrent queries: Scale broker nodes or historical nodes - https://groups.google.com/forum/#!topic/druid-user/z5T1kUvcXsQ

Number of dimensions & granularity affecting query performance - https://groups.google.com/forum/#!topic/druid-user/AHiz7Kl5Y1k

Thanks,

Prathamesh