Cannot construct instance of `org.apache.druid.segment.loading.LocalLoadSpec`

Hi Druid,

I am using Druid 0.19.0, installed in cluster mode with an almost default config.

  • 3 data nodes (i3.4xlarge)
  • 1 master node (m5.2xlarge)
  • 1 query node (m5.2xlarge)

I tried to ingest twice, but only one attempt succeeded.

I saw this error in the historical log when it failed.

I found the issue.
When using local disk as deep storage in cluster mode, it does not work because each data node tries to load all segments from its own local disk.
But each data node only holds some of the segments, as expected.

When I changed deep storage from local to S3, the error was gone.

For local disk (only viable in a cluster if this is a network mount): <------ It does not work in cluster mode without a shared mount.

For S3:
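For reference, the deep storage settings in common.runtime.properties look roughly like this; the bucket and paths below are placeholders, not the real values from my cluster:

```properties
# Local deep storage: only works in a cluster if this directory is a
# shared network mount visible to every data node.
# druid.storage.type=local
# druid.storage.storageDirectory=/mnt/shared/druid/segments

# S3 deep storage (requires druid-s3-extensions in druid.extensions.loadList;
# bucket and base key below are placeholders):
druid.storage.type=s3
druid.storage.bucket=my-druid-bucket
druid.storage.baseKey=druid/segments
```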

On Tuesday, September 8, 2020 at 12:09:04 PM UTC-7, tiny657 wrote:

Hi @tiny657, I’m glad you were able to resolve your own issue. How’s your Druid project going? Do you need more help? Thanks, Matt

Hi Matt,
Thank you for your reply.

I did the performance test before production. I would like to confirm this performance test result.

  • Master: one m5.2xlarge
  • Query: one m5.8xlarge
  • Data: three i3.4xlarge

Changed config:

druid.query.groupBy.maxOnDiskStorage=20000000000 (20 GB. Without this option I got an exception, I think because the groupBy query uses over 2 GB of memory.)

50M rows (3 GB, 6 dimensions), ingested into S3 rather than local storage.
All rows have the same timestamp because I loaded data for a single date.

Ingestion spec:

"tuningConfig": {
  "type": "index_parallel",
  "maxNumConcurrentSubTasks": 16,
  "partitionsSpec": {
    "type": "hashed",
    "numShards": 10,
    "partitionDimensions": ["metric_id", "user_id"]
  }
}

Selection query:

select metric_id, user_id, count(distinct event_value)
from "table"
group by metric_id, user_id

It took around 30 seconds.
Is this reasonable performance for the above config?

On Thursday, September 17, 2020 at 3:14:52 PM UTC-7, matt…@imply.io wrote:


So Druid performance is a function of a few key things - and because Druid is horizontally scalable, there is no single answer to "how fast should Druid be?" because ... it depends!

This is my really simple (maybe it is too simple!!!) checklist when looking at Druid performance.

  1. How big your segments are - both the number of rows (around 5 million) and their size on disk (300-700 MB).
  2. How many segments you scan in a query - and how many cores you have. Query parallelisation in Druid's Historical processes is per segment, per core - so if one segment scans in 1 second and you want the query to run in 1 second, you need 1 core per segment being scanned.
  3. The query pattern, i.e. filtering, sorting, and grouping - one interesting thing, for example, is that numeric values you only ever filter by (e.g. an application ID or a user ID) should be stored as strings so they get bitmap indexes.
  4. The query functions you use - not just things like UPPER or aggregations like COUNT, but also any DataSketches functions - these make the query harder to compute: intermediate results become more expensive on the Historicals, and the final merge can become more expensive on the Broker.

It is probably a good idea to first find out how many segments are being scanned, then check what size they are - that is a good first check - and then compare that number to the number of cores. You can then ask yourself: if I have 50 segments and I need the results in under a second, how many cores should I have?
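The per-segment, per-core rule above can be turned into a quick back-of-the-envelope calculation. This is just illustrative arithmetic, not Druid's actual scheduling code:

```python
import math

def cores_needed(segments_scanned, per_segment_scan_secs, target_secs):
    """Cores needed to scan all segments within the target time,
    assuming each core scans one segment at a time."""
    scans_per_core = target_secs / per_segment_scan_secs
    return math.ceil(segments_scanned / scans_per_core)

# The example from the post: 50 segments, ~1 s per segment scan,
# and an answer wanted in under a second -> one core per segment.
print(cores_needed(50, 1.0, 1.0))  # -> 50
```

With a 2-second target, each core gets two scan "waves", so 25 cores would suffice.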

My gut feeling is that if you have to go as far as amending the GROUP BY engine configuration, maybe you have a problem with segment sizes?

Thank you for the detailed information.
The Druid docs recommend around 5M rows (or ~500 MB) per segment.

In my case, I chose 10 segments because my data size is 50M rows (3 GB).

To use more CPU cores, I would need more than 10 segments.

Here are my assumptions about why my groupBy query is slow:

  1. The query does not filter on the timestamp.
  2. The groupBy query spills to disk while running, because 2 GB of buffer is not enough for our query.

Hey, have you checked our doc for cluster deployment?
That doc shows some examples of basic cluster setups. You can get some ideas from them, especially for processing/merge buffers and processing threads.

In general, it's recommended to set druid.processing.numThreads to (number of CPU cores - 1).

For druid.processing.numMergeBuffers, 32 seems like overkill since only groupBy queries use merge buffers. I would recommend starting with 4 merge buffers.

You also need to set druid.processing.buffer.sizeBytes. A rule of thumb for the buffer size is 500 MB.
Assuming you didn't set the buffer size, that is probably why you saw disk spill: when it is not set explicitly, the buffer size is computed as (max direct memory size / (numProcessingThreads + numMergeBuffers + 1)).
(Even with 100 GB of direct memory, the computed buffer size would be only 100 GB / (64 + 32 + 1) ≈ 1 GB in your setting!)
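Plugging the numbers from this thread into that formula makes the point concrete. A quick sanity check of the arithmetic, not Druid's actual sizing code:

```python
def default_buffer_size_gb(max_direct_memory_gb, num_threads, num_merge_buffers):
    # When druid.processing.buffer.sizeBytes is not set explicitly, Druid
    # divides direct memory evenly among processing threads, merge buffers,
    # and one extra slice.
    return max_direct_memory_gb / (num_threads + num_merge_buffers + 1)

# The setting discussed here: 100 GB direct memory, 64 threads, 32 buffers.
print(round(default_buffer_size_gb(100, 64, 32), 2))  # -> 1.03
```

So even a very large direct memory allocation yields only about 1 GB per buffer once it is split 97 ways.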

You will probably not need disk spill after you set those properties since your data is pretty small.

Hope this helps.

Thank you for your detailed explanation.
I changed the configuration based on the document you mentioned.
I set up Druid and tested the configuration as below.

For the Historicals, I tested these runtime properties:
druid.processing.numThreads from 15 to 64.
druid.processing.numMergeBuffers from 4 to 32.
druid.processing.buffer.sizeBytes from 500 MB to 2 GB. (I can't increase this beyond 2 GB.)

I also tested jvm.config:
-Xms 8g to 24g
-Xmx 8g to 24g
-XX:MaxDirectMemorySize 13g to 50g

I got the following error, so I added the maxOnDiskStorage config.

Resource limit exceeded / Not enough aggregation buffer space to execute this query.
Try increasing druid.processing.buffer.sizeBytes or enable disk spilling by setting druid.query.groupBy.maxOnDiskStorage to a positive number

Hmm, maybe it's because you are computing distinct counts.
Druid uses an approximate distinct-count algorithm by default, which is much more memory-efficient than an exact algorithm, but it can still use a lot of memory.
Try using APPROX_COUNT_DISTINCT_DS_HLL or APPROX_COUNT_DISTINCT_DS_THETA, which use DataSketches. They are supposed to be more memory-efficient.
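For example, the earlier test query could be rewritten like this (a sketch of the change, assuming the druid-datasketches extension is loaded; not a tested query):

```sql
SELECT metric_id, user_id, APPROX_COUNT_DISTINCT_DS_HLL(event_value)
FROM "table"
GROUP BY metric_id, user_id
```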

Also, assuming the query you tested is not your actual query pattern, try testing other queries which have a filter on the timestamp.
Druid can handle those queries much more efficiently than others.

In general, queries can be very slow once they hit disk. I would suggest not enabling disk spill unless that's what you want (you don't care about query speed and just want to see the result).
If you do care about query speed, consider other options instead, such as optimizing your queries and cluster setup, or scaling your cluster.

For the setup you used for testing, please read our docs more carefully and optimize it.
I would suggest not raising both the number of processing threads and the number of merge buffers.
Since you have only 3 segments on each data node, 15 processing threads should be more than enough.
For merge buffers, one groupBy query uses one merge buffer on each Historical; 32 merge buffers is definitely overkill.
Druid uses ByteBuffer for processing/merge buffers, which is why you cannot increase their size beyond 2 GB.

Thank you for your detailed explanation.
With your guide and doc, I am trying various things to find the performance bottleneck.

Thank you again.

On Saturday, September 19, 2020 at 2:46:34 PM UTC-7, ghoo…@gmail.com wrote:

That doc also explains basic cluster tuning.
Please let us know if you still see slow query performance after tuning.