Exception on query: OperationException: SERVER: Too large.

Hi there Druid users!
I’m using Druid 0.8.0 with memcached in production. Today I hit a nasty exception when querying Druid (yeah, autumn has started nicely… ;)):

2015-09-23 09:13:42.460 ERROR net.spy.memcached.protocol.binary.StoreOperationImpl: Error: Too large.

Which, as I understand it, means that Druid tries to write objects to memcached that are too large, exceeding the size memcached can handle per item.

My thoughts on solving this issue are as follows:

  1. I noticed that Druid is configured by default with a max object size of 50 MB (according to this link: http://druid.io/docs/latest/configuration/, the druid.cache.maxObjectSize property), so it allows objects up to 50 MB to be sent to and stored in memcached.

  2. I launched memcached with the defaults, which means a 1 MB max item size. So memcached rejected the store operation.
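To confirm the mismatch, you can ask a running memcached instance for its configured item size limit (a sketch, assuming memcached listens on the default port 11211; `item_size_max` is reported in bytes):

```shell
# Query a running memcached for its settings;
# item_size_max is in bytes (1048576 = 1 MB, the default).
echo "stats settings" | nc localhost 11211 | grep item_size_max
```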

Do you agree with my understanding?

As I understand it now, I have to relaunch memcached with a 50 MB max item size (the -I option), as I believe it is normal and valuable for Druid to cache large objects.
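In case it helps anyone else, relaunching memcached with a larger item size would look something like this (a sketch; `-I` is available in memcached 1.4.2+, and the `-m` value here is just an example for total cache memory):

```shell
# Allow items up to 50 MB (matching Druid's default druid.cache.maxObjectSize),
# with 2048 MB of total cache memory, running as a daemon.
memcached -I 50m -m 2048 -d
```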

So the most important question is: **Is that (setting a 50 MB object size in memcached) what you do in production as well?**

Anyway, I think it would be valuable to document that this setting is required when running memcached with Druid’s default configuration (or to change Druid’s default config to match memcached’s defaults).

For those interested, I attach more of the logs to this post.

Thanks for any help!

Krzysztof Zarzycki

exception_Druid (5.43 KB)

Either one is fine: you can increase the memcached object size, or reduce the Druid object size.
How much you need depends on your average result size and what you care about most.
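For the second option, lowering Druid’s limit to match memcached’s 1 MB default is a one-line change in the runtime.properties of the nodes using the cache (a sketch; the property takes a size in bytes and is documented on the configuration page linked above):

```properties
# runtime.properties: cap cached objects at memcached's default 1 MB limit
druid.cache.type=memcached
druid.cache.maxObjectSize=1048576
```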

Given that you’ve encountered this issue, your results are likely above 1 MB and you probably want to adjust the memcached settings.

That being said, if you rarely have objects above that size, it could hurt your cache hit rate if a few large results end up evicting lots of small results from the cache.

Thanks Xavier,
I set the object size to 50 MB.

I’m sorry, you say it is very inefficient? Maybe I don’t understand it perfectly: is it that memcached reserves 50 MB per object, no matter how much space it really takes?

And I believe it is worth mentioning this configuration in the Druid documentation (or changing the default object size), as more people might hit this issue.

Hi Krzysztof, this is a great chance to contribute to the Druid docs: