Hi there Druid users!
I’m using Druid 0.8.0 with memcached in production. Today I experienced an awful exception when querying Druid (yeah, autumn has started nicely…;)) :
2015-09-23 09:13:42.460 ERROR net.spy.memcached.protocol.binary.StoreOperationImpl: Error: Too large.
Which, as I believe, means that Druid tries to write objects to memcached that are too large, exceeding the maximum item size memcached can handle.
Here is my reasoning so far:
I noticed that Druid is configured by default with a max object size of 50 MB (according to the druid.cache.maxObjectSize property at http://druid.io/docs/latest/configuration/), so it allows objects of up to 50 MB to be sent to and stored in memcached.
I launched memcached with its defaults, which means a 1 MB item size limit, so memcached rejected the store operation.
Do you agree with my understanding?
As I understand it, I now have to relaunch memcached with a 50 MB object size (the -I option), since I believe it is normal and valuable for Druid to store large objects.
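If my understanding is correct, something like the following should align the two limits (a sketch only; the exact flags depend on your memcached version, and the 50m value is an assumption taken from Druid's documented default):

```shell
# Relaunch memcached with a 50 MB maximum item size (-I),
# matching Druid's default druid.cache.maxObjectSize of 50 MB.
memcached -d -m 4096 -I 50m -p 11211

# Verify the new limit took effect: item_size_max should report 52428800 bytes.
echo "stats settings" | nc localhost 11211 | grep item_size_max
```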
So the most important question is: **is setting a 50 MB object size in memcached what you do in production as well?**
Anyway, I think it would be valuable to document that this memcached setting is required with Druid's default configuration (or to change Druid's default config to match memcached's defaults).
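The alternative I mentioned would be to cap Druid instead of enlarging memcached, e.g. in the broker's runtime.properties (a sketch; as I read the docs the value is in bytes, so 1048576 would match memcached's default 1 MB item size):

```
druid.cache.type=memcached
druid.cache.hosts=localhost:11211
druid.cache.maxObjectSize=1048576
```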
For those interested, I have attached more of the logs to this post.
Thanks for any help!
exception_Druid (5.43 KB)