groupBy/topN failure

Hello everyone,

These last few weeks we’ve been seeing weird behavior on some groupBy queries. They fail randomly with the following error:

net.jpountz.lz4.LZ4Exception: Error decoding offset 35777 of input buffer
    at net.jpountz.lz4.LZ4JNIFastDecompressor.decompress( ~[lz4-1.3.0.jar:?]
    at$LZ4Decompressor.decompress( ~[druid-processing-0.9.2.jar:0.9.2]
    at ~[druid-processing-0.9.2.jar:0.9.2]
    at ~[druid-processing-0.9.2.jar:0.9.2]
    at ~[druid-processing-0.9.2.jar:0.9.2]
    at$BufferIndexed._get( ~[druid-processing-0.9.2.jar:0.9.2]
    at$1.get( ~[druid-processing-0.9.2.jar:0.9.2]
    at$CompressedVSizeIndexedInts.loadBuffer( ~[druid-processing-0.9.2.jar:0.9.2]
    at$CompressedVSizeIndexedInts.get( ~[druid-processing-0.9.2.jar:0.9.2]
    at io.druid.segment.column.SimpleDictionaryEncodedColumn.getSingleValueRow( ~[druid-processing-0.9.2.jar:0.9.2]
    at io.druid.segment.QueryableIndexStorageAdapter$CursorSequenceBuilder$1$1QueryableIndexBaseCursor$2$1.get( ~[druid-processing-0.9.2.jar:0.9.2]
    at io.druid.query.groupby.epinephelinae.GroupByQueryEngineV2$ ~[druid-processing-0.9.2.jar:0.9.2]
    at io.druid.query.groupby.epinephelinae.GroupByQueryEngineV2$ ~[druid-processing-0.9.2.jar:0.9.2]

This log is from one of the historical servers. This groupBy is on two dimensions, but the same behavior also happened with a single dimension. TopN queries have the same issue.

The weird thing about this is that two minutes later we ran the same query again, without doing anything on the cluster, and it succeeded. Nothing had changed.

We are using Druid 0.9.2, and groupBy fails with both the v1 and v2 engines.

What can we do to find more information about the error? Do you know what is happening here?


Hey Federico,

I wonder if one of your segment files on one of your historicals was corrupt. The query might work sometimes (if a different historical was picked) and fail sometimes (if the historical with the bad copy was picked). Also if the bad segment is moved, that would involve re-downloading it, and the query should then always work (since the bad copy is gone).

Does the query still fail at all now? If so, could you try tracking down a specific segment-granular “intervals” that causes it to fail, and then trace that to a bad segment on a specific historical node?
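One way to do that kind of narrowing is to replay the query with “intervals” shrunk to one segment granularity at a time and see which interval fails. Below is a rough sketch assuming a broker at localhost:8082; the datasource name, dimension, aggregator, and date range are placeholders for illustration, not taken from this cluster:

```shell
# Hypothetical bisection: run the same groupBy hour by hour so that a
# failure points at a single segment interval. Adjust the datasource,
# dimensions, and hour range for the real query.
for HOUR in 10 11 12 13; do
  cat > /tmp/query.json <<EOF
{
  "queryType": "groupBy",
  "dataSource": "source",
  "granularity": "all",
  "dimensions": ["dim1"],
  "aggregations": [{"type": "count", "name": "rows"}],
  "intervals": ["2016-11-23T${HOUR}:00:00/2016-11-23T$((HOUR + 1)):00:00"]
}
EOF
  printf 'hour %s: ' "$HOUR"
  # -f makes curl exit non-zero on an HTTP error, so failures stand out.
  curl -sf -X POST -H 'Content-Type: application/json' \
    --data @/tmp/query.json http://localhost:8082/druid/v2/ \
    && echo OK || echo FAILED
done
```

A failing hour narrows the search to the segments for that interval, and from there to the historicals holding copies of them.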

Hi Gian,

That could well be it. It is very hard to track down, though: the query failed 4 or 5 consecutive times, but then started working every time with no exceptions.

It also happened with another query, with the same behavior: after failing a couple of times, it stopped failing altogether.

Any thoughts on how I could find the bad segment? Maybe from the logs?

Thanks for the help!

It’s tough to tell from the logs; the best way is probably to narrow down, via “intervals”, which segments may be involved, and then run dump-segment on those segments as they are found in the historical node caches.

If the segments are corrupt then dump-segment should fail.

I followed instructions on your link and came to the following error:

Exception in thread “main” java.lang.RuntimeException: /var/druid/cache/historical/source/2016-11-23T12:00:00.000Z_2016-11-23T13:00:00.000Z/2016-11-23T12:00:19.081Z/index.drd

It seems that index.drd isn’t present in any of the cached segments. Could it be that I have to download the segment from HDFS, extract it, and then point to the folder of the extracted files?

Thanks for your help!

Hi Gian, were you able to see my earlier post? The error is still happening (and quite frequently). The weird thing is that the query sometimes succeeds after a couple of retries.


Sorry I missed it. That error is a bit misleading; it just means there are no segment files in the directory you provided. You’re probably just missing the partition number of the segment. Try using /var/druid/cache/historical/source/2016-11-23T12:00:00.000Z_2016-11-23T13:00:00.000Z/2016-11-23T12:00:19.081Z/X/, where the final X/ is the partition directory (it’ll be a number like 0, 1, 2, etc.; if there are multiple, each one will have its own segment in it). Then the dump should work.
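For reference, an invocation against one partition directory would look roughly like the sketch below; DRUID_HOME, the partition number 0, and the output path are assumptions for this setup (dump-segment lives under io.druid.cli.Main in 0.9.2):

```shell
# Sketch: run dump-segment on one cached partition directory.
# DRUID_HOME and the segment path are assumptions; adjust for your layout.
DRUID_HOME=/opt/druid
SEGMENT_DIR="/var/druid/cache/historical/source/2016-11-23T12:00:00.000Z_2016-11-23T13:00:00.000Z/2016-11-23T12:00:19.081Z/0"
java -classpath "$DRUID_HOME/lib/*" io.druid.cli.Main tools dump-segment \
  --directory "$SEGMENT_DIR" \
  --out /tmp/segment-dump.txt \
  || echo "dump-segment failed for $SEGMENT_DIR"
```

A corrupt segment should make the tool exit non-zero instead of writing the dump.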

Thanks Gian, that was quick.

Actually, the files are there, just not the one the process is looking for. Here is what the folder you mentioned contains:

[directory listing: three files, including 00000.smoosh and version.bin]

Only those 3 files (00000.smoosh is the big one; it should hold all the segment data). But there is no “index.drd”. Am I missing something?


dump-segment only looks for index.drd if version.bin is missing, and you do have version.bin. So that’s why I suggest double-checking the directory parameter you’re giving to dump-segment.

It worked! As you said, the path was missing the /0 folder, which is present in every one of them. Silly me for not checking that before.

It seems like a lot of work to check them one by one. I could script it, but it would take a while. Isn’t there any other way to check for that error (from my first message), or to verify segment integrity some other way?

Thanks a lot once more, Gian! :)

I think scripting it is probably the best way. At least that’s what I would do.
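Such a script could look roughly like the sketch below: it walks the cache, treats every directory containing version.bin as a partition directory, and runs dump-segment on each. CACHE_ROOT and DRUID_HOME are assumptions for this setup, and CHECK can be overridden (e.g. for a dry run):

```shell
#!/bin/sh
# Sketch: check every cached segment partition with dump-segment and flag
# the ones that fail to dump. Paths below are assumptions for this setup.
CACHE_ROOT=${CACHE_ROOT:-/var/druid/cache/historical}
DRUID_HOME=${DRUID_HOME:-/opt/druid}
# Command run against each partition directory; override for testing.
CHECK=${CHECK:-"java -classpath $DRUID_HOME/lib/* io.druid.cli.Main tools dump-segment --out /dev/null --directory"}

# Every partition directory contains a version.bin, so use it as the marker.
find "$CACHE_ROOT" -name version.bin 2>/dev/null | while read -r VERSION_FILE; do
  DIR=$(dirname "$VERSION_FILE")
  if $CHECK "$DIR" >/dev/null 2>&1; then
    echo "OK   $DIR"
  else
    echo "BAD  $DIR"
  fi
done
```

Any BAD line is a candidate for closer inspection; a fresh copy re-downloaded from deep storage should replace a corrupt one.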