Hi,
I am using a firehose-based native index task to ingest data rolled up at DAY granularity (aggregated for the whole day), and it always fails while building the segments (the BUILD_SEGMENTS phase).
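For reference, this is roughly the shape of the granularitySpec I am using (names other than the granularities are placeholders; the granularities match what the task log shows, i.e. MONTH segments rolled up to DAY):

  "granularitySpec": {
    "type": "uniform",
    "segmentGranularity": "MONTH",
    "queryGranularity": "DAY",
    "rollup": true,
    "intervals": ["2019-06-01/2019-07-01"]
  }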
2019-07-31T11:06:59,942 INFO [appenderator_merge_0] org.apache.druid.segment.CompressedPools - Allocating new littleEndByteBuf[81,818]
2019-07-31T11:06:59,943 INFO [appenderator_merge_0] org.apache.druid.segment.CompressedPools - Allocating new littleEndByteBuf[81,819]
2019-07-31T11:07:02,078 INFO [task-runner-0-priority-0] org.apache.druid.segment.realtime.appenderator.AppenderatorImpl - Shutting down…
2019-07-31T11:07:02,081 INFO [appenderator_persist_0] org.apache.druid.segment.realtime.appenderator.AppenderatorImpl - Removing sink for segment[HSS_Monthly_2019-06-01T00:00:00.000Z_2019-07-01T00:00:00.000Z_2019-07-31T10:52:17.507Z].
2019-07-31T11:07:02,102 ERROR [task-runner-0-priority-0] org.apache.druid.indexing.common.task.IndexTask - Encountered exception in BUILD_SEGMENTS.
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Direct buffer memory
...
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:694) ~[?:1.8.0_152]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.8.0_152]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_152]
at org.apache.druid.segment.CompressedPools$4.get(CompressedPools.java:105) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.CompressedPools$4.get(CompressedPools.java:98) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.collections.StupidPool.makeObjectWithHandler(StupidPool.java:116) ~[druid-common-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.collections.StupidPool.take(StupidPool.java:107) ~[druid-common-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.CompressedPools.getByteBuf(CompressedPools.java:113) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.DecompressingByteBufferObjectStrategy.fromByteBuffer(DecompressingByteBufferObjectStrategy.java:49) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.DecompressingByteBufferObjectStrategy.fromByteBuffer(DecompressingByteBufferObjectStrategy.java:28) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.GenericIndexed$BufferIndexed.bufferedIndexedGet(GenericIndexed.java:444) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.GenericIndexed$2.get(GenericIndexed.java:599) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.BlockLayoutColumnarDoublesSupplier$BlockLayoutColumnarDoubles.loadBuffer(BlockLayoutColumnarDoublesSupplier.java:109) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.BlockLayoutColumnarDoublesSupplier$1.get(BlockLayoutColumnarDoublesSupplier.java:65) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.data.ColumnarDoubles$1HistoricalDoubleColumnSelector.getDouble(ColumnarDoubles.java:58) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.selector.settable.SettableDoubleColumnValueSelector.setValueFrom(SettableDoubleColumnValueSelector.java:36) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.QueryableIndexIndexableAdapter$RowIteratorImpl.setRowPointerValues(QueryableIndexIndexableAdapter.java:323) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.QueryableIndexIndexableAdapter$RowIteratorImpl.moveToNext(QueryableIndexIndexableAdapter.java:299) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.ForwardingRowIterator.moveToNext(ForwardingRowIterator.java:62) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.ForwardingRowIterator.moveToNext(ForwardingRowIterator.java:62) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.MergingRowIterator.lambda$new$0(MergingRowIterator.java:84) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at java.util.stream.IntPipeline$9$1.accept(IntPipeline.java:343) ~[?:1.8.0_152]
at java.util.stream.Streams$RangeIntSpliterator.forEachRemaining(Streams.java:110) ~[?:1.8.0_152]
at java.util.Spliterator$OfInt.forEachRemaining(Spliterator.java:693) ~[?:1.8.0_152]
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) ~[?:1.8.0_152]
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) ~[?:1.8.0_152]
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) ~[?:1.8.0_152]
at java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260) ~[?:1.8.0_152]
at java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) ~[?:1.8.0_152]
at org.apache.druid.segment.MergingRowIterator.<init>(MergingRowIterator.java:92) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.RowCombiningTimeAndDimsIterator.<init>(RowCombiningTimeAndDimsIterator.java:108) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.IndexMergerV9.lambda$merge$2(IndexMergerV9.java:909) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.IndexMergerV9.makeMergedTimeAndDimsIterator(IndexMergerV9.java:1031) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:179) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
at org.apache.druid.segment.IndexMergerV9.merge(IndexMergerV9.java:914) ~[druid-processing-0.13.0-incubating.jar:0.13.0-incubating]
I have provided the following JVM memory configuration for the task: -Xms3g, -Xmx8g, and -XX:MaxDirectMemorySize=15g.
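For clarity, these are the JVM options I mean (whether they are passed via druid.indexer.runner.javaOpts or a jvm.config file, the intent is the same; the lines below are how I understand they end up, not a literal copy of my config):

  -server
  -Xms3g
  -Xmx8g
  -XX:MaxDirectMemorySize=15g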
What is going wrong? Why does it try to create so many DictionaryMergeIterator objects and decompression buffers during the merge?
How can I modify my configuration to reduce the amount of direct memory used while merging the indexes?
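For example, is this the kind of tuningConfig change that would help? The values below are just guesses on my part, not something I have tested, and I am not sure all of these settings actually affect the merge phase:

  "tuningConfig": {
    "type": "index",
    "targetPartitionSize": 5000000,
    "maxRowsInMemory": 75000,
    "maxPendingPersists": 0
  }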
Thanks,