OutOfMemoryError

Hi~

I am hitting an OutOfMemoryError on a realtime node running 0.7.3, while a realtime node running 0.6.73 handles the same workload without problems.

I am testing an upgrade from 0.6.73 to 0.7.3.

The 0.6.73 realtime node has no such issue and has been very stable for a long time (several months).

Druid version

0.7.3

JVM setting

java -server -verbosegc -XX:MaxPermSize=256M -Xss1024k -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/druid/heapdump -XX:+UseConcMarkSweepGC -Xloggc:/tmp/gc/log -Xmx4488m -Xms4488m -XX:NewSize=641m -XX:MaxNewSize=641m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.port=8050 -Djava.rmi.server.hostname=localhost -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager -Dlog4j.configurationFile=file:///home/ec2-user/streamlyzer/druid/bin/log4j2.xml -Djava.io.tmpdir=/tmp -Ddruid.realtime.specFile=…/…/configs/tomato//realtime1/realtime.spec -classpath jar/*:…/…/configs/tomato//druid_common:…/…/configs/tomato//realtime1 io.druid.cli.Main server realtime

**The logs on the realtime node**

2015-06-12T09:39:02,178 ERROR [main-SendThread(:8020)] org.apache.zookeeper.ClientCnxn - from main-SendThread(:8020)

java.lang.OutOfMemoryError: Java heap space

2015-06-12T09:38:41,100 ERROR [main-SendThread(:8020)] org.apache.zookeeper.ClientCnxn - from main-SendThread(:8020)

java.lang.OutOfMemoryError: Java heap space

2015-06-12T09:41:49,298 ERROR [chief-data] io.druid.segment.realtime.RealtimeManager - Exception aborted realtime processing[data]: {class=io.druid.segment.realtime.RealtimeManager, exceptionType=class java.lang.OutOfMemoryError, exceptionMessage=Java heap space}

java.lang.OutOfMemoryError: Java heap space

at com.fasterxml.jackson.core.util.BufferRecycler.calloc(BufferRecycler.java:156) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at com.fasterxml.jackson.core.util.BufferRecycler.allocCharBuffer(BufferRecycler.java:124) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at com.fasterxml.jackson.core.io.IOContext.allocTokenBuffer(IOContext.java:181) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at com.fasterxml.jackson.core.JsonFactory.createParser(JsonFactory.java:830) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:1833) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at com.metamx.common.parsers.JSONParser.parse(JSONParser.java:115) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at io.druid.data.input.impl.StringInputRowParser.parseString(StringInputRowParser.java:86) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at io.druid.data.input.impl.StringInputRowParser.buildStringKeyMap(StringInputRowParser.java:73) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at io.druid.data.input.impl.StringInputRowParser.parse(StringInputRowParser.java:40) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at io.druid.data.input.impl.StringInputRowParser.parse(StringInputRowParser.java:19) ~[druid-services-0.7.3-selfcontained.jar:0.7.3]

at io.druid.firehose.szrkafka.SZRKafkaSevenFirehoseFactory$1.parseMessage(Unknown Source) ~[SZRKafka-seven.jar:?]

at io.druid.firehose.szrkafka.SZRKafkaSevenFirehoseFactory$1.nextRow(Unknown Source) ~[SZRKafka-seven.jar:?]

at io.druid.segment.realtime.RealtimeManager$FireChief.run(RealtimeManager.java:239) [druid-services-0.7.3-selfcontained.jar:0.7.3]

2015-06-12T09:39:48,009 ERROR [CuratorFramework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - Background exception was not retry-able or retry gave up

java.lang.OutOfMemoryError: Java heap space

at java.lang.String.toCharArray(String.java:2753) ~[?:1.7.0_55]

at java.util.zip.ZipCoder.getBytes(ZipCoder.java:78) ~[?:1.7.0_55]

at java.util.zip.ZipFile.getEntry(ZipFile.java:306) ~[?:1.7.0_55]

at java.util.jar.JarFile.getEntry(JarFile.java:226) ~[?:1.7.0_55]

at java.util.jar.JarFile.getJarEntry(JarFile.java:209) ~[?:1.7.0_55]

at sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:840) ~[?:1.7.0_55]

at sun.misc.URLClassPath.getResource(URLClassPath.java:199) ~[?:1.7.0_55]

at java.net.URLClassLoader$1.run(URLClassLoader.java:358) ~[?:1.7.0_55]

at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_55]

at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_55]

at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_55]

at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_55]

at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[?:1.7.0_55]

at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_55]

at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:819) [druid-services-0.7.3-selfcontained.jar:0.7.3]

at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:802) [druid-services-0.7.3-selfcontained.jar:0.7.3]

at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$400(CuratorFrameworkImpl.java:61) [druid-services-0.7.3-selfcontained.jar:0.7.3]

at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:272) [druid-services-0.7.3-selfcontained.jar:0.7.3]

at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_55]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_55]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_55]

at java.lang.Thread.run(Thread.java:745) [?:1.7.0_55]

2015-06-12T09:47:20,492 INFO [TomatoRealtime1_ip-172-31-5-108-1434100264194-b9ff8e8c_watcher_executor] kafka.consumer.ZookeeperConsumerConnector - TomatoRealtime1_ip-172-31-5-108-1434100264194-b9ff8e8c stopping watcher executor thread for consumer TomatoRealtime1_ip-172-31-5-108-1434100264194-b9ff8e8c

2015-06-12T09:48:16,223 INFO [chief-data] kafka.consumer.ZookeeperConsumerConnector - TomatoRealtime1_ip-172-31-5-108-1434100264194-b9ff8e8c ZKConsumerConnector shutting down

2015-06-12T10:24:33,369 ERROR [MonitorScheduler-0] com.metamx.common.concurrent.ScheduledExecutors - Uncaught exception.

java.lang.OutOfMemoryError: Java heap space

2015-06-12T10:24:33,369 WARN [qtp571059610-42] org.eclipse.jetty.util.thread.QueuedThreadPool -

java.lang.OutOfMemoryError: Java heap space

2015-06-12T10:24:38,567 WARN [qtp571059610-42] org.eclipse.jetty.util.thread.QueuedThreadPool - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$3@2959142b in qtp571059610{STARTED,40<=40<=40,i=36,q=0}

2015-06-12T10:30:34,360 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.

2015-06-12T10:30:45,934 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks. minTimestamp [2015-06-12T10:00:00.000Z]

2015-06-12T10:31:06,724 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434099600000=Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T10:32:13,620 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434096000000=Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T10:33:22,685 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks to persist and merge

2015-06-12T10:36:04,200 ERROR [plumber_scheduled_0] com.metamx.common.concurrent.ScheduledExecutors - Uncaught exception.

java.lang.OutOfMemoryError: Java heap space

2015-06-12T10:35:55,798 WARN [qtp571059610-39] org.eclipse.jetty.util.thread.QueuedThreadPool -

java.lang.OutOfMemoryError: Java heap space

2015-06-12T10:37:16,223 WARN [qtp571059610-39] org.eclipse.jetty.util.thread.QueuedThreadPool - Unexpected thread death: org.eclipse.jetty.util.thread.QueuedThreadPool$3@2959142b in qtp571059610{STARTED,40<=40<=40,i=36,q=0}

2015-06-12T11:30:39,177 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.

2015-06-12T11:30:39,177 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks. minTimestamp [2015-06-12T11:00:00.000Z]

2015-06-12T11:30:39,177 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434099600000=Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T11:30:39,178 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434096000000=Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T11:30:48,532 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks to persist and merge

2015-06-12T11:31:03,676 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}] hasn’t swapped yet, swapping. Sink[Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T11:31:03,676 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - DataSource[data], Interval[2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z], persisting Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}]

2015-06-12T11:31:22,473 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}] hasn’t swapped yet, swapping. Sink[Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T11:31:32,097 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - DataSource[data], Interval[2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z], persisting Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}]

2015-06-12T12:30:38,880 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.

2015-06-12T12:30:38,880 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks. minTimestamp [2015-06-12T12:00:00.000Z]

2015-06-12T12:30:38,880 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434099600000=Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T12:30:44,382 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434096000000=Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T12:30:44,383 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks to persist and merge

2015-06-12T12:30:44,383 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}] hasn’t swapped yet, swapping. Sink[Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T12:30:44,383 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - DataSource[data], Interval[2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z], persisting Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}]

2015-06-12T12:31:12,475 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}] hasn’t swapped yet, swapping. Sink[Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T12:31:20,877 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - DataSource[data], Interval[2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z], persisting Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}]

2015-06-12T13:30:38,880 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.

2015-06-12T13:30:38,880 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks. minTimestamp [2015-06-12T13:00:00.000Z]

2015-06-12T13:30:38,881 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434099600000=Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T13:30:46,153 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434096000000=Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T13:30:46,452 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks to persist and merge

2015-06-12T13:30:46,452 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}] hasn’t swapped yet, swapping. Sink[Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T13:30:46,453 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - DataSource[data], Interval[2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z], persisting Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}]

2015-06-12T13:31:40,872 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}] hasn’t swapped yet, swapping. Sink[Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T13:31:48,965 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - DataSource[data], Interval[2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z], persisting Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}]

2015-06-12T14:30:38,880 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Starting merge and push.

2015-06-12T14:30:38,881 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Found [2] sinks. minTimestamp [2015-06-12T14:00:00.000Z]

2015-06-12T14:30:38,881 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434099600000=Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

2015-06-12T14:30:38,881 INFO [data-overseer-1] io.druid.segment.realtime.plumber.RealtimePlumber - Adding entry[1434096000000=Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}] for merge and push.

Murry,

Can you make sure that you have "persistInHeap" set to false (or not
set at all, it should default to false) for your tuning config?
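For reference, explicitly disabling it would just mean adding that flag to the tuningConfig; a minimal sketch based on the spec posted later in this thread (only the persistInHeap line is new):

  "tuningConfig": {
    "type": "realtime",
    "maxRowsInMemory": 500000,
    "intermediatePersistPeriod": "PT10m",
    "windowPeriod": "PT30m",
    "persistInHeap": false
  }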

It also looks like your node might be having trouble persisting stuff.
It had a segment from 8-9 as well as a segment from 9-10 left to
persist before it could do the merge-n-push:

2015-06-12T13:30:46,452 INFO [data-2015-06-12T09:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@24c06efb, queryable=io.druid.segment.ReferenceCountingSegment@7a614724, count=0}] hasn't swapped yet, swapping. Sink[Sink{interval=2015-06-12T09:00:00.000Z/2015-06-12T10:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

2015-06-12T13:31:40,872 INFO [data-2015-06-12T08:00:00.000Z-persist-n-merge] io.druid.segment.realtime.plumber.RealtimePlumber - Hydrant[FireHydrant{index=io.druid.segment.incremental.OnheapIncrementalIndex@5b45bc3f, queryable=io.druid.segment.ReferenceCountingSegment@3f49fae9, count=0}] hasn't swapped yet, swapping. Sink[Sink{interval=2015-06-12T08:00:00.000Z/2015-06-12T09:00:00.000Z, schema=io.druid.segment.indexing.DataSchema@62cc540b}]

Are you perhaps bringing the node up in a new consumer group such that
it is pulling old data? If that is the case and it is necessary, then
you will likely need to start the node with a larger heap for now and
set it back down once it catches up.
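As a rough sketch of what a temporarily larger heap could look like against the JVM settings posted above (the sizes here are illustrative assumptions, not tested recommendations):

  -Xmx8g -Xms8g -XX:NewSize=1280m -XX:MaxNewSize=1280m

i.e. keep the rest of the command line the same and only raise the heap and young-generation sizes until the consumer catches up, then scale back down.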

--Eric

My configuration is as shown below; there is nothing special about it.

"ioConfig" : {
  "type" : "realtime",
  "firehose": {
    "type": "streamlyzer",
    "consumerProps": {
      "zk.connect" : ":8020",
      "zk.connectiontimeout.ms" : "15000",
      "zk.sessiontimeout.ms" : "45000",
      "zk.synctime.ms" : "5000",
      "groupid" : "TomatoRealtime1",
      "fetch.size" : "1048586",
      "autooffset.reset" : "largest",
      "autocommit.enable" : "false"
    },
    "feed": "messagefront1"
  },
  "plumber": {
    "type": "realtime"
  }
},
"tuningConfig": {
  "type" : "realtime",
  "maxRowsInMemory": 500000,
  "intermediatePersistPeriod": "PT10m",
  "windowPeriod": "PT30m",
  "basePersistDirectory": "/tmp/druid/basePersist",
  "rejectionPolicy": {
    "type": "serverTime"
  },
  "shardSpec": {
    "type": "linear",
    "partitionNum": 1
  }
}

We have 2 realtime nodes with linear shard spec.
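With a linear shard spec the two nodes would presumably differ only in partitionNum; assuming the second node uses partition 0 (an assumption, not shown in the original post), the pair would look like:

  node A (assumed):  "shardSpec": { "type": "linear", "partitionNum": 0 }
  node B (as posted): "shardSpec": { "type": "linear", "partitionNum": 1 }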

On Tuesday, June 16, 2015 at 12:51:59 AM UTC+9, Eric Tschetter wrote:

Have you tried running with a larger heap size and did it resolve the problem?

I am already using this configuration.
I have not changed anything, and I have not tried a larger heap size yet.

Is a larger heap size the only way to resolve it?

On Wednesday, June 17, 2015 at 3:48:46 AM UTC+9, Eric Tschetter wrote:

Well, you could also consider lowering maxRowsInMemory a bit. But it's
likely worth seeing first whether increasing the heap size resolves
the problem or not.
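For example, a first experiment might halve maxRowsInMemory in the tuningConfig quoted earlier (the value here is an illustrative guess, not a tuned recommendation):

  "tuningConfig": {
    "type": "realtime",
    "maxRowsInMemory": 250000,
    "intermediatePersistPeriod": "PT10m",
    "windowPeriod": "PT30m"
  }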

One other thing would be to take a heap dump of the process and
analyze it with YourKit or something to see what is consuming all of
the memory.
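Note that the JVM settings above already include -XX:+HeapDumpOnOutOfMemoryError with -XX:HeapDumpPath=/tmp/druid/heapdump, so a dump should be written automatically when the OOM fires. A dump can also be taken manually with the JDK's jmap (assuming <pid> is the realtime node's process id and the output path is writable; the file name here is just an example):

  jmap -dump:live,format=b,file=/tmp/druid/heapdump/realtime.hprof <pid>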

--Eric