Recompiling Druid 0.9.2-rc2 with Hadoop 2.6: OutOfMemoryError: PermGen space during indexing

Hi Team,

I was trying to set up Druid with Hadoop indexing. Our Hadoop cluster is a Hortonworks distribution (2.2) running Hadoop 2.6, and after hitting all the fasterxml issues I tried all the proposed fixes.

Finally, I recompiled Druid (0.9.2-rc2) against Hadoop 2.6, and with mapreduce.job.classloader=true I was able to get past the class compatibility issue. However, my indexing job fails continuously with an OutOfMemoryError. I also updated all the dependencies to the Druid stable release (0.9.1.1), and I get the same error there as well.
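For reference, mapreduce.job.classloader can be passed per task through the Hadoop index task spec's tuningConfig rather than cluster-wide; a minimal sketch (the surrounding dataSchema and ioConfig fields are elided here):

```
{
  "type": "index_hadoop",
  "spec": {
    "tuningConfig": {
      "type": "hadoop",
      "jobProperties": {
        "mapreduce.job.classloader": "true"
      }
    }
  }
}
```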

My middleManager configuration:

```
[druid-0.9.2-rc2-SNAPSHOT]$ more conf/druid/middleManager/runtime.properties

druid.service=druid/middleManager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=3

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx6g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

druid.indexer.task.baseTaskDir=/tmp/druid/task

# HTTP server threads
druid.server.http.numThreads=25

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=/tmp/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.6.0"]
```
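Note that the javaOpts above size the heap (-Xmx6g) but leave PermGen at the Java 7 default (often well under 100 MB), which is the space that fills up in the logs below. A hedged sketch of the adjusted line; the 256m value is an assumption to tune for your class count (on Java 8+ PermGen is gone and -XX:MaxMetaspaceSize is the analogous knob):

```
druid.indexer.runner.javaOpts=-server -Xmx6g -XX:MaxPermSize=256m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
```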

Note:

For now I am only testing the Hadoop connectivity, through a single-node Druid cluster.

Stack trace:

2016-10-08T23:30:17,959 ERROR [task-runner-0-priority-0] com.hadoop.compression.lzo.LzoCodec - Cannot load native-lzo without native-hadoop

14.122: [Full GC 370M->32M(109M), 0.7436010 secs]

[Eden: 304.0M(932.0M)->0.0B(63.0M) Survivors: 30.0M->0.0B Heap: 370.4M(2016.0M)->32.5M(109.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.80 sys=0.04, real=0.74 secs]

14.866: [Full GC 32M->28M(95M), 0.5005300 secs]

[Eden: 0.0B(63.0M)->0.0B(54.0M) Survivors: 0.0B->0.0B Heap: 32.5M(109.0M)->28.4M(95.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.59 sys=0.01, real=0.50 secs]

2016-10-08T23:30:19,313 WARN [Thread-36] org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception

java.lang.OutOfMemoryError: PermGen space

   at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.7.0_79]

   at java.lang.ClassLoader.defineClass(ClassLoader.java:800) ~[?:1.7.0_79]

   at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.7.0_79]

   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) ~[?:1.7.0_79]

   at java.net.URLClassLoader.access$100(URLClassLoader.java:71) ~[?:1.7.0_79]

   at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[?:1.7.0_79]

   at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_79]

   at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_79]

   at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_79]

    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_79]

    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_79]

    at java.lang.Class.getDeclaredMethods0(Native Method) ~[?:1.7.0_79]

    at java.lang.Class.privateGetDeclaredMethods(Class.java:2615) ~[?:1.7.0_79]

    at java.lang.Class.getMethod0(Class.java:2856) ~[?:1.7.0_79]

    at java.lang.Class.getMethod(Class.java:1668) ~[?:1.7.0_79]

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.getReturnProtoType(ProtobufRpcEngine.java:293) ~[hadoop-common-2.6.0.jar:?]

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:258) ~[hadoop-common-2.6.0.jar:?]

    at com.sun.proxy.$Proxy184.getServerDefaults(Unknown Source) ~[?:?]

    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:267) ~[hadoop-hdfs-2.6.0.jar:?]

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_79]

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_79]

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_79]

    at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_79]

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.6.0.jar:?]

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.6.0.jar:?]

    at com.sun.proxy.$Proxy185.getServerDefaults(Unknown Source) ~[?:?]

    at org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:1007) ~[hadoop-hdfs-2.6.0.jar:?]

    at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:2043) ~[hadoop-hdfs-2.6.0.jar:?]

    at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:2049) ~[hadoop-hdfs-2.6.0.jar:?]

    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:208) ~[hadoop-hdfs-2.6.0.jar:?]

    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:182) ~[hadoop-hdfs-2.6.0.jar:?]

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1413) [hadoop-hdfs-2.6.0.jar:?]

2016-10-08T23:30:19,318 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.JobSubmitter - Cleaning up the staging area /user/sathsrinivasan/.staging/job_1474430133060_114280

15.375: [Full GC 28M->26M(88M), 0.4837330 secs]

[Eden: 1024.0K(54.0M)->0.0B(50.0M) Survivors: 0.0B->0.0B Heap: 28.8M(95.0M)->26.2M(88.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.58 sys=0.00, real=0.48 secs]

15.859: [Full GC 26M->26M(88M), 0.5309600 secs]

[Eden: 0.0B(50.0M)->0.0B(50.0M) Survivors: 0.0B->0.0B Heap: 26.2M(88.0M)->26.2M(88.0M)], [Perm: 83967K->83702K(83968K)]

[Times: user=0.60 sys=0.01, real=0.54 secs]

2016-10-08T23:30:20,338 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_pageviews_2016-10-08T23:30:03.792Z, type=index_hadoop, dataSource=pageviews}]

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException

    at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]

    at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:204) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:208) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    at java.util.concurrent.FutureTask.run(FutureTask.java:262) [?:1.7.0_79]

    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [?:1.7.0_79]

    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [?:1.7.0_79]

    at java.lang.Thread.run(Thread.java:745) [?:1.7.0_79]

Caused by: java.lang.reflect.InvocationTargetException

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_79]

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_79]

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_79]

    at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_79]

    at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    ... 7 more

Caused by: java.lang.RuntimeException: java.io.IOException: DataStreamer Exception:

    at io.druid.indexer.IndexGeneratorJob.run(IndexGeneratorJob.java:211) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]

    at io.druid.indexer.JobHelper.runJobs(JobHelper.java:323) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]

    at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:94) ~[druid-indexing-hadoop-0.9.1.1.jar:0.9.1.1]

    at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:261) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_79]

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_79]

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_79]

    at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_79]

    at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:201) ~[druid-indexing-service-0.9.1.1.jar:0.9.1.1]

    ... 7 more

Caused by: java.io.IOException: DataStreamer Exception:

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:696) ~[?:?]

Caused by: java.lang.OutOfMemoryError: PermGen space

    at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.7.0_79]

    at java.lang.ClassLoader.defineClass(ClassLoader.java:800) ~[?:1.7.0_79]

    at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.7.0_79]

    at java.net.URLClassLoader.defineClass(URLClassLoader.java:449) ~[?:1.7.0_79]

    at java.net.URLClassLoader.access$100(URLClassLoader.java:71) ~[?:1.7.0_79]

    at java.net.URLClassLoader$1.run(URLClassLoader.java:361) ~[?:1.7.0_79]

    at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[?:1.7.0_79]

    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_79]

    at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[?:1.7.0_79]

    at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[?:1.7.0_79]

    at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[?:1.7.0_79]

    at java.lang.Class.getDeclaredMethods0(Native Method) ~[?:1.7.0_79]

    at java.lang.Class.privateGetDeclaredMethods(Class.java:2615) ~[?:1.7.0_79]

    at java.lang.Class.getMethod0(Class.java:2856) ~[?:1.7.0_79]

    at java.lang.Class.getMethod(Class.java:1668) ~[?:1.7.0_79]

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.getReturnProtoType(ProtobufRpcEngine.java:293) ~[?:?]

    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:258) ~[?:?]

    at com.sun.proxy.$Proxy184.getServerDefaults(Unknown Source) ~[?:?]

    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getServerDefaults(ClientNamenodeProtocolTranslatorPB.java:267) ~[?:?]

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_79]

    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_79]

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_79]

    at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_79]

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[?:?]

    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[?:?]

    at com.sun.proxy.$Proxy185.getServerDefaults(Unknown Source) ~[?:?]

    at org.apache.hadoop.hdfs.DFSClient.getServerDefaults(DFSClient.java:1007) ~[?:?]

    at org.apache.hadoop.hdfs.DFSClient.shouldEncryptData(DFSClient.java:2043) ~[?:?]

    at org.apache.hadoop.hdfs.DFSClient.newDataEncryptionKey(DFSClient.java:2049) ~[?:?]

    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:208) ~[?:?]

    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:182) ~[?:?]

    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1413) ~[?:?]

2016-10-08T23:30:20,366 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_pageviews_2016-10-08T23:30:03.792Z] status changed to [FAILED].

2016-10-08T23:30:20,371 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {

"id" : "index_hadoop_pageviews_2016-10-08T23:30:03.792Z",

"status" : "FAILED",

"duration" : 10729

}

2016-10-08T23:30:20,380 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.coordination.AbstractDataSegmentAnnouncer.stop()] on object[io.druid.server.coordination.BatchDataSegmentAnnouncer@7043dc6f].

2016-10-08T23:30:20,380 INFO [main] io.druid.server.coordination.AbstractDataSegmentAnnouncer - Stopping class io.druid.server.coordination.BatchDataSegmentAnnouncer with config[io.druid.server.initialization.ZkPathsConfig@22e2266d]

2016-10-08T23:30:20,380 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/announcements/lvsdmetl72.lvs.paypal.com:8100]

2016-10-08T23:30:20,394 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@928212c].

2016-10-08T23:30:20,394 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/lvsdmetl72.lvs.paypal.com:8100]

2016-10-08T23:30:20,396 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/lvsdmetl72.lvs.paypal.com:8100]

2016-10-08T23:30:20,397 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@1af3e4e3].

2016-10-08T23:30:20,397 INFO [main] io.druid.query.lookup.LookupReferencesManager - Stopping lookup factory references manager

2016-10-08T23:30:20,399 INFO [main] org.eclipse.jetty.server.ServerConnector - Stopped ServerConnector@23f89d56{HTTP/1.1}{0.0.0.0:8100}

2016-10-08T23:30:20,401 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@563105a6{/,null,UNAVAILABLE}

2016-10-08T23:30:20,403 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@61758c2d].

2016-10-08T23:30:20,404 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@2b2131a1].

2016-10-08T23:30:20,406 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@3c6a0cf3].

2016-10-08T23:30:20,409 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@32df21a4].

2016-10-08T23:30:20,409 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@5367ae1e].

2016-10-08T23:30:20,409 INFO [main] io.druid.curator.CuratorModule - Stopping Curator

2016-10-08T23:30:20,410 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting

2016-10-08T23:30:20,413 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x157a60518d00057 closed

16.467: [Full GC 34M->16M(56M), 0.4642240 secs]

[Eden: 8192.0K(50.0M)->0.0B(32.0M) Survivors: 0.0B->0.0B Heap: 34.1M(88.0M)->16.6M(56.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.53 sys=0.01, real=0.47 secs]

16.932: [Full GC 16M->16M(55M), 0.4409370 secs]

[Eden: 0.0B(32.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.6M(56.0M)->16.5M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.00, real=0.44 secs]

17.374: [Full GC 16M->16M(55M), 0.4562070 secs]

[Eden: 1024.0K(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.5M(55.0M)->16.2M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.52 sys=0.00, real=0.45 secs]

17.831: [Full GC 16M->16M(55M), 0.4383460 secs]

[Eden: 0.0B(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.2M(55.0M)->16.2M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.00, real=0.43 secs]

18.269: [Full GC 16M->16M(55M), 0.4366570 secs]

[Eden: 0.0B(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.2M(55.0M)->16.2M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.00, real=0.44 secs]

18.707: [Full GC 16M->16M(55M), 0.4348820 secs]

[Eden: 0.0B(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.2M(55.0M)->16.2M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.01, real=0.44 secs]

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main-EventThread"

19.144: [Full GC 16M->16M(55M), 0.4509780 secs]

[Eden: 1024.0K(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.4M(55.0M)->16.3M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.52 sys=0.00, real=0.45 secs]

19.595: [Full GC 16M->16M(55M), 0.4968770 secs]

[Eden: 0.0B(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.3M(55.0M)->16.3M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.57 sys=0.00, real=0.50 secs]

20.093: [Full GC 16M->16M(55M), 0.4394240 secs]

[Eden: 1024.0K(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.3M(55.0M)->16.3M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.52 sys=0.00, real=0.44 secs]

20.533: [Full GC 16M->16M(55M), 0.4386390 secs]

[Eden: 0.0B(31.0M)->0.0B(31.0M) Survivors: 0.0B->0.0B Heap: 16.3M(55.0M)->16.3M(55.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.00, real=0.44 secs]

20.972: [Full GC 16M->16M(54M), 0.4509180 secs]

[Eden: 1024.0K(31.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.3M(55.0M)->16.2M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.01, real=0.45 secs]

21.423: [Full GC 16M->16M(54M), 0.4361620 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.2M(54.0M)->16.2M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.00, real=0.43 secs]

21.860: [Full GC 16M->16M(54M), 0.4400430 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.2M(54.0M)->16.2M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.00, real=0.44 secs]

22.301: [Full GC 16M->16M(54M), 0.4366030 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.2M(54.0M)->16.2M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.00, real=0.44 secs]

Exception in thread "main" 22.738: [Full GC 16M->16M(54M), 0.4506170 secs]

[Eden: 1024.0K(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.2M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.00, real=0.45 secs]

23.189: [Full GC 16M->16M(54M), 0.5033690 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.1M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.57 sys=0.00, real=0.51 secs]

23.693: [Full GC 16M->16M(54M), 0.4380840 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.1M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.00, real=0.43 secs]

24.131: [Full GC 16M->16M(54M), 0.4374120 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.1M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.00, real=0.43 secs]

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"

24.570: [Full GC 16M->16M(54M), 0.4539170 secs]

[Eden: 1024.0K(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.2M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.52 sys=0.00, real=0.46 secs]

25.024: [Full GC 16M->16M(54M), 0.4379990 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.1M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.01, real=0.44 secs]

25.463: [Full GC 16M->16M(54M), 0.4399900 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.1M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.51 sys=0.00, real=0.44 secs]

25.903: [Full GC 16M->16M(54M), 0.4379480 secs]

[Eden: 0.0B(30.0M)->0.0B(30.0M) Survivors: 0.0B->0.0B Heap: 16.1M(54.0M)->16.1M(54.0M)], [Perm: 83967K->83967K(83968K)]

[Times: user=0.50 sys=0.00, real=0.43 secs]

Heap

garbage-first heap total 55296K, used 16496K [0x000000067ae00000, 0x000000067e400000, 0x00000007fae00000)

region size 1024K, 0 young (0K), 0 survivors (0K)

compacting perm gen total 83968K, used 83967K [0x00000007fae00000, 0x0000000800000000, 0x0000000800000000)

the space 83968K, 99% used [0x00000007fae00000, 0x00000007ffffffb8, 0x0000000800000000, 0x0000000800000000)


Many Thanks,

Sathish

How big is your current PermGen size? Maybe it's too small for the number of classes being loaded; can you try increasing it?

Thanks,

Jon

Thanks Jon,

Increasing the PermGen size solved the issue.

Regards,

Sathish