Can't restart historical node

I can’t restart my historical node.

It always fails with the following memory warnings and exception:

OpenJDK 64-Bit Server VM warning: Attempt to protect stack guard pages failed.
OpenJDK 64-Bit Server VM warning: Attempt to deallocate stack guard pages failed.
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000007f7eb0000, 196608, 0) failed; error='Cannot allocate memory' (errno=12)
2015-10-28 07:19:39,653 ERROR o.a.c.f.l.ListenerContainer [ZkCoordinator-0] Listener (io.druid.server.coordination.BaseZkCoordinator$1@6932a575) threw an exception
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method) ~[?:1.7.0_79]
    at java.lang.Thread.start( ~[?:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor.addWorker( ~[?:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor.ensurePrestart( ~[?:1.7.0_79]
    at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute( ~[?:1.7.0_79]
    at java.util.concurrent.ScheduledThreadPoolExecutor.schedule( ~[?:1.7.0_79]
    at io.druid.server.coordination.ZkCoordinator.removeSegment( ~[druid-server-0.8.1-rc2.jar:0.8.1-rc2]
    at io.druid.server.coordination.ZkCoordinator.loadSegment( ~[druid-server-0.8.1-rc2.jar:0.8.1-rc2]
    at io.druid.server.coordination.ZkCoordinator.addSegment( ~[druid-server-0.8.1-rc2.jar:0.8.1-rc2]
    at io.druid.server.coordination.SegmentChangeRequestLoad.go( ~[druid-server-0.8.1-rc2.jar:0.8.1-rc2]
    at io.druid.server.coordination.BaseZkCoordinator$1.childEvent( ~[druid-server-0.8.1-rc2.jar:0.8.1-rc2]
    at$5.apply( ~[curator-recipes-2.8.0.jar:?]
    at$5.apply( ~[curator-recipes-2.8.0.jar:?]
    at org.apache.curator.framework.listen.ListenerContainer$ [curator-framework-2.8.0.jar:?]
    at$SameThreadExecutorService.execute( [guava-16.0.1.jar:?]
    at org.apache.curator.framework.listen.ListenerContainer.forEach( [curator-framework-2.8.0.jar:?]
    at [curator-recipes-2.8.0.jar:?]
    at [curator-recipes-2.8.0.jar:?]
    at$ [curator-recipes-2.8.0.jar:?]
    at java.util.concurrent.Executors$ [?:1.7.0_79]
    at [?:1.7.0_79]
    at java.util.concurrent.Executors$ [?:1.7.0_79]
    at [?:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.7.0_79]
    at java.util.concurrent.ThreadPoolExecutor$ [?:1.7.0_79]
    at [?:1.7.0_79]

2015-10-28 07:19:39,691 INFO i.d.s.c.ZkCoordinator [ZkCoordinator-0] zNode[/druid/loadQueue/] was removed

On Wednesday, October 28, 2015 at 3:43:35 PM UTC+8, luo…@conew.com wrote:

I believe you need to increase the ulimit for max user processes.

Try running "ulimit -a" to see your current configuration, and try increasing it.
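A quick sketch of how one might check and raise that limit for the shell that launches the node (65535 is an illustrative value, not a recommendation from this thread; on Linux every Java thread counts against this limit):

```shell
# Show the current per-user limit on processes/threads
ulimit -u

# Raise the soft limit for the current shell before starting the node
# (65535 is illustrative; it cannot exceed the hard limit)
ulimit -S -u 65535
```

To make the change permanent, one would typically add a `nproc` entry for the druid user in /etc/security/limits.conf and log in again.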

Hi, Nishant:
Here is the output of "ulimit -a":

[luotao@yw-0-0 ~]$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 514942
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4194304
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 10000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

I think the "max user processes" limit is large enough.

I cleared the local segments from the historical node and restarted it. It reloaded the segments from deep storage and is working well now…

I'm still confused about the root cause of this exception.
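One detail worth checking on the "is 10000 enough" question: on Linux the -u limit counts every thread the user owns, not just processes, so a historical node loading many segments in parallel consumes it faster than a process count suggests. A minimal way to inspect thread counts via /proc (using the shell's own PID, $$, as a stand-in for the historical node's PID):

```shell
# Thread count of one specific process ($$ stands in for the node's PID)
grep '^Threads:' /proc/$$/status

# Total threads across all processes visible in /proc -- on a shared box
# you would filter this down to the druid user's processes
cat /proc/*/status 2>/dev/null | awk '/^Threads:/ {sum += $2} END {print sum}'
```

If the total is close to the -u limit, thread creation fails with exactly this "unable to create new native thread" OutOfMemoryError.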

On Wednesday, October 28, 2015 at 6:38:42 PM UTC+8, Nishant Bangarwa wrote:


Did it start failing suddenly (after working for a while), or is this a fresh setup? Or did you do any OS updates that triggered this?

– Himanshu

Hi, Himanshu:

It started failing suddenly while loading the locally stored segments. I didn't do any OS updates.

On Friday, October 30, 2015 at 12:10:20 PM UTC+8, Himanshu Gupta wrote:


Since it is failing with an OOM, I would first check whether there is enough free memory available on the system and whether the process is getting appropriate -Xmx and -XX:MaxDirectMemorySize settings.

also see if is any help.

– Himanshu
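Following up on the free-memory point: with the 10240 KB stack size shown in the ulimit output above, each new native thread reserves roughly 10 MB, so this particular OutOfMemoryError usually means the OS refused the allocation rather than the Java heap filling up. A rough, illustrative headroom estimate (assumes a Linux kernel that reports MemAvailable in /proc/meminfo):

```shell
# Stack reserved per native thread, in KB (10240 KB per the ulimit output)
stack_kb=10240

# Memory the kernel reports as still available, in KB
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)

# Upper bound on additional 10 MB thread stacks that would still fit
echo "approx. additional threads that fit: $((avail_kb / stack_kb))"
```

If this number is small while the node is loading segments, lowering the JVM's -Xss (thread stack size) or reducing concurrent load threads are common mitigations.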