Peon - Address already bound

Hi Guys,

I've kind of hit a wall with my Druid evaluation; any help is highly appreciated!

Basically, I have deployed a Druid cluster in AWS using Aurora jobs. The wiring is fine, in that the nodes discover each other. I have a basic index task that loads JSON data and indexes it. The Indexing Service runs remotely (the MiddleManager and Overlord run as separate processes).

When the task is started, the peon starts up and then fails with java.net.BindException: Address already in use.
I have supplied the MiddleManager with a 'Worker' start port as below, and the MiddleManager itself is assigned an Aurora port.

MiddleManager runtime properties:

druid.service=druid/middlemanager
druid.host=
druid.port=

Processing threads and buffers

druid.processing.buffer.sizeBytes=100000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=2
druid.processing.tmpDir=var/druid/processing

Resources for peons

druid.indexer.runner.javaOpts=-server -Xmx512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.restoreTasksOnRestart=true
druid.indexer.runner.startPort=40000

Peon properties

druid.indexer.fork.property.druid.monitoring.monitors=["com.metamx.metrics.JvmMonitor"]
druid.indexer.fork.property.druid.processing.buffer.sizeBytes=25000000
druid.worker.capacity=2
druid.worker.ip=localhost
druid.worker.version=0
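As an aside, when chasing a BindException like the one in the logs below, it can help to check from the box which candidate ports are actually free before the task starts. A minimal sketch (plain Python, not part of Druid; the port numbers are just this thread's examples):

```python
import socket

def port_is_free(port, host=""):
    """Return True if a TCP listener can bind `port` right now.
    An OSError here is the Python analogue of Java's BindException."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Check the MiddleManager's Aurora-assigned port and the start of the peon range.
for p in (31183, 40000, 40001):
    print(p, "free" if port_is_free(p) else "in use")
```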
Below are the task logs for the failing task. Essentially, I can see the startPort parameter being honoured; however, it looks like the peon also comes up with the port that Aurora assigned to the MiddleManager:
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.runner.startPort: 40000

2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.plaintextPort: 31183
2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.port: 40000
31183 is the Aurora-assigned port for the MiddleManager. I'm not sure what the issue here is. Is this a bug?
Peon - start up log for an indexing task:
2018-04-23T12:12:26,050 INFO [main] io.druid.cli.CliPeon - Starting up with processors[16], memory[494,927,872], maxMemory[536,870,912].
2018-04-23T12:12:26,051 INFO [main] io.druid.cli.CliPeon - * awt.toolkit: sun.awt.X11.XToolkit
2018-04-23T12:12:26,051 INFO [main] io.druid.cli.CliPeon - * druid.emitter: logging
2018-04-23T12:12:26,051 INFO [main] io.druid.cli.CliPeon - * druid.emitter.logging.logLevel: info
2018-04-23T12:12:26,051 INFO [main] io.druid.cli.CliPeon - * druid.extensions.coordinates: ["io.druid.extensions:postgresql-metadata-storage", "io.druid.extensions:druid-histogram"]
2018-04-23T12:12:26,051 INFO [main] io.druid.cli.CliPeon - * druid.extensions.directory: /opt/druid/extensions
2018-04-23T12:12:26,051 INFO [main] io.druid.cli.CliPeon - * druid.extensions.loadList: ["postgresql-metadata-storage", "druid-histogram"]
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.host: ip-10-32-115-9.ec2.internal
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.fork.property.druid.monitoring.monitors: ["com.metamx.metrics.JvmMonitor"]
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.fork.property.druid.processing.buffer.sizeBytes: 25000000

2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.logs.type: file
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.runner.javaOpts: -server -Xmx512m -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.runner.startPort: 40000
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.indexer.task.restoreTasksOnRestart: true
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.metadata.storage.connector.password:
2018-04-23T12:12:26,052 INFO [main] io.druid.cli.CliPeon - * druid.metadata.storage.type: postgresql
2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.monitoring.emissionPeriod: PT120m
2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.monitoring.monitors: ["com.metamx.metrics.JvmMonitor"]
2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.plaintextPort: 31183
2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.port: 40000
2018-04-23T12:12:26,053 INFO [main] io.druid.cli.CliPeon - * druid.processing.buffer.sizeBytes: 25000000
2018-04-23T12:12:26,054 INFO [main] io.druid.cli.CliPeon - * druid.processing.numMergeBuffers: 2
2018-04-23T12:12:26,054 INFO [main] io.druid.cli.CliPeon - * druid.processing.numThreads: 2
2018-04-23T12:12:26,054 INFO [main] io.druid.cli.CliPeon - * druid.processing.tmpDir: var/druid/processing
2018-04-23T12:12:26,054 INFO [main] io.druid.cli.CliPeon - * druid.server.http.numThreads: 40
2018-04-23T12:12:26,054 INFO [main] io.druid.cli.CliPeon - * druid.startup.logging.logProperties: true
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.storage.storageDirectory: var/druid/segments
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.storage.type: local
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.tlsPort: -1
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.worker.capacity: 2
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.worker.ip: localhost
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.worker.version: 0
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.zk.service.acl: true
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * druid.zk.service.authScheme: digest
2018-04-23T12:12:26,055 INFO [main] io.druid.cli.CliPeon - * file.encoding: UTF-8
2018-04-23T12:12:26,056 INFO [main] io.druid.cli.CliPeon - * file.encoding.pkg: sun.io
2018-04-23T12:12:26,056 INFO [main] io.druid.cli.CliPeon - * file.separator: /
**Peon - Task Failure:**
2018-04-23T12:12:27,555 WARN [main] com.sun.jersey.spi.inject.Errors - The following warnings have been detected with resource and/or provider classes:
WARNING: A HTTP GET method, public void io.druid.server.http.SegmentListerResource.getSegments(long,long,long,javax.servlet.http.HttpServletRequest) throws java.io.IOException, MUST return a non-void type.
2018-04-23T12:12:27,565 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Started o.e.j.s.ServletContextHandler@52ecc989{/,null,AVAILABLE}
2018-04-23T12:12:27,571 ERROR [main] io.druid.server.initialization.jetty.JettyServerModule - Jetty lifecycle event failed [class org.eclipse.jetty.server.Server]
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method) ~[?:1.8.0_161]
at sun.nio.ch.Net.bind(Net.java:433) ~[?:1.8.0_161]
at sun.nio.ch.Net.bind(Net.java:425) ~[?:1.8.0_161]
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223) ~[?:1.8.0_161]
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) ~[?:1.8.0_161]
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:317) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:235) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
at org.eclipse.jetty.server.Server.doStart(Server.java:401) ~[jetty-server-9.3.19.v20170502.jar:9.3.19.v20170502]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.3.19.v20170502.jar:9.3.19.v20170502]
at io.druid.server.initialization.jetty.JettyServerModule$2.start(JettyServerModule.java:351) [druid-server-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
at io.druid.java.util.common.lifecycle.Lifecycle.start(Lifecycle.java:311) [java-util-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:134) [druid-api-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:107) [druid-services-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
at io.druid.cli.CliPeon.run(CliPeon.java:321) [druid-services-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]
at io.druid.cli.Main.main(Main.java:116) [druid-services-0.13.0-SNAPSHOT.jar:0.13.0-SNAPSHOT]

Just some additional information: the Peon starts up with the same HTTP port that was assigned to the MiddleManager.

Hopefully it's just a misconfiguration?

Hey there,

Yeah, the indexers spawn Peons on the same box as themselves, so you'll need to ensure that druid.port and druid.indexer.runner.startPort don't conflict.
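In other words, the MiddleManager's own listener and the peon port range need to be disjoint. A sketch of what that might look like (the port numbers are illustrative, not from the original post):

```properties
# MiddleManager's own HTTP listener (here: a fixed example port,
# or whatever Aurora assigns to the MiddleManager process)
druid.port=8091

# First port handed out to spawned peons; with druid.worker.capacity=2
# the peons get 40000, 40001, ... Keep this range clear of druid.port
# and of the scheduler's assignable port range.
druid.indexer.runner.startPort=40000
```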

Hey Dylan,

Not sure what was wrong. I changed

druid.plaintextPort=

to

druid.port=

while defining a different startPort in both runs.

Varaga

druid.host=

What happens if you remove the :port from druid.host?
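For context on that suggestion: a druid.host value of the form "host:port" carries an explicit port, which can end up advertised instead of the one the peon computed from startPort. A toy illustration of that kind of parsing (hypothetical helper, not Druid's actual code):

```python
def split_host_port(host_setting):
    """Split a 'host' or 'host:port' setting into its parts.
    Illustrative only; not Druid's actual parsing code."""
    head, sep, tail = host_setting.rpartition(":")
    if sep and tail.isdigit():
        return head, int(tail)
    return host_setting, None

print(split_host_port("ip-10-32-115-9.ec2.internal:31183"))
# -> ('ip-10-32-115-9.ec2.internal', 31183)
print(split_host_port("ip-10-32-115-9.ec2.internal"))
# -> ('ip-10-32-115-9.ec2.internal', None)
```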