Prevent Middle Manager from filling up disk space ("Spill failed")


Which configuration setting will prevent the Middle Manager's disk space from filling up? When the disk fills, indexing tasks fail with the following error:

Spill failed
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException( ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush( ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close( ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MapTask.closeQuietly( [hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MapTask.runNewMapper( [hadoop-mapreduce-client-core-2.3.0.jar:?]
        at [hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.LocalJobRunner$Job$ [hadoop-mapreduce-client-common-2.3.0.jar:?]
        at java.util.concurrent.Executors$ [?:1.8.0_72-internal]
        at [?:1.8.0_72-internal]
        at java.util.concurrent.ThreadPoolExecutor.runWorker( [?:1.8.0_72-internal]
        at java.util.concurrent.ThreadPoolExecutor$ [?:1.8.0_72-internal]
        at [?:1.8.0_72-internal]
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any valid local directory for output/spill0.out
        at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite( ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite( ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite( ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MROutputFiles.getSpillFileForWrite( ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill( ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$900( ~[hadoop-mapreduce-client-core-2.3.0.jar:?]


These are my current configs:

Number of tasks per middleManager

Task launch parameters

druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager

HTTP server threads

Processing threads and buffers

Hadoop indexing

The middle manager that is failing runs on an m4.xlarge. The problem folders are:

/tmp/hadoop-root @ 14GB

DRUID_PATH/var/tmp @ 15GB
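To see how much headroom is left on the volumes backing those folders, a small check along these lines can help (DRUID_PATH here stands for the Druid install path, as above):

```python
import shutil

def free_gib(path: str) -> float:
    """Free space, in GiB, on the filesystem that contains `path`."""
    return shutil.disk_usage(path).free / 1024**3

# The two suspect directories from above; DRUID_PATH is a placeholder.
for p in ["/tmp/hadoop-root", "DRUID_PATH/var/tmp"]:
    try:
        print(f"{p}: {free_gib(p):.1f} GiB free")
    except FileNotFoundError:
        print(f"{p}: does not exist on this machine")
```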

Thank you for any help in advance.

What types of tasks are you normally running? You can try running fewer tasks per middle manager, or, if you are running a lot of realtime tasks, the most relevant setting is handoff. If you are running out of disk space, try creating smaller segments and handing them off faster.
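As a sketch of the "fewer tasks per middle manager" knob (the property name is from the standard middleManager runtime.properties; the value is illustrative):

```properties
# middleManager runtime.properties
# Maximum number of tasks this middleManager will run concurrently.
# Lowering this limits how much task scratch space can pile up on one node.
druid.worker.capacity=2
```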

Hello Fangjin

For example, a single indexing task attempts to process 40 million events distributed across 31 CSV files on S3; each one around 200MB in size. That would be a day’s worth of data, and I have a need to load at least a year. There is orchestration around the middle managers, so I can spin up dozens of them immediately, but should I consider the EMR solution instead?
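A quick back-of-the-envelope on those numbers (using the 31 files × ~200 MB per day figure above):

```python
# Back-of-the-envelope sizing for the batch load described above.
files_per_day = 31    # CSV files per day on S3
file_size_mb = 200    # approximate size of each file
days = 365            # "at least a year"

raw_per_day_gb = files_per_day * file_size_mb / 1024
raw_per_year_tb = raw_per_day_gb * days / 1024

print(f"~{raw_per_day_gb:.1f} GB/day, ~{raw_per_year_tb:.1f} TB for a year")
# → ~6.1 GB/day, ~2.2 TB for a year
```

Roughly two terabytes of raw CSV is well into the range where a real Hadoop cluster (e.g. EMR) is the usual approach rather than local-mode batch tasks.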

Hey Carlos,

Try adding this to your middleManager’s java startup line:
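A likely form of that addition, assuming the standard JVM temp-directory property (`java.io.tmpdir`) appended to the existing javaOpts; the target path is illustrative:

```properties
# Presumed flag: override the JVM temp dir so task scratch files land on a
# volume with enough free space (here, under the Druid var directory) instead of /tmp.
druid.indexer.runner.javaOpts=-server -Xmx2g -Duser.timezone=UTC -Dfile.encoding=UTF-8 \
  -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager \
  -Djava.io.tmpdir=var/tmp
```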


That should stop it from storing things in /tmp. If after that, var is still not big enough, you might need more disk space on your instance.

Btw, if you have access to a Hadoop cluster, you'll probably find that's a more scalable and reliable way of loading a lot of data in batch in production. The local-mode batch tasks are mostly intended to make people's lives easier in simple development environments.