Plumber persist error

I am using tranquility-kafka.

I get this error a few times. Why would Druid be creating file names above the filesystem limit?

Exception in thread "plumber_persist_13" java.lang.RuntimeException: java.io.IOException: File name too long
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at io.druid.segment.realtime.plumber.RealtimePlumber.persistHydrant(RealtimePlumber.java:1049)
	at io.druid.segment.realtime.plumber.RealtimePlumber$3.doRun(RealtimePlumber.java:445)
	at io.druid.common.guava.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:42)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: File name too long
	at java.io.UnixFileSystem.createFileExclusively(Native Method)
	at java.io.File.createTempFile(File.java:2024)
	at java.io.File.createTempFile(File.java:2070)
	at io.druid.segment.data.TmpFileIOPeon.makeOutputStream(TmpFileIOPeon.java:55)
	at io.druid.segment.data.GenericIndexedWriter.open(GenericIndexedWriter.java:68)
	at io.druid.segment.IndexMerger.makeIndexFiles(IndexMerger.java:657)
	at io.druid.segment.IndexMerger.merge(IndexMerger.java:421)

Which versions of Druid and Tranquility are you using?

Druid 0.9.0, Tranquility 0.7.4

Hi Rishi, what is your operating system?

I think that code is writing out temp files named after columns. Is it possible that one of your column names is too long to be a valid filename? Perhaps you are using dimension discovery (an empty dimensions list in your schema) and some of your JSON objects have very long field names.
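You can reproduce the underlying OS behavior without Druid at all. The sketch below (Python, purely illustrative; the 300-character name is a made-up stand-in for a discovered field name) mimics what `TmpFileIOPeon` does when it calls `File.createTempFile` with the column name as the prefix: on most Linux filesystems a single path component is capped at NAME_MAX (typically 255 bytes), so any longer name fails with ENAMETOOLONG, i.e. "File name too long".

```python
import errno
import tempfile

# Hypothetical discovered dimension name longer than NAME_MAX
# (255 bytes on ext4 and most other Linux filesystems).
long_column = "x" * 300

try:
    # Analogous to Druid's File.createTempFile(columnName, ...):
    # the column name becomes part of the temp file's name.
    tempfile.NamedTemporaryFile(prefix=long_column, suffix=".tmp")
except OSError as e:
    # The OS rejects the path component, matching the stack trace's
    # java.io.IOException: File name too long
    print(errno.errorcode[e.errno])
```

If that is the cause, listing dimensions explicitly in the schema (instead of relying on discovery), or shortening/renaming the offending field upstream, should avoid the error.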