Caused by: java.io.IOException: File name too long

We are using Hadoop batch ingestion to load data into Druid, and it has worked fine until now. After increasing the number of dimensions, the indexing task fails with the following error:

java.lang.Exception: java.io.IOException: File name too long

at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529) [hadoop-mapreduce-client-common-2.7.3.jar:?]

Caused by: java.io.IOException: File name too long

at java.io.UnixFileSystem.createFileExclusively(Native Method) ~[?:1.8.0_131]

at java.io.File.createTempFile(File.java:2024) ~[?:1.8.0_131]

at io.druid.segment.data.TmpFileIOPeon.makeOutputStream(TmpFileIOPeon.java:62) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.segment.data.GenericIndexedWriter.open(GenericIndexedWriter.java:130) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.segment.StringDimensionMergerV9.writeMergedValueMetadata(StringDimensionMergerV9.java:165) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.segment.IndexMergerV9.writeDimValueAndSetupDimConversion(IndexMergerV9.java:546) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.segment.IndexMergerV9.makeIndexFiles(IndexMergerV9.java:177) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.segment.IndexMerger.merge(IndexMerger.java:434) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.segment.IndexMerger.persist(IndexMerger.java:178) ~[druid-processing-0.10.1.jar:0.10.1]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.persist(IndexGeneratorJob.java:508) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:686) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]

at io.druid.indexer.IndexGeneratorJob$IndexGeneratorReducer.reduce(IndexGeneratorJob.java:480) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]

at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]

at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_131]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_131]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_131]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_131]

at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]

2017-10-14T06:42:03,254 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - map 100% reduce 100%

2017-10-14T06:42:03,254 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_local1248075607_0001 failed with state FAILED due to: NA

2017-10-14T06:42:03,337 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 30

File System Counters

FILE: Number of bytes read=3553360504

FILE: Number of bytes written=1573799653

FILE: Number of read operations=0

FILE: Number of large read operations=0

FILE: Number of write operations=0

Map-Reduce Framework

Map input records=1000

Map output records=1000

Map output bytes=2352242

Map output materialized bytes=2359079

Input split bytes=313

Combine input records=0

Combine output records=0

Reduce input groups=0

Reduce shuffle bytes=2359079

Reduce input records=0

Reduce output records=0

Spilled Records=1000

Shuffled Maps =490

Failed Shuffles=0

Merged Map outputs=490

GC time elapsed (ms)=182

Total committed heap usage (bytes)=524623544320

Shuffle Errors

BAD_ID=0

CONNECTION=0

IO_ERROR=0

WRONG_LENGTH=0

WRONG_MAP=0

WRONG_REDUCE=0

File Input Format Counters

Bytes Read=0

File Output Format Counters

Bytes Written=3912

2017-10-14T06:42:03,386 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[var/druid/hadoop-tmp/1e3beb3f-15b0-4a67-b50f-ab5685cab285-session/2017-10-14T064150.527Z_d55e36fe709545d68d9f9cd603268cbe]

2017-10-14T06:42:03,400 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_1e3beb3f-15b0-4a67-b50f-ab5685cab285-session_2017-10-14T06:41:50.197Z, type=index_hadoop, dataSource=1e3beb3f-15b0-4a67-b50f-ab5685cab285-session}]

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException

at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:218) ~[druid-indexing-service-0.10.1.jar:0.10.1]

at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:224) ~[druid-indexing-service-0.10.1.jar:0.10.1]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.10.1.jar:0.10.1]

at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.10.1.jar:0.10.1]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_131]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_131]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_131]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]

Caused by: java.lang.reflect.InvocationTargetException

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:215) ~[druid-indexing-service-0.10.1.jar:0.10.1]

… 7 more

Caused by: io.druid.java.util.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!

at io.druid.indexer.JobHelper.runJobs(JobHelper.java:392) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]

at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]

at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:277) ~[druid-indexing-service-0.10.1.jar:0.10.1]

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_131]

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_131]

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_131]

at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_131]

at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:215) ~[druid-indexing-service-0.10.1.jar:0.10.1]

… 7 more

2017-10-14T06:42:03,406 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_1e3beb3f-15b0-4a67-b50f-ab5685cab285-session_2017-10-14T06:41:50.197Z] status changed to [FAILED].

2017-10-14T06:42:03,408 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {

"id" : "index_hadoop_1e3beb3f-15b0-4a67-b50f-ab5685cab285-session_2017-10-14T06:41:50.197Z",

"status" : "FAILED",

"duration" : 9710

}

2017-10-14T06:42:03,413 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.server.listener.announcer.ListenerResourceAnnouncer.stop()] on object[io.druid.query.lookup.LookupResourceListenerAnnouncer@48268eec].

2017-10-14T06:42:03,413 INFO [main] io.druid.curator.announcement.Announcer - unannouncing [/druid/listeners/lookups/__default/10.4.0.5:8100]

2017-10-14T06:42:03,691 INFO [main] io.druid.server.listener.announcer.ListenerResourceAnnouncer - Unannouncing start time on [/druid/listeners/lookups/__default/10.4.0.5:8100]

2017-10-14T06:42:03,691 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.query.lookup.LookupReferencesManager.stop()] on object[io.druid.query.lookup.LookupReferencesManager@a85644c].

2017-10-14T06:42:03,691 INFO [main] io.druid.query.lookup.LookupReferencesManager - LookupReferencesManager is stopping.

2017-10-14T06:42:03,691 INFO [LookupReferencesManager-MainThread] io.druid.query.lookup.LookupReferencesManager - Lookup Management loop exited, Lookup notices are not handled anymore.

2017-10-14T06:42:03,692 INFO [main] io.druid.query.lookup.LookupReferencesManager - LookupReferencesManager is stopped.

2017-10-14T06:42:03,695 INFO [main] org.eclipse.jetty.server.AbstractConnector - Stopped ServerConnector@31723307{HTTP/1.1,[http/1.1]}{0.0.0.0:8100}

2017-10-14T06:42:03,696 INFO [main] org.eclipse.jetty.server.handler.ContextHandler - Stopped o.e.j.s.ServletContextHandler@1b57c345{/,null,UNAVAILABLE}

2017-10-14T06:42:03,698 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.worker.executor.ExecutorLifecycle.stop() throws java.lang.Exception] on object[io.druid.indexing.worker.executor.ExecutorLifecycle@4833eff3].

2017-10-14T06:42:03,698 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.indexing.overlord.ThreadPoolTaskRunner.stop()] on object[io.druid.indexing.overlord.ThreadPoolTaskRunner@1cb9ef52].

2017-10-14T06:42:03,698 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@62b93086].

2017-10-14T06:42:03,700 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.announcement.Announcer.stop()] on object[io.druid.curator.announcement.Announcer@2a39aa2b].

2017-10-14T06:42:03,701 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.curator.discovery.ServerDiscoverySelector.stop() throws java.io.IOException] on object[io.druid.curator.discovery.ServerDiscoverySelector@51c8f62c].

2017-10-14T06:42:03,701 INFO [main] io.druid.curator.CuratorModule - Stopping Curator

2017-10-14T06:42:03,701 INFO [Curator-Framework-0] org.apache.curator.framework.imps.CuratorFrameworkImpl - backgroundOperationsLoop exiting

2017-10-14T06:42:03,705 INFO [main] org.apache.zookeeper.ZooKeeper - Session: 0x15efce132db00c9 closed

2017-10-14T06:42:03,705 INFO [main-EventThread] org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x15efce132db00c9

2017-10-14T06:42:03,705 INFO [main] com.metamx.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.http.client.NettyHttpClient.stop()] on object[com.metamx.http.client.NettyHttpClient@2f4ba1ae].

2017-10-14T06:42:03,714 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.metrics.MonitorScheduler.stop()] on object[com.metamx.metrics.MonitorScheduler@1e1b061].

2017-10-14T06:42:03,715 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void com.metamx.emitter.service.ServiceEmitter.close() throws java.io.IOException] on object[com.metamx.emitter.service.ServiceEmitter@43acd79e].

2017-10-14T06:42:03,715 INFO [main] com.metamx.emitter.core.LoggingEmitter - Close: started [false]

2017-10-14T06:42:03,715 INFO [main] io.druid.java.util.common.lifecycle.Lifecycle$AnnotationBasedHandler - Invoking stop method[public void io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner.stop()] on object[io.druid.initialization.Log4jShutterDownerModule$Log4jShutterDowner@3a790e40].

Does this mean there is a limit on the character length of a dimension key?
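For context, the failing frame is java.io.UnixFileSystem.createFileExclusively, reached via File.createTempFile in Druid's TmpFileIOPeon. The limit is therefore not one Druid imposes on dimension keys, but the filesystem's per-name cap (NAME_MAX, commonly 255 bytes on Linux, and as low as ~143 on ecryptfs home directories). Since the dimension name appears in the temp-file prefix, a long enough name can push the generated file name past that cap. The sketch below is a hypothetical standalone reproduction of the same IOException, not Druid code; the 300-character name is an assumption chosen to exceed a 255-byte NAME_MAX:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;

public class LongFileNameRepro {
    public static void main(String[] args) {
        // Simulate a very long dimension name being used as a temp-file
        // prefix, as TmpFileIOPeon does. 300 chars exceeds the typical
        // 255-byte NAME_MAX on Linux filesystems.
        char[] chars = new char[300];
        Arrays.fill(chars, 'x');
        String longDimensionName = new String(chars);

        try {
            File f = File.createTempFile(longDimensionName, ".tmp");
            System.out.println("created: " + f.getName());
            f.delete();
        } catch (IOException e) {
            // On most Linux filesystems this prints the same message
            // seen in the stack trace above, e.g. "File name too long".
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```

If this reproduces on your machine, shortening the offending dimension names (or pointing java.io.tmpdir / the task's temp directory at a filesystem with a larger name limit) should let the merge phase complete; the exact limit depends on the filesystem backing the temp directory.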