Heap Memory Exception During Batch Ingestion

While ingesting data with batch processing, I get the following error whenever the file size is greater than 50 MB.
**<<<< Error Log Start >>>>**

2017-11-10T10:57:04,472 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner -
2017-11-10T10:57:04,472 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.MapTask - Starting flush of map output
2017-11-10T10:57:04,474 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task:attempt_local1416875590_0001_m_000015_0 is done. And is in the process of committing
2017-11-10T10:57:04,477 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - map
2017-11-10T10:57:04,477 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.Task - Task 'attempt_local1416875590_0001_m_000015_0' done.
2017-11-10T10:57:04,477 INFO [LocalJobRunner Map Task Executor #0] org.apache.hadoop.mapred.LocalJobRunner - Finishing task: attempt_local1416875590_0001_m_000015_0
2017-11-10T10:57:04,477 INFO [Thread-29] org.apache.hadoop.mapred.LocalJobRunner - map task executor complete.
2017-11-10T10:57:04,478 WARN [Thread-29] org.apache.hadoop.mapred.LocalJobRunner - job_local1416875590_0001
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
	at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) [hadoop-mapreduce-client-common-2.7.3.jar:?]
Caused by: java.lang.OutOfMemoryError: Java heap space
	at java.nio.HeapCharBuffer.<init>(HeapCharBuffer.java:57) ~[?:1.8.0_144]
	at java.nio.CharBuffer.allocate(CharBuffer.java:335) ~[?:1.8.0_144]
	at java.nio.charset.CharsetDecoder.decode(CharsetDecoder.java:795) ~[?:1.8.0_144]
	at org.apache.hadoop.io.Text.decode(Text.java:412) ~[hadoop-common-2.7.3.jar:?]
	at org.apache.hadoop.io.Text.decode(Text.java:389) ~[hadoop-common-2.7.3.jar:?]
	at org.apache.hadoop.io.Text.toString(Text.java:280) ~[hadoop-common-2.7.3.jar:?]
	at io.druid.indexer.HadoopyStringInputRowParser.parse(HadoopyStringInputRowParser.java:49) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]
	at io.druid.indexer.HadoopDruidIndexerMapper.parseInputRow(HadoopDruidIndexerMapper.java:105) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]
	at io.druid.indexer.HadoopDruidIndexerMapper.map(HadoopDruidIndexerMapper.java:72) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]
	at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]
	at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) ~[hadoop-mapreduce-client-core-2.7.3.jar:?]
	at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) ~[hadoop-mapreduce-client-common-2.7.3.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_144]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_144]
2017-11-10T10:57:05,146 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Job job_local1416875590_0001 failed with state FAILED due to: NA
2017-11-10T10:57:05,170 INFO [task-runner-0-priority-0] org.apache.hadoop.mapreduce.Job - Counters: 17
	File System Counters
		FILE: Number of bytes read=49661003551
		FILE: Number of bytes written=4505925
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
	Map-Reduce Framework
		Map input records=0
		Map output records=0
		Map output bytes=0
		Map output materialized bytes=720
		Input split bytes=4215
		Combine input records=0
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=8
		Total committed heap usage (bytes)=31677480960
	File Input Format Counters
		Bytes Read=0
2017-11-10T10:57:05,175 INFO [task-runner-0-priority-0] io.druid.indexer.JobHelper - Deleting path[var/druid/hadoop-tmp/Brand Tracking/2017-11-10T105653.241Z_8337f177a8fa4525b398e8fbb4aaae20]
2017-11-10T10:57:05,188 ERROR [task-runner-0-priority-0] io.druid.indexing.overlord.ThreadPoolTaskRunner - Exception while running task[HadoopIndexTask{id=index_hadoop_Brand Tracking_2017-11-10T10:56:53.169Z, type=index_hadoop, dataSource=Brand Tracking}]
java.lang.RuntimeException: java.lang.reflect.InvocationTargetException
	at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[guava-16.0.1.jar:?]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:218) ~[druid-indexing-service-0.10.1.jar:0.10.1]
	at io.druid.indexing.common.task.HadoopIndexTask.run(HadoopIndexTask.java:224) ~[druid-indexing-service-0.10.1.jar:0.10.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:436) [druid-indexing-service-0.10.1.jar:0.10.1]
	at io.druid.indexing.overlord.ThreadPoolTaskRunner$ThreadPoolTaskRunnerCallable.call(ThreadPoolTaskRunner.java:408) [druid-indexing-service-0.10.1.jar:0.10.1]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_144]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_144]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_144]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:215) ~[druid-indexing-service-0.10.1.jar:0.10.1]
	... 7 more
Caused by: io.druid.java.util.common.ISE: Job[class io.druid.indexer.IndexGeneratorJob] failed!
	at io.druid.indexer.JobHelper.runJobs(JobHelper.java:392) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]
	at io.druid.indexer.HadoopDruidIndexerJob.run(HadoopDruidIndexerJob.java:95) ~[druid-indexing-hadoop-0.10.1.jar:0.10.1]
	at io.druid.indexing.common.task.HadoopIndexTask$HadoopIndexGeneratorInnerProcessing.runTask(HadoopIndexTask.java:277) ~[druid-indexing-service-0.10.1.jar:0.10.1]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_144]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_144]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_144]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_144]
	at io.druid.indexing.common.task.HadoopTask.invokeForeignLoader(HadoopTask.java:215) ~[druid-indexing-service-0.10.1.jar:0.10.1]
	... 7 more
2017-11-10T10:57:05,194 INFO [task-runner-0-priority-0] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_hadoop_Brand Tracking_2017-11-10T10:56:53.169Z] status changed to [FAILED].
2017-11-10T10:57:05,197 INFO [task-runner-0-priority-0] io.druid.indexing.worker.executor.ExecutorLifecycle - Task completed with status: {
  "id" : "index_hadoop_Brand Tracking_2017-11-10T10:56:53.169Z",
  "status" : "FAILED",
  "duration" : 8629
}
**<<<< Error Log End >>>>**

I tried to increase the heap memory limit in the MiddleManager runtime properties, as pasted below.
**<<<< Middle Manager jvm Config >>>>**
-server
-Xms3g
-Xmx3g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
**<<<< Middle Manager jvm Config End >>>>**
**<<<< Middle Manager Properties Start >>>>**
druid.service=druid/middleManager
druid.port=8091
# Number of tasks per middleManager
druid.worker.capacity=3
# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx4g -XX:+UseG1GC -XX:MaxMetaspaceSize=1g -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Duser.timezone=UTC -Dfile.encoding=UTF-8 -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
druid.indexer.task.baseTaskDir=var/druid/task
# HTTP server threads
druid.server.http.numThreads=9
# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2
# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=var/druid/hadoop-tmp
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.3.0"]
**<<<< Middle Manager Properties End >>>>**
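For reference, I understand the Hadoop map-task heap can also be set through the ingestion spec's `tuningConfig` rather than the MiddleManager config. A sketch of what I believe that would look like (property names taken from the Hadoop MapReduce docs; the exact sizes are guesses, not tested values):

```json
"tuningConfig": {
  "type": "hadoop",
  "jobProperties": {
    "mapreduce.map.memory.mb": "4096",
    "mapreduce.map.java.opts": "-server -Xmx3g -Duser.timezone=UTC -Dfile.encoding=UTF-8",
    "mapreduce.reduce.memory.mb": "8192",
    "mapreduce.reduce.java.opts": "-server -Xmx6g -Duser.timezone=UTC -Dfile.encoding=UTF-8"
  }
}
```

I am not sure whether these apply in my case, though, since the log shows `LocalJobRunner`, and in local mode the map tasks may run inside the peon JVM itself rather than in separate containers.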

I am running the index job in an EC2 environment on an m4.4xlarge (64 GB RAM, 16 vCPUs). This seems odd, as I thought the machine was big enough to handle it.
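One thing I plan to check (a quick sketch; `data.json` stands in for my actual input file): whether the file contains a single very long line. The stack trace shows the OOM inside `Text.toString()` called from `HadoopyStringInputRowParser.parse`, which materializes one whole input line as a char buffer, so a single multi-hundred-MB line could blow the mapper heap regardless of total file size.

```shell
# Hypothetical check: print the length of the longest line in the input.
# A line of N bytes needs roughly 2*N bytes of heap just for the decoded
# char buffer, before any parsing happens.
awk '{ if (length($0) > max) max = length($0) } END { print max + 0 }' data.json
```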