Why are the re-indexed segments smaller after hadoop-indexing tasks?

Hi guys! I already opened a thread here -> https://groups.google.com/forum/#!topic/druid-user/3d0gaPn9gNo

but I will ask here as well…

Before re-indexing, each hourly segment was about 200 kB (a single shard); after re-indexing, the daily segment is only about 60 kB. I expected the merged segments to be larger, not smaller. Why is that?

I am using hadoop-indexing task:

{
  "type": "index_hadoop",
  "spec": {
    "dataSchema": {
      "dataSource": "reicmp2",
      "parser": {
        "type": "hadoopyString",
        "parseSpec": {
          "format": "json",
          "timestampSpec": {
            "column": "timestamp",
            "format": "auto"
          },
          "dimensionsSpec": {
            "spatialDimensions": [],
            "dimensions": ["ping_lost", "endpoint", "min", "max", "company", "probe", "tool", "site", "ping_total", "mdev", "time", "probe_ip", "avg", "ipv4", "percent_response"]
          }
        }
      },
      "metricsSpec": [
        { "type": "count",     "name": "event_count" },
        { "type": "longSum",   "fieldName": "response_percent", "name": "response_percent_sum" },
        { "type": "longMin",   "fieldName": "response_percent", "name": "response_percent_min" },
        { "type": "longMax",   "fieldName": "response_percent", "name": "response_percent_max" },
        { "type": "doubleSum", "fieldName": "media_deviation",  "name": "media_deviation_sum" },
        { "type": "doubleMin", "fieldName": "media_deviation",  "name": "media_deviation_min" },
        { "type": "doubleMax", "fieldName": "media_deviation",  "name": "media_deviation_max" },
        { "type": "doubleSum", "fieldName": "minimum_rtt",      "name": "minimum_rtt_sum" },
        { "type": "doubleMin", "fieldName": "minimum_rtt",      "name": "minimum_rtt_min" },
        { "type": "doubleMax", "fieldName": "minimum_rtt",      "name": "minimum_rtt_max" },
        { "type": "doubleSum", "fieldName": "average_rtt",      "name": "average_rtt_sum" },
        { "type": "doubleMin", "fieldName": "average_rtt",      "name": "average_rtt_min" },
        { "type": "doubleMax", "fieldName": "average_rtt",      "name": "average_rtt_max" },
        { "type": "doubleSum", "fieldName": "maximum_rtt",      "name": "maximum_rtt_sum" },
        { "type": "doubleMin", "fieldName": "maximum_rtt",      "name": "maximum_rtt_min" },
        { "type": "doubleMax", "fieldName": "maximum_rtt",      "name": "maximum_rtt_max" }
      ],
      "granularitySpec": {
        "type": "uniform",
        "segmentGranularity": "DAY",
        "queryGranularity": "MINUTE",
        "intervals": ["2018-01-07T00:00:00Z/P1W"]
      }
    },
    "ioConfig": {
      "type": "hadoop",
      "inputSpec": {
        "type": "dataSource",
        "ingestionSpec": {
          "dataSource": "spark-ef-icmp",
          "intervals": ["2018-01-07T00:00:00Z/P1W"]
        }
      }
    },
    "tuningConfig": {
      "type": "hadoop",
      "leaveIntermediate": true,
      "ignoreInvalidRows": false,
      "numBackgroundPersistThreads": 1
    }
  },
  "hadoopDependencyCoordinates": ["org.apache.hadoop:hadoop-client:2.7.3"]
}
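
In case it helps to compare: the row counts behind those segment sizes can be checked with Druid's SQL system tables (this assumes a Druid version that exposes the `sys` schema; the datasource names are the ones from my spec above):

```sql
-- Compare on-disk size and row counts of the source and re-indexed datasources.
-- sys.segments is only available when Druid SQL and the system schema are enabled.
SELECT "datasource",
       SUM("size")     AS total_bytes,
       SUM("num_rows") AS total_rows,
       COUNT(*)        AS segment_count
FROM sys.segments
WHERE "datasource" IN ('spark-ef-icmp', 'reicmp2')
GROUP BY "datasource";
```

If `total_rows` drops sharply for the re-indexed datasource, that would mean rollup at the new granularity merged many input rows, which would also account for the smaller segments.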

Thanks in advance!!