Tranquility sends data to Druid and Druid creates a new segment, but the old segment can't be queried

1. Process historical data

I roll up data by day with a segmentGranularity of year, so I got this segment:

test_2019-01-01T00:00:00.000Z_2020-01-01T00:00:00.000Z_2019-04-16T02:00:54.159Z

This segment holds data from 2019-01-01 to 2019-04-15, and HDFS is the deep storage.
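For readers unfamiliar with the identifier above: a Druid segment id has the form datasource_intervalStart_intervalEnd_version (plus an optional partition number). A minimal sketch pulling the id apart:

```python
# Sketch: decompose the segment id quoted above.
# rsplit from the right keeps any underscores that might appear
# in the datasource name itself.
segment_id = (
    "test_2019-01-01T00:00:00.000Z"
    "_2020-01-01T00:00:00.000Z"
    "_2019-04-16T02:00:54.159Z"
)
datasource, start, end, version = segment_id.rsplit("_", 3)

print(datasource)          # test
print(start, "->", end)    # the whole 2019 year, since segmentGranularity is "year"
print(version)             # the ingestion task's creation timestamp
```

Note that the interval spans the entire year even though the data only covers 2019-01-01 to 2019-04-15; the version is the timestamp of the task that wrote the segment.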

2. Process realtime data

I send data to Tranquility, and Druid receives it successfully.

From the Coordinator console, I found Druid created a new segment for the realtime data:

test_2019-01-01T00:00:00.000Z_2020-01-01T00:00:00.000Z_2019-04-16T06:22:59.297Z

So I have two segments, one for the historical data and one for the realtime data. But when I query the interval [2019-04-01/2019-04-16] I get an empty list, while with the interval [2019-04-16/2019-04-17] I do get data. That means I can query the realtime segment but not the historical segment.
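One way to narrow this down is to ask the Broker what it can actually see for the missing interval, e.g. with a segmentMetadata query POSTed to the Broker's query endpoint (`/druid/v2`); the Coordinator's segment listing (`GET /druid/coordinator/v1/datasources/test/segments?full`) shows which segments it considers used. A minimal sketch building the query body (host and port are placeholders for your cluster):

```python
import json

# Sketch: a segmentMetadata query to check whether the Broker serves
# any segment for the interval that returns an empty result.
# POST this body to http://<broker-host>:8082/druid/v2
query = {
    "queryType": "segmentMetadata",
    "dataSource": "test",
    "intervals": ["2019-04-01/2019-04-16"],
}
body = json.dumps(query)
print(body)
```

If this query returns nothing while the Coordinator lists the historical segment as loaded, the segment is present but not visible to queries.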

How can I fix this?

Thanks!

The historical data ingestion task JSON:

    {
      "type": "index_hadoop",
      "spec": {
        "dataSchema": {
          "dataSource": "test",
          "parser": {
            "type": "hadoopyString",
            "parseSpec": {
              "format": "json",
              "dimensionsSpec": {
                "dimensions": [
                  some dimensionsSpec …
                ]
              },
              "timestampSpec": {
                "column": "timestamp",
                "format": "posix"
              }
            }
          },
          "metricsSpec": [
            some metricsSpec …
          ],
          "granularitySpec": {
            "type": "uniform",
            "segmentGranularity": "year",
            "queryGranularity": "day",
            "intervals": ["2019-01-01/2019-04-16"],
            "rollup": true
          }
        },
        "ioConfig": {
          "type": "hadoop",
          "inputSpec": {
            "type": "granularity",
            "dataGranularity": "day",
            "pathFormat": "'y'=yyyy/'m'=MM/'d'=dd",
            "inputPath": "hdfs://…/",
            "filePattern": "^hdfs://…$"
          }
        },
        "tuningConfig": {
          "type": "hadoop",
          "partitionsSpec": {
            "type": "hashed",
            "targetPartitionSize": 5000000
          },
          "forceExtendableShardSpecs": true,
          "jobProperties": {
            "mapreduce.job.classloader": true,
            "mapreduce.job.classloader.system.classes": "-javax.validation.,java.,javax.,org.apache.commons.logging.,org.apache.log4j.,org.apache.hadoop.",
            "mapreduce.map.memory.mb": "6144",
            "mapreduce.reduce.memory.mb": "6144",
            "mapreduce.map.java.opts": "-server -Xmx6144m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps",
            "mapreduce.reduce.java.opts": "-server -Xmx6144m -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
          }
        }
      }
    }

Tranquility server.json:

    {
      "dataSources": [
        {
          "spec": {
            "dataSchema": {
              "dataSource": "test",
              "parser": {
                "type": "hadoopyString",
                "parseSpec": {
                  "format": "json",
                  "dimensionsSpec": {
                    "dimensions": [
                      some dimensionsSpec …
                    ]
                  },
                  "timestampSpec": {
                    "column": "timestamp",
                    "format": "posix"
                  }
                }
              },
              "metricsSpec": [
                some metricsSpec …
              ],
              "granularitySpec": {
                "type": "uniform",
                "segmentGranularity": "year",
                "queryGranularity": "day",
                "intervals": null,
                "rollup": true
              }
            },
            "ioConfig": {
              "type": "realtime"
            },
            "tuningConfig": {
              "type": "realtime",
              "maxRowsInMemory": "50000",
              "intermediatePersistPeriod": "PT10M",
              "windowPeriod": "P1D"
            },
            "properties": {
              "task.partitions": "1",
              "task.replicants": "2"
            }
          }
        }
      ],
      "properties": {
        "zookeeper.connect": "XXXX:XXX,XXXX:XXX",
        "druid.discovery.curator.path": "/druid/discovery",
        "druid.selectors.indexing.serviceName": "druid/overlord",
        "http.port": "8200",
        "http.threads": "40",
        "serialization.format": "smile",
        "druidBeam.taskLocator": "overlord"
      }
    }

I wonder if the realtime segment is overshadowing the historical segment.
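That would be consistent with the symptoms: both segments cover the identical year interval, and the realtime segment carries the later version timestamp. A toy sketch of the overshadowing rule (not Druid's actual implementation): when one segment spans the same or a containing interval with a lexicographically greater version, it overshadows the other, and only the winning version is queryable.

```python
# Toy sketch of Druid's version-based overshadowing, applied to the
# two segment ids quoted earlier in this thread.
def overshadows(seg_a, seg_b):
    """seg_a overshadows seg_b if it covers seg_b's interval
    and has a greater (newer) version string."""
    return (seg_a["start"] <= seg_b["start"]
            and seg_a["end"] >= seg_b["end"]
            and seg_a["version"] > seg_b["version"])

batch = {"start": "2019-01-01", "end": "2020-01-01",
         "version": "2019-04-16T02:00:54.159Z"}
realtime = {"start": "2019-01-01", "end": "2020-01-01",
            "version": "2019-04-16T06:22:59.297Z"}

print(overshadows(realtime, batch))  # True: the realtime version hides the batch segment
```

ISO-8601 timestamps compare correctly as plain strings, which is why version comparison here is just `>`.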

Thanks,

Jon