Segment location issue


I'm trying to find out which nodes serve the segments of a given time interval. To get the list of servers, I send this request to the Coordinator:

curl -s "http://host:port/druid/coordinator/v1/datasources/<datasource>/intervals/2018-09-28T00:00:00.000_2018-09-28T01:00:00.000?full" | jq -r 'to_entries[].value | to_entries[].value.servers[]' | sort | uniq
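As a sanity check, the extraction can be tried against a mocked Coordinator response first. Everything below (hostnames, segment ids, the /tmp path) is made up for illustration; the real `?full` endpoint returns one entry per segment, each carrying its own "servers" array:

```shell
# Mock of a coordinator ?full response: interval -> segment id -> details.
cat <<'EOF' > /tmp/coordinator_response.json
{
  "2018-09-28T00:00:00.000Z/2018-09-28T01:00:00.000Z": {
    "mydatasource_2018-09-28_v1_0": {
      "servers": ["historical-a:8083", "historical-b:8083"]
    },
    "mydatasource_2018-09-28_v1_1": {
      "servers": ["historical-b:8083"]
    }
  }
}
EOF
# Flatten every segment's "servers" array, then dedupe:
jq -r 'to_entries[].value | to_entries[].value.servers[]' \
    /tmp/coordinator_response.json | sort -u
# historical-a:8083
# historical-b:8083
```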

Next, I try sending a "timeBoundary" query to one of those servers, like:

curl -X POST http://host:port/druid/v2 -d '{
  "queryType": "timeBoundary",
  "dataSource": "<datasource>"
}' -H "Content-Type: application/json"

But I receive an empty result:

[]

It looks like there is no data at all; what did I miss here?

Version: 0.11.0
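One way to narrow this down is to send the same body to the broker first and only then to a historical; if the broker returns data but the historical returns [], the data itself is fine. A sketch (the <...> hostnames and the default ports 8082/8083 are assumptions, not from the thread):

```shell
# Build the body once; <datasource> is a placeholder to fill in.
BODY='{"queryType":"timeBoundary","dataSource":"<datasource>"}'

# Sanity-check that the body parses as JSON before sending it anywhere:
echo "$BODY" | jq -e '.queryType'

# Then send it to the broker first, and afterwards directly to a
# historical from the server list, and compare the two responses:
#   curl -s -X POST http://<broker-host>:8082/druid/v2 \
#        -H 'Content-Type: application/json' -d "$BODY"
#   curl -s -X POST http://<historical-host>:8083/druid/v2 \
#        -H 'Content-Type: application/json' -d "$BODY"
```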



BTW, it works in my local environment with 1 historical node and 1 datasource, but doesn't work in production.

Egor, is this second query sent directly to the historicals?


I’ve seen weird cases where querying the historicals directly doesn’t yield any query results!

This might be a historical-query-directly problem and not a timeBoundary problem. Do any queries work when hitting historicals directly?

No queries return results (tried timeseries also).

I wonder if someone has an idea how to reproduce it in a local environment and find the root cause?

Hi Egor,

I've faced this issue previously as well. However, I've noticed that the historical does give back results, but only when all the chunks for the specified time interval are served by that historical. Have you tried spawning multiple historical processes on a local host to check whether the issue is reproducible?
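For that local experiment, a second historical can share the host if it gets its own port and segment cache. A minimal sketch, assuming a stock 0.11.0 layout; all paths, sizes, and the launch command here are assumptions to adapt:

```shell
# Give the second instance its own config directory, with a distinct
# port and segment-cache location so the two processes don't collide:
mkdir -p /tmp/druid/historical-2
cat <<'EOF' > /tmp/druid/historical-2/runtime.properties
druid.service=druid/historical
druid.port=8084
druid.segmentCache.locations=[{"path":"/tmp/druid/historical-2/segment-cache","maxSize":10000000000}]
druid.server.maxSize=10000000000
EOF
# Start it alongside the first historical with this directory on the
# classpath, e.g. (adjust to your installation layout):
#   java -cp "conf/druid/_common:/tmp/druid/historical-2:lib/*" \
#        io.druid.cli.Main server historical
```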



Nope, unfortunately, tests don’t confirm it.

Meanwhile, investigation in prod shows that specifying a correct partition number also returns an empty result, i.e.:

curl -X POST http://$1/druid/v2 -d '{
  "queryType": "timeseries",
  "dataSource": "'$2'",
  "granularity": "hour",
  "intervals": {
    "type": "segments",
    "segments": [
      {
        "itvl": "<interval>",
        "ver": "<version>",
        "part": '$4'
      }
    ]
  },
  "aggregations": [
    { "type": "count", "name": "count" }
  ]
}' -H "Content-Type: application/json"
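To separate the segment descriptor from the data itself, the same hour can be re-queried with a plain ISO interval; if that also comes back empty, the "segments" spec is not the culprit. A sketch (the <...> placeholders and the historical port are assumptions):

```shell
TS_BODY='{
  "queryType": "timeseries",
  "dataSource": "<datasource>",
  "granularity": "hour",
  "intervals": ["2018-09-28T00:00:00.000/2018-09-28T01:00:00.000"],
  "aggregations": [ { "type": "count", "name": "count" } ]
}'
# Validate the body locally before sending:
echo "$TS_BODY" | jq -e '.intervals[0]'
# Then post it straight to the historical and compare with the
# segments-spec variant:
#   curl -s -X POST http://<historical-host>:8083/druid/v2 \
#        -H 'Content-Type: application/json' -d "$TS_BODY"
```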

Still, it works for some other datasources; I checked that the ShardSpec type is the same (hashed).

BTW, restarting the node doesn't solve the issue.