Exception on Druid's internal TimeBoundaryQuery

Hey,

I’m getting this exception a few times a day on all broker nodes, without issuing any Time Boundary queries:

2016-08-17T20:46:12,190 WARN [HttpClient-Netty-Worker-1] com.metamx.http.client.NettyHttpClient - [POST http://172.31.8.133:8083/druid/v2/] Channel disconnected before response complete
2016-08-17T20:46:12,191 WARN [qtp335466988-44] io.druid.server.QueryResource - Exception occurred on request [TimeBoundaryQuery{dataSource='gwiq', querySegmentSpec=MultipleIntervalSegmentSpec{intervals=[0000-01-01T00:00:00.000Z/3000-01-01T00:00:00.000Z]}, duration=PT94670899200S, bound=maxTime}]
com.metamx.common.RE: Failure getting results from[http://172.31.8.133:8083/druid/v2/] because of [org.jboss.netty.channel.ChannelException: Channel disconnected]

Full stack trace here:

http://pastebin.com/raw/GU6vzABK

I assume they are issued internally by Druid, because I don’t use them myself.

Any idea how to get rid of it? What might be the cause? I’m using the implydata 1.2.1 distribution.

It looks like the broker is having trouble talking to one of the historicals. Do you see any errors in your historical logs?

Druid doesn’t issue that query (or any other) on its own. Perhaps you have an app running somewhere making those queries (Pivot does them periodically for example).

No, there are no errors on the historicals, and no errors anywhere except the “disconnect” one. And yes, the request is issued by Pivot; it happens regularly, every time Pivot does that.

Is there any configuration I could change perhaps?

Pivot is currently rather liberal in the number of timeBoundary queries it issues under the default configuration. This is being addressed for the next release, but until then you can reduce how many timeBoundary queries are made by changing your dataCubes to use refreshRule: { type: "realtime" } instead of “query” for any Druid datasources that are realtime.
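For reference, a minimal sketch of what that change might look like in Pivot’s config.yaml; the cube name and source here just follow the `gwiq` datasource from the log above, and the exact surrounding keys may differ by Pivot version:

```yaml
dataCubes:
  - name: gwiq            # illustrative; use your own cube name
    source: gwiq          # the realtime Druid datasource
    refreshRule:
      type: realtime      # default is "query", which triggers periodic timeBoundary (maxTime) lookups
```

With type "realtime", Pivot assumes the datasource is always up to date instead of polling Druid for its max time, which is what removes the recurring timeBoundary queries.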