Transfer encoding: chunked data transfer from Historical to Broker

Hello,

Hope everyone is staying safe. I need some help with query optimization. Our daily segments are around 700-800 MB, with an average of 6 shards. The query sorts on a metric, and we run a RocksDB-based lookup extraction on the Historical side (https://github.com/yahoo/maha/tree/master/druid-lookups: High Cardinality Dimension Lookup). The query selects 3 metric fields and 2 dimension fields that go through the lookup, ordered by a metric; the shape is roughly like the sketch below.
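
For reference, a simplified sketch of the query shape (datasource, column, and lookup names here are made up, and the real extraction function comes from the maha druid-lookups extension rather than the stock registeredLookup shown below):

{
  "queryType": "groupBy",
  "dataSource": "example_datasource",
  "intervals": ["2020-04-01/2020-04-02"],
  "granularity": "all",
  "dimensions": [
    {
      "type": "extraction",
      "dimension": "advertiser_id",
      "outputName": "advertiser_name",
      "extractionFn": { "type": "registeredLookup", "lookup": "advertiser_lookup" }
    },
    {
      "type": "extraction",
      "dimension": "campaign_id",
      "outputName": "campaign_name",
      "extractionFn": { "type": "registeredLookup", "lookup": "campaign_lookup" }
    }
  ],
  "aggregations": [
    { "type": "longSum", "name": "impressions", "fieldName": "impressions" },
    { "type": "longSum", "name": "clicks", "fieldName": "clicks" },
    { "type": "doubleSum", "name": "spend", "fieldName": "spend" }
  ],
  "limitSpec": {
    "type": "default",
    "limit": 1000,
    "columns": [ { "dimension": "impressions", "direction": "descending", "dimensionOrder": "numeric" } ]
  },
  "context": { "groupByStrategy": "v2", "applyLimitPushDown": false }
}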

"context":{"applyLimitPushDown":false,"defaultTimeout":300000,"finalize":false,"fudgeTimestamp":"1585699200000","groupByIsSingleThreaded":true,"groupByOutermost":false,"groupByStrategy":"v2","maxScatterGatherBytes":9223372036854775807,"queryId":"pranavbhole_05","timeout":299915,"uncoveredIntervalsLimit":1,"userId":"pranavbhole"},"descending":false} {"query/time":23193,"query/bytes":174854422,"success":true}

Above are the query stats from the Historical: scanning the segments took about 23 seconds, and roughly 175 MB (174,854,422 bytes) was sent to the Broker. The Historical is also doing the lookup work. The Broker receives this data as a stream using chunked transfer encoding (up to 64 KB per chunk at most; in practice I see ~17 KB chunks, so on the order of 10,000 chunks for this response). Because of this streaming, the Broker took another 13 seconds to receive the response fully. Do you have any suggestions to optimize this data transfer? I am seeing the following logs in debug mode. Can we increase the chunk size?

2020-04-25 00:54:32,656 DEBUG c.m.h.c.NettyHttpClient [HttpClient-Netty-Worker-68] [POST https://hostname:port/druid/v2/] Got chunk: 17022B, last=false

2020-04-25 00:54:32,658 DEBUG c.m.h.c.NettyHttpClient [HttpClient-Netty-Worker-68] [POST https://hostname:port/druid/v2/] messageReceived: org.jboss.netty.handler.codec.http.DefaultHttpChunk@3eb902af
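
For context, these are the Broker-side HTTP client settings I'm aware of from the Apache Druid docs (values shown are the documented defaults as I understand them; our build still uses the com.metamx Netty client, so I'm not sure all of them apply), and I haven't found one that controls the chunk size itself:

# Broker -> Historical HTTP client (documented defaults, per current Druid docs)
druid.broker.http.numConnections=20
druid.broker.http.compressionCodec=gzip
druid.broker.http.readTimeout=PT15M
druid.broker.http.unusedConnectionTimeout=PT4M
druid.broker.http.maxQueuedBytes=0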

Thank you