[druid-user] how do I round up in query granularity?

I am ingesting data from Kafka.

In my granularity spec, I wrote:

"granularitySpec": {
  "segmentGranularity": "day",
  "queryGranularity": "fifteen_minute",
  "rollup": true
}

I noticed that the __time column gets rounded down instead of up; however, our business logic requires us to round it up.

Is there a way to configure this sort of behavior or achieve this sort of behavior?

The queryGranularity setting supports a number of truncation granularities.

I don't believe there's any "round up" option per se, since this is not rounding but truncation – if that makes sense?
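In other words, each __time value is floored to the start of its bucket. A minimal plain-Python illustration of what a fifteen_minute queryGranularity does (not Druid code; the timestamps are made up):

```python
from datetime import datetime

def truncate_15min(ts):
    """Floor a timestamp to the start of its 15-minute bucket,
    mimicking queryGranularity: fifteen_minute."""
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)

# 10:07:30 falls in the [10:00, 10:15) bucket, so it is floored to 10:00
print(truncate_15min(datetime(2023, 5, 1, 10, 7, 30)))  # 2023-05-01 10:00:00
```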

I wonder if you might be able to do a calculation on your timestamp using a transform – whether to create a new dimension that is rounded up, or even to replace the time value completely with what you need… maybe timestamp_ceil?

Hi Peter,

Again, thank you. That seems like a good idea; however, I am also using a stringLast aggregator, and I think a timestamp transformation would destroy its correctness.

Perhaps there is a way to get the stringLast aggregator to use the untransformed timestamp? I hope a custom JavaScript aggregator is not the answer.

Hmmm, it's all down to the order that things get processed in, I guess… I believe queryGranularity doesn't actually truncate the timestamp until right before the roll-up happens – so you could maybe generate a second time dimension using a ceiling function on the incoming timestamp, and just leave queryGranularity happily truncating…?
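As a sketch, such a transform might look like this (the dimension name `rounded_time` is made up for illustration; `timestamp_ceil` is a Druid expression function):

```json
"transformSpec": {
  "transforms": [
    {
      "type": "expression",
      "name": "rounded_time",
      "expression": "timestamp_ceil(__time, 'PT15M')"
    }
  ]
}
```

This leaves __time itself untouched, so roll-up and any last-value logic keyed on the original timestamp would be unaffected.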

When you say you’re using stringLast – is that in the metricsSpec to generate a roll-up metric? I had thought stringLast wasn’t possible at ingestion time…

Hi Marco,

In regard to your question about using the stringLast aggregator on untransformed timestamps, do you mean that there would be no rollup? I ask because the First / Last aggregators “cannot be used in ingestion spec, and should only be specified as part of queries.”

Also, for the sake of my own clarity as I read your questions, does your business logic require two timestamps? An untransformed __time and something else?



The documentation says:

(Double/Float/Long) First and Last aggregator cannot be used in ingestion spec, and should only be specified as part of queries.

I want to use stringLast. That will work, right?

My business logic does not require two time stamps.

I just need the timestamp to round up instead of truncating, and I also need to calculate the value with the largest timestamp in that interval (stringLast).

stringLast should take the string with the latest timestamp before rollup – i.e., before truncation of the timestamp. Is that not what you're seeing?
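What that means mechanically, as a toy plain-Python simulation of roll-up with a stringLast-style metric (the row data is made up for illustration; this is not Druid internals):

```python
from datetime import datetime

# Toy rows: (raw timestamp, string value)
rows = [
    (datetime(2023, 5, 1, 10, 3), "a"),
    (datetime(2023, 5, 1, 10, 14), "b"),
    (datetime(2023, 5, 1, 10, 20), "c"),
]

def floor_15(ts):
    # Truncate to the start of the 15-minute bucket (queryGranularity)
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)

# Group by truncated bucket, then keep the value with the latest RAW
# timestamp in each bucket - conceptually what a stringLast rollup does.
buckets = {}
for ts, val in rows:
    key = floor_15(ts)
    if key not in buckets or ts > buckets[key][0]:
        buckets[key] = (ts, val)

result = {k: v for k, (ts, v) in sorted(buckets.items())}
```

Here the 10:00 bucket keeps "b" (raw timestamp 10:14 beats 10:03), and the 10:15 bucket keeps "c" – the latest raw timestamp wins within each truncated interval.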

Hi Ben,

I’ll recap this thread first before I answer.

1. Pre-aggregation in Druid truncates the timestamp; however, my business logic requires me to round up.
2. Peter suggested that I can transform the incoming timestamp and round it up during ingestion with a transformSpec.
3. Now I am stating my concern: my aggregation also requires the stringLast function, and a transformSpec on the timestamp may break its correctness.

I still need to experiment and see what happens.

But ultimately, my question now is: if I use a transformSpec on the __time column, is it still possible for stringLast to use the original timestamp in its calculation?

Thank you, that clarifies it for me. If you transform __time to __time + yourInterval, then it might truncate down to the "rounded up" value, and stringLast should hopefully come out the same as before.

I am going to add some closure to this discussion.

This is my solution for rounding up in query granularity:

"transformSpec": {
  "transforms": [
    {
      "type": "expression",
      "name": "__time",
      "expression": "timestamp_ceil(__time, 'PT15M') + __time - timestamp_floor(__time, 'PT15M')"
    }
  ]
}
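To sanity-check the arithmetic: the expression maps each timestamp to ceil(t) plus t's offset within its 15-minute bucket, so query-time truncation lands on the rounded-up boundary while the ordering of rows inside a bucket is preserved (which is what stringLast relies on). A plain-Python sketch of that reasoning, not Druid code:

```python
from datetime import datetime, timedelta

def floor_15(ts):
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)

def ceil_15(ts):
    f = floor_15(ts)
    return f if f == ts else f + timedelta(minutes=15)

def transform(ts):
    # Mirrors: timestamp_ceil(__time,'PT15M') + __time - timestamp_floor(__time,'PT15M')
    return ceil_15(ts) + (ts - floor_15(ts))

t = datetime(2023, 5, 1, 10, 7, 30)
new_t = transform(t)  # 10:15 boundary + 7m30s offset = 10:22:30

# Truncating the transformed time now yields the rounded-UP boundary of t
assert floor_15(new_t) == ceil_15(t)

# Ordering within the bucket is preserved, so stringLast still picks the latest row
t2 = datetime(2023, 5, 1, 10, 14, 0)
assert transform(t) < transform(t2)
```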

I learned this technique from this mailing list, and also the following links:


It seems to be working.

Everybody, thank you for helping.


ha! I’m glad you got an answer! @Ben_Krug yet again being helpful as opposed to me hahaha!

That's an interesting example – thanks for pasting your solution 🙂