How to change the timezone in Druid?

Hi Guys,

We are using Druid with the UTC timezone, but recently we needed to change the timezone from UTC to IST (Asia/Kolkata).

And we are using the PlyQL MySQL server implementation to fetch data.

As per the docs, http://druid.io/docs/latest/configuration/index.html, we changed the timezone of all processes by changing -Duser.timezone=UTC to -Duser.timezone=Asia/Kolkata

http://joda-time.sourceforge.net/timezones.html

But the datasources are still being created in the UTC timezone. Is there anything we are missing here?

Hi Guys,

Could you please give an update? We are waiting for a response.

Hey Lovenish,

Druid servers only support being run in the UTC timezone. You can get weird behavior if you use a different one. So I suggest leaving that at UTC. Even if the servers are running in UTC, you can still query for any time zone you want using Druid’s JSON language. Check out the “timeZone” field on granularities: http://druid.io/docs/latest/querying/granularities.html

If you’re using PlyQL then try the --timezone flag, which will control the timezone that is used for granular bucketing. In the future it will also control the timezone used for displaying timestamps. Watch https://github.com/implydata/plyql/issues/69 for that.
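To make the suggestion above concrete, here is a sketch of a timeseries query that buckets days in IST using a period granularity with the "timeZone" field (the datasource name and interval are illustrative, not from the original thread):

```json
{
  "queryType": "timeseries",
  "dataSource": "my_datasource",
  "intervals": ["2016-01-01T00:00:00+05:30/2016-01-02T00:00:00+05:30"],
  "granularity": {
    "type": "period",
    "period": "P1D",
    "timeZone": "Asia/Kolkata"
  },
  "aggregations": [{"type": "count", "name": "count"}]
}
```

With this granularity, each P1D bucket starts at midnight Asia/Kolkata time, even though the servers themselves keep running in UTC.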

Hi Gian,

I want to create segments with DAY segment and query granularity in my timezone. How do I do that?

I am submitting a batch ingestion job, and I changed these params in jobProperties:

"mapreduce.map.java.opts": "-Xmx1024m -Duser.timezone=UTC -Dfile.encoding=UTF-8",
"mapreduce.reduce.java.opts": "-Xmx8192m -Duser.timezone=UTC -Dfile.encoding=UTF-8",

->

"mapreduce.map.java.opts": "-Xmx1024m -Duser.timezone=Asia/Singapore -Dfile.encoding=UTF-8",
"mapreduce.reduce.java.opts": "-Xmx8192m -Duser.timezone=Asia/Singapore -Dfile.encoding=UTF-8",

"intervals": [
  "2017-02-02T00:00:00.000+08:00/2017-02-03T00:00:00.000+08:00"
]

I set this in both the parser and the inputSpec, but it still can't create a segment in my timezone.


@Chanh - There is a PR to help create segments in any desired timezone - https://github.com/druid-io/druid/pull/3850
Should be merged soon. I’m already using that in production to create segments in America/Los_Angeles time.
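For reference, once a Druid build that includes that PR is available, the granularitySpec should accept a full period granularity object (with a timeZone) in place of the simple string form. A sketch under that assumption, reusing the interval from the question above (field values are illustrative):

```json
"granularitySpec": {
  "type": "uniform",
  "segmentGranularity": {
    "type": "period",
    "period": "P1D",
    "timeZone": "Asia/Singapore"
  },
  "queryGranularity": "none",
  "intervals": ["2017-02-02T00:00:00.000+08:00/2017-02-03T00:00:00.000+08:00"]
}
```

With this spec, day segments are aligned to midnight in Asia/Singapore rather than midnight UTC, and there is no need to change -Duser.timezone on the MapReduce tasks.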

The Druid server is running in the UTC timezone.

However, when querying my data in Asia/Kolkata (UTC+05:30), I see the same counts returned as when I query in Africa/Abidjan (UTC+00:00).

Asia/Kolkata query JSON:

{
  "intervals": "2017-02-26T10:12:16+05:30/2017-02-28T10:12:16+05:30",
  "dimensions": [],
  "granularity": {"timezone": "Asia/Kolkata", "type": "period", "period": "P1D"},
  "post_aggregations": {},
  "aggregations": [{"type": "count", "name": "count"}],
  "dataSource": "orderdata",
  "queryType": "timeseries"
}

Response:

[ {
  "timestamp" : "2017-02-26T00:00:00.000Z",
  "result" : {
    "count" : 24
  }
}, {
  "timestamp" : "2017-02-27T00:00:00.000Z",
  "result" : {
    "count" : 35
  }
}, {
  "timestamp" : "2017-02-28T00:00:00.000Z",
  "result" : {
    "count" : 17
  }
} ]

Africa/Abidjan query JSON:

{
  "intervals": "2017-02-26T10:12:16+00:00/2017-02-28T10:12:16+00:00",
  "dimensions": [],
  "granularity": {"timezone": "Africa/Abidjan", "type": "period", "period": "P1D"},
  "post_aggregations": {},
  "aggregations": [{"type": "count", "name": "count"}],
  "dataSource": "orderdata",
  "queryType": "timeseries"
}

Response:

[ {
  "timestamp" : "2017-02-26T00:00:00.000Z",
  "result" : {
    "count" : 24
  }
}, {
  "timestamp" : "2017-02-27T00:00:00.000Z",
  "result" : {
    "count" : 35
  }
}, {
  "timestamp" : "2017-02-28T00:00:00.000Z",
  "result" : {
    "count" : 17
  }
} ]

I'm referring to the data for 27th Feb. It's the same as when I query in UTC. Is this the expected behaviour?
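One thing worth checking: in the Druid period granularity spec the field is spelled "timeZone" (camelCase), and the lowercase "timezone" in the queries above may simply be ignored, which would explain why both queries bucket at midnight UTC. A sketch of the Asia/Kolkata query with the camelCase spelling (same datasource and interval as above):

```json
{
  "queryType": "timeseries",
  "dataSource": "orderdata",
  "intervals": "2017-02-26T10:12:16+05:30/2017-02-28T10:12:16+05:30",
  "granularity": {"type": "period", "period": "P1D", "timeZone": "Asia/Kolkata"},
  "aggregations": [{"type": "count", "name": "count"}]
}
```

If the timezone is being applied, the returned bucket timestamps should start at 18:30Z (midnight IST) rather than 00:00Z.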

How can I use granularity type "all" while also setting a timezone?

I see two usages of granularity, but it seems I can't combine both:

  1. "granularity": "all"

  2. "granularity": {"timezone": "Asia/Kolkata", "type": "period", "period": "P1D"}
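As far as I understand, "all" is a simple string granularity that returns a single bucket spanning the whole query interval, so there is no time bucketing for a timezone to shift; instead, you express the timezone through the offsets in the intervals themselves. A sketch along those lines (interval values are illustrative):

```json
{
  "queryType": "timeseries",
  "dataSource": "orderdata",
  "granularity": "all",
  "intervals": ["2017-02-26T00:00:00+05:30/2017-02-28T00:00:00+05:30"],
  "aggregations": [{"type": "count", "name": "count"}]
}
```

This counts all rows between midnight IST on the 26th and midnight IST on the 28th in one bucket, with no per-day breakdown.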

On Tuesday, 28 February 2017 at 18:37:49 UTC+8, Rushabh Nagda wrote: