Reducing the data size with coarser query granularity - not happening

Production Setup:
Druid 0.9.2 (since Tranquility doesn't support 0.10 yet, we can't upgrade until it does)

Tranquility 0.8.2

250 million events per day.

Ingestion spec:

"dataSchema" : {
  "dataSource" : "raw_events",
  "granularitySpec" : {
    "type" : "uniform",
    "segmentGranularity" : "day",
    "rollup": true,
    "queryGranularity" : "minute",
    "intervals" : [ "2017-08-03/2017-08-04" ]
  },

We are trying different combinations of granularity and dimensions to arrive at a reasonable segment size.

6 dimensions with MINUTE granularity end up being 250 MB per day

7 dimensions with MINUTE granularity end up being 450 MB per day

60 dimensions with MINUTE granularity end up being 5.64 GB per day

I do understand this increase: as we add dimensions, rollup produces many more rows, depending on their cardinality.

So when we reduced the query granularity to fifteen_minute, we expected the data to shrink roughly 15x, since each fifteen_minute bucket can collapse up to 15 one-minute rows and almost all of our dimensions take the same set of values evenly throughout the day.
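For these runs the spec is identical except for queryGranularity, e.g. the fifteen_minute variant:

"granularitySpec" : {
  "type" : "uniform",
  "segmentGranularity" : "day",
  "rollup": true,
  "queryGranularity" : "fifteen_minute",
  "intervals" : [ "2017-08-03/2017-08-04" ]
}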

But 60 dimensions with fifteen_minute granularity came out at 4.22 GB.

And 60 dimensions with thirty_minute granularity came out at 4.03 GB.

Are we missing something?

We ran these experiments with batch ingestion via Hadoop.

We also tried a different dimension set:

59 dimensions, minute granularity - 1.47 GB

59 dimensions, fifteen_minute granularity - 832 MB

59 dimensions, thirty_minute granularity - 729 MB
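To quantify how much rollup is actually happening at each granularity, one check we could run (a sketch, assuming we also ingest a count-type metric, say named "count", which is not shown in the spec above) is to compare the number of rows Druid stores against the number of raw events, with a native timeseries query along these lines:

{
  "queryType" : "timeseries",
  "dataSource" : "raw_events",
  "granularity" : "all",
  "intervals" : [ "2017-08-03/2017-08-04" ],
  "aggregations" : [
    { "type" : "count", "name" : "stored_rows" },
    { "type" : "longSum", "name" : "ingested_events", "fieldName" : "count" }
  ]
}

The ratio ingested_events / stored_rows is the rollup ratio; if it barely changes between minute and fifteen_minute, the dimension combinations are already close to unique within each minute.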