Tranquility sent fine but query result is lower than expected

Druid-0.9.0-rc1

tranquility-v0.7.3

I am trying Tranquility with Kafka for index_realtime.
I started the Tranquility server and sent 11 messages to Kafka.

Here are the logs from Tranquility:

```
2016-02-18 15:28:42,905 [KafkaConsumer-CommitThread] INFO c.m.tranquility.kafka.KafkaConsumer - Flushed {metrics={receivedCount=11, sentCount=11, failedCount=0}} pending messages in 31021ms and committed offsets in 3ms.
```

The messages are generated by the bin/generate-example-metrics script, and all the timestamps are equal.

```json
{"unit": "milliseconds", "http_method": "GET", "value": 71, "timestamp": "2016-02-18T15:27:48Z", "http_code": "200", "page": "/", "metricType": "request/latency", "server": "www3.example.com"}
```

However, the query result is 7.

Here is the query.json:

```json
{
  "queryType" : "timeseries",
  "dataSource" : "metrics-kafka",
  "intervals" : ["2016-02-18/2016-02-20"],
  "granularity" : "minute",
  "dimension" : "server",
  "threshold" : 1000,
  "aggregations" : [
    { "type" : "count", "fileName" : "count", "name" : "access" }
  ]
}
```

And the result:

```json
{
  "timestamp" : "2016-02-18T15:27:00.000Z",
  "result" : {
    "access" : 7
  }
}
```

Here is my Tranquility config file:

```json
{
  "dataSources" : {
    "metrics-kafka" : {
      "spec" : {
        "dataSchema" : {
          "dataSource" : "metrics-kafka",
          "parser" : {
            "type" : "string",
            "parseSpec" : {
              "timestampSpec" : {
                "column" : "timestamp",
                "format" : "auto"
              },
              "dimensionsSpec" : {
                "dimensions" : [],
                "dimensionExclusions" : [
                  "timestamp",
                  "value"
                ]
              },
              "format" : "json"
            }
          },
          "granularitySpec" : {
            "type" : "uniform",
            "segmentGranularity" : "FIVE_MINUTE",
            "queryGranularity" : "none"
          },
          "metricsSpec" : [
            {
              "type" : "count",
              "name" : "count"
            },
            {
              "name" : "value_sum",
              "type" : "doubleSum",
              "fieldName" : "value"
            },
            {
              "fieldName" : "value",
              "name" : "value_min",
              "type" : "doubleMin"
            },
            {
              "type" : "doubleMax",
              "name" : "value_max",
              "fieldName" : "value"
            }
          ]
        },
        "ioConfig" : {
          "type" : "realtime"
        },
        "tuningConfig" : {
          "type" : "realtime",
          "maxRowsInMemory" : "100000",
          "intermediatePersistPeriod" : "PT2M",
          "windowPeriod" : "PT2M"
        }
      },
      "properties" : {
        "task.partitions" : "1",
        "task.replicants" : "1",
        "topicPattern" : "metrics"
      }
    }
  },
  "properties" : {
    "zookeeper.connect" : "localhost:2181",
    "druid.discovery.curator.path" : "/druid/discovery",
    "druid.selectors.indexing.serviceName" : "druid/overlord",
    "commit.periodMillis" : "15000",
    "consumer.numThreads" : "2",
    "kafka.zookeeper.connect" : "localhost:2181",
    "kafka.group.id" : "tranquility-kafka"
  }
}
```

My query.json might be wrong; that may be why the result is lower than the number of messages sent.

If so, please correct me.

Thanks in advance.

Hey Azrael,

As mentioned in “Counting the number of ingested events” on http://druid.io/docs/latest/ingestion/schema-design.html, try using a longSum aggregator with fieldName “count” rather than a count aggregator. At query time, a count aggregator counts Druid rows, and since events with the same timestamp and dimension values are rolled up into a single row at ingestion, that number can be lower than the number of ingested events.
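For reference, a corrected query might look like the sketch below (also note that the aggregator key is "fieldName", not "fileName", and that "dimension" and "threshold" are topN parameters which timeseries queries do not use):

```json
{
  "queryType" : "timeseries",
  "dataSource" : "metrics-kafka",
  "intervals" : ["2016-02-18/2016-02-20"],
  "granularity" : "minute",
  "aggregations" : [
    { "type" : "longSum", "fieldName" : "count", "name" : "access" }
  ]
}
```

Since your ingestion spec already defines a count metric named "count", summing it at query time should report all 11 ingested events rather than the 7 rolled-up rows.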

Oh, my bad!

Thanks Gian, it’s really helpful.