Broker API missing metrics and dimensions

I’m trying to get a list of dimensions and metrics defined for a datasource.

I’m using the API defined for the broker, documented here: http://druid.io/docs/latest/design/broker.html

I can get a list of datasources by using the API:

  • /druid/v2/datasources

The documentation then says that one can see a list of dimensions and metrics by using the API:

  • /druid/v2/datasources/{dataSourceName}
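
For reference, I'm hitting the endpoints roughly like this (the host, port, and datasource name are just placeholders):

  curl "http://<broker-host>:<broker-port>/druid/v2/datasources"
  curl "http://<broker-host>:<broker-port>/druid/v2/datasources/my_datasource"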

When I hit that second endpoint, however, the lists of dimensions and metrics come back empty:

{"dimensions":[],"metrics":[]}

Note that I can query the datasource just fine and get back lots of rolled-up data.
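
For example, a simple native query along these lines works against the broker (the datasource, interval, granularity, and aggregator below are just placeholders):

  curl -X POST -H "Content-Type: application/json" \
    "http://<broker-host>:<broker-port>/druid/v2/?pretty" \
    -d '{
      "queryType": "timeseries",
      "dataSource": "my_datasource",
      "granularity": "day",
      "intervals": ["2015-01-01/2016-01-01"],
      "aggregations": [{"type": "longSum", "name": "rows", "fieldName": "count"}]
    }'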

Here's the result I get from the /status endpoint:

{
  "version": "0.8.1",
  "modules": [
    {"name": "io.druid.firehose.s3.S3FirehoseDruidModule", "artifact": "druid-s3-extensions", "version": "0.8.1"},
    {"name": "io.druid.metadata.storage.postgresql.PostgreSQLMetadataStorageModule", "artifact": "postgresql-metadata-storage", "version": "0.8.1"},
    {"name": "io.druid.storage.s3.S3StorageDruidModule", "artifact": "druid-s3-extensions", "version": "0.8.1"}
  ],
  "memory": {"maxMemory": 1065025536, "totalMemory": 1065025536, "freeMemory": 579457696, "usedMemory": 485567840}
}

Is this fixed in a later version?

What am I missing?

That endpoint doesn't work for realtime nodes; it only covers data on historicals. You should look into using the segment metadata query instead. It's what Pivot (https://github.com/implydata/pivot) uses for datasource introspection.
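
A minimal segment metadata query, POSTed to the broker, looks roughly like this (the datasource name and interval are placeholders):

  curl -X POST -H "Content-Type: application/json" \
    "http://<broker-host>:<broker-port>/druid/v2/?pretty" \
    -d '{
      "queryType": "segmentMetadata",
      "dataSource": "my_datasource",
      "intervals": ["2015-01-01/2016-01-01"],
      "merge": true
    }'

The response describes each column and its type, so the dimensions and metrics can be read off from there.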

I have this problem too, and all of my segments have been on historical nodes for a long time… I don't use a real-time node at all.

I ran a Hadoop indexing task using the distribution-docker container.

What version of Druid? Try adding ?interval=0/3000 to the end of your GET request.
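
That is, something along these lines (the host, port, and datasource name are placeholders):

  curl "http://<broker-host>:<broker-port>/druid/v2/datasources/my_datasource?interval=0/3000"

The idea is that the endpoint may only report on a recent window of segments by default, so an explicitly wide interval can make the dimension and metric lists show up.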

I used version 0.12.3 and ran into the same problem. How did you solve it?

On Wednesday, February 3, 2016 at 7:50:37 AM UTC+8, Chris Jones wrote:

These days, the Druid SQL metadata tables are a better way to get this info. Check out http://druid.io/docs/latest/querying/sql and try a query like: SELECT * FROM INFORMATION_SCHEMA.COLUMNS;
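
For example, against the broker's SQL endpoint (SQL has to be enabled on the broker; the host, port, and datasource name here are placeholders):

  curl -X POST -H "Content-Type: application/json" \
    "http://<broker-host>:<broker-port>/druid/v2/sql/" \
    -d '{"query": "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '\''my_datasource'\''"}'

This returns one row per column of the datasource, covering both dimensions and metrics.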