Configure Druid for monitoring with Prometheus


I’m working on setting up a test environment that will ultimately make its way to production. However, I’m running into a snag: I’m unable to configure monitoring with Prometheus.

Working with the micro-quickstart configuration, I’ve included the following settings in my file.


Then, I visit the following URLs and no data is available:


I would appreciate help with the following questions:

  • Does anything look “off” in those config settings? If so, how can I get this working properly?
  • Are these the correct endpoints to expect a JSON file of available metrics and dimensions?
  • Would I use these URLs in the settings for an exporter, or is there another URL that should be included? See the link below for the exporter reference.

Hi @dcsettings,

In order to use Prometheus, a better starting point is this doc:

In it you’ll see how to set the emitter property along with the corresponding parameters, and there’s a link to where you can download and set up a Prometheus server.

You will also need to download the extension using the pull-deps tool and add it to the druid.extensions.loadList config parameter.
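As a rough sketch of what that doc walks through, the common properties could end up looking something like this (property names are from the prometheus-emitter extension docs; the port value is just an example, and your loadList will contain whatever other extensions you already use):

```properties
# Load the Prometheus emitter extension (alongside your existing extensions)
druid.extensions.loadList=["prometheus-emitter"]

# Route metrics through the Prometheus emitter
druid.emitter=prometheus

# "exporter" exposes an HTTP /metrics endpoint for Prometheus to scrape
druid.emitter.prometheus.strategy=exporter

# Example port for the exporter endpoint; pick any free port
druid.emitter.prometheus.port=19090
```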

One thing that tripped me up initially is that some monitors that you add to the druid.monitoring.monitors list are process specific and should only be added to the corresponding process’ runtime.properties file.
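For example (a sketch; the monitor class names are from the Druid metrics docs, and the file placement is an assumption based on the quickstart layout), a process-agnostic monitor like JvmMonitor can live in the common file, while a process-specific one like TaskCountStatsMonitor belongs only with the Overlord:

```properties
# common.runtime.properties — safe for all processes
druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor"]

# overlord/runtime.properties — Overlord only (would override the common list)
# druid.monitoring.monitors=["org.apache.druid.java.util.metrics.JvmMonitor",
#                            "org.apache.druid.server.metrics.TaskCountStatsMonitor"]
```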

Hi @Sergio_Ferragut, thank you for the reply… I found my way to this doc, so it’s nice to hear confirmation that it’s the better approach.

I got stuck on the step of “downloading the extension” to my environment. Would you be able to provide guidance on the appropriate command, as I keep hitting the following error:

Error: Could not find or load main class org.apache.druid.cli.Main
Caused by: java.lang.ClassNotFoundException: org.apache.druid.cli.Main

Just tried this out on my laptop and it worked with this running from the druid install path:

java \
  -cp "lib/*" \
  -Ddruid.extensions.directory="extensions" \
  -Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" \
  org.apache.druid.cli.Main tools pull-deps \
  --no-default-hadoop \
  -c "org.apache.druid.extensions.contrib:prometheus-emitter:24.0.0"

Hi again @Sergio_Ferragut thank you for that command, it definitely worked as expected. Now that I have the prometheus emitter in the extensions, I’m hitting an unexpected issue.

Just so we’re on the same page, I’ve only downloaded and set up this quickstart using Java 11.

Then, I modified the Monitoring section of the common properties file in the micro-quickstart directory with the settings below. However, I get an error whenever I try starting Druid with the following set:


Here’s the error log; it seems prometheus is an unknown emitter:

1) Error injecting method, Unknown emitter type[druid.emitter]=[prometheus], known types[[noop, logging, http, parametrized, composing]]

So, I’m wondering either:

  • Where’s the correct location to declare the prometheus emitter?
  • Is there another approach to setting up prometheus based on this doc?

I tried setting the prometheus-emitter in my extensions load list without declaring anything prometheus related in the monitoring section of the common properties file:

druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches", "druid-multi-stage-query", "prometheus-emitter"]

However, I’m not sure if this does anything because I don’t know where the metrics would be accessible.

Is there a particular end-point where the metrics will be available for prometheus to scrape? I guess it’d be good to clarify this for either approach.

Have you set up the Prometheus server? You’ll also need to configure the emitter properties. Specifically, you’ll need to set druid.emitter.prometheus.port. Also, if you look here, you’ll find the following sample Prometheus config:

  global:
    scrape_interval:     15s
    evaluation_interval: 15s

  rule_files:
    # - "first.rules"
    # - "second.rules"

  scrape_configs:
    - job_name: prometheus
      static_configs:
        - targets: ['localhost:9090']

I think you’ll need to use the Druid hosts that are emitting metrics and the port that you selected in the above setting.
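For instance, a scrape job pointed at a quickstart Druid process might look like this (the hostname and port here are assumptions — use whatever host runs Druid and whatever value you set for druid.emitter.prometheus.port):

```yaml
scrape_configs:
  - job_name: druid
    static_configs:
      - targets: ['localhost:19090']
```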

@Sergio_Ferragut and I attempted the same configuration you did by using the micro-quickstart, but we also added the port setting and set it to 19090. Once everything was running, we were able to see metrics with curl http://localhost:19090/metrics.

Unfortunately, we never saw the error that you reported, so we’re not entirely clear what that’s about.

We ran this test without setting up a Prometheus server. Prometheus would poll the metrics URL to read Druid metrics.

One additional note. If this is done with a single-server setup like the micro-quickstart, the druid.emitter.prometheus.port property needs to be set in each process’ runtime.properties, each with a different port. Otherwise they collide and you only get the first process that starts up on that port. The Prometheus scrape_configs would then need to list all the targets, one for each emitting process (broker, historical, middleManager, etc.), each with a different port.
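A sketch of what that could look like (the port numbers are arbitrary examples, and the quickstart file paths are assumptions):

```properties
# broker/runtime.properties
druid.emitter.prometheus.port=19090

# historical/runtime.properties
druid.emitter.prometheus.port=19091

# middleManager/runtime.properties
druid.emitter.prometheus.port=19092
```

The matching prometheus.yml scrape job would then list localhost:19090, localhost:19091, and localhost:19092 as its targets.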

Hi @Sergio_Ferragut and @Mark_Herrera I just wanted to reach out and say thank you for the guidance, I was able to get this working now :slightly_smiling_face:


Hi @Sergio_Ferragut and @dcsettings, I was reading through this forum since I’m doing the same thing. The only issue is that I get a 404 error from Prometheus (and curl too), and I’m not sure what configuration I’m missing.
I added the exporter to the extensions, ran the java command, added the config to the common properties file, and added my target to prometheus.yml. Maybe you guys can help me out. Any suggestions?

Hi @CiaraKey ,

How is Druid deployed? single server or cluster?

Did you configure the emitter properties and specifically the druid.emitter.prometheus.port property?
Which URL are you curl’ing to test?

It’s a single server (micro-quickstart on one node). I added some configs but not the port, since I thought it wasn’t mandatory (the default should be 8080, right?). I access the console via nodename:8888 and did a curl on nodename:8080/metrics from my Prometheus server/node. What port should I use? In prometheus.yml I added the target with port 8080 (which should be the exporter’s port).
In the Druid UI I can see that the emitter has been loaded as an extension.