Use Prometheus, Grafana, and Consul to monitor job service metrics
This tutorial explains how to configure Prometheus and Grafana to integrate with a Consul service mesh deployed with Nomad. While this tutorial introduces the basics of enabling mesh telemetry, you can also use this data to build custom dashboards and to drive alerting and autoscaling rules.
When deploying a service mesh using Nomad and Consul, one of the benefits is the ability to collect service-to-service traffic telemetry emitted by Envoy sidecar proxies. This includes data such as request count, traffic rate, connections, response codes, and more.
In this tutorial, you will deploy Grafana and Prometheus within the mesh, set up intentions, and configure an ingress to enable access. You will configure Consul service discovery for targets in Prometheus so that services are automatically scraped as they are deployed. A Consul ingress gateway will load-balance the Prometheus deployment and provide access to the web interfaces of Prometheus and Grafana on ports `8081` and `3000`, respectively.
Prometheus telemetry on Envoy sidecars
You can configure the Prometheus telemetry either globally in Consul using `proxy-defaults` or per service within the Nomad job specification. This tutorial covers configuration within the Nomad jobspec.
For comparison and reference, you can enable proxy metrics globally in a Consul datacenter with the following configuration and the Consul CLI command `consul config write ./<path_to_configuration_file>`.
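As a sketch, a global `proxy-defaults` configuration entry that exposes Envoy's Prometheus endpoint might look like the following; the bind address and port here are assumptions, chosen to match the `9102` port used later in this tutorial.

```hcl
# proxy-defaults.hcl -- applies to every sidecar proxy in the datacenter.
Kind = "proxy-defaults"
Name = "global"

Config {
  # Expose Envoy's Prometheus metrics endpoint on this address and port.
  envoy_prometheus_bind_addr = "0.0.0.0:9102"
}
```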
Prerequisites
For this tutorial, you will need:
- A Nomad environment with Consul installed. The Nomad project provides Terraform configuration to deploy a cluster on AWS.
Ensure that the `NOMAD_ADDR` and `CONSUL_ADDR` environment variables are set appropriately.
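For example, assuming the default API ports (`4646` for Nomad, `8500` for Consul) and placeholder server addresses:

```shell-session
$ export NOMAD_ADDR="http://<nomad_server_ip>:4646"
$ export CONSUL_ADDR="http://<consul_server_ip>:8500"
```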
Create the Nomad jobs
Use the jobspec files below to create jobs for:
- two web applications to simulate traffic flows between Envoy proxies
- an ingress controller to monitor traffic coming into the mesh
- Prometheus to collect the Envoy metrics
- Grafana to act as a visualization frontend for Prometheus
Create the `foo` web application job
The first web application job configures a "foo" service. Take note of these three specific configurations:
- A dynamic port that maps to port `9102`, the port where the Envoy sidecar exposes its Prometheus metrics.
- A `meta` attribute set in the `service` block that uses the dynamic port. This port will be present in the Consul service registration that Prometheus will use to discover the proxy.
- A `sidecar_service` configuration to bind the Prometheus endpoint to the dynamic port.
Create a file with the name `foo.nomad.hcl`, add the following contents to it, and save the file.
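The following is a minimal sketch of what such a jobspec could look like; the datacenter name, application image, and application port are assumptions, not values prescribed by this tutorial.

```hcl
job "foo" {
  datacenters = ["dc1"]

  group "foo" {
    network {
      mode = "bridge"

      # Dynamic host port that forwards to Envoy's Prometheus
      # endpoint inside the allocation's network namespace.
      port "metrics" {
        to = 9102
      }
    }

    service {
      name = "foo"
      port = "9090"

      # Publish the dynamic metrics port in the Consul service
      # registration so Prometheus can discover it.
      meta {
        metrics_port = "${NOMAD_HOST_PORT_metrics}"
      }

      connect {
        sidecar_service {
          proxy {
            config {
              # Bind Envoy's Prometheus endpoint to the dynamic port.
              envoy_prometheus_bind_addr = "0.0.0.0:9102"
            }
          }
        }
      }
    }

    task "web" {
      driver = "docker"

      config {
        # Placeholder application image -- substitute your own workload.
        image = "hashicorp/http-echo:latest"
        args  = ["-listen", ":9090", "-text", "foo"]
      }
    }
  }
}
```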
Submit the job to Nomad.
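For example:

```shell-session
$ nomad job run foo.nomad.hcl
```

The remaining jobs in this tutorial are submitted the same way with their respective file names.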
Create the `bar` web application job
The `bar` service jobspec is similar to the `foo` service jobspec.
Create a file with the name `bar.nomad.hcl`, add the following contents to it, and save the file.
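A minimal sketch, mirroring the `foo` sketch above with only the names and the echoed text changed:

```hcl
job "bar" {
  datacenters = ["dc1"]

  group "bar" {
    network {
      mode = "bridge"

      port "metrics" {
        to = 9102
      }
    }

    service {
      name = "bar"
      port = "9090"

      meta {
        metrics_port = "${NOMAD_HOST_PORT_metrics}"
      }

      connect {
        sidecar_service {
          proxy {
            config {
              envoy_prometheus_bind_addr = "0.0.0.0:9102"
            }
          }
        }
      }
    }

    task "web" {
      driver = "docker"

      config {
        # Placeholder application image -- substitute your own workload.
        image = "hashicorp/http-echo:latest"
        args  = ["-listen", ":9090", "-text", "bar"]
      }
    }
  }
}
```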
Submit the job to Nomad.
Create the ingress controller job
The ingress controller is a system job, so it deploys on all client nodes.
Create a file with the name `ingress-controller.nomad.hcl`, add the following contents to it, and save the file.
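A minimal sketch of the ingress gateway job; the TCP listener protocol and the `foo` listener on port `8080` are assumptions, while ports `8081` and `3000` match the Prometheus and Grafana access ports described earlier.

```hcl
job "ingress" {
  datacenters = ["dc1"]

  # A system job runs on every client node.
  type = "system"

  group "ingress-group" {
    network {
      mode = "bridge"

      # Static ports so the gateway is reachable on every client node.
      port "web" {
        static = 8080
        to     = 8080
      }
      port "prometheus" {
        static = 8081
        to     = 8081
      }
      port "grafana" {
        static = 3000
        to     = 3000
      }
    }

    service {
      name = "ingress-gateway"
      port = "8080"

      connect {
        gateway {
          ingress {
            # One TCP listener per exposed mesh service.
            listener {
              port     = 8080
              protocol = "tcp"
              service {
                name = "foo"
              }
            }
            listener {
              port     = 8081
              protocol = "tcp"
              service {
                name = "prometheus"
              }
            }
            listener {
              port     = 3000
              protocol = "tcp"
              service {
                name = "grafana"
              }
            }
          }
        }
      }
    }
  }
}
```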
Submit the job to Nomad.
Create the Prometheus job
The Prometheus job uses the `template` stanza to create the Prometheus configuration file. The `consul_sd_config` section uses the `attr.unique.network.ip-address` node attribute, which allows Prometheus to query Consul and scrape targets automatically as they are deployed. This works in this example because the Consul client runs on the same virtual machine as Nomad.
The `relabel_configs` section replaces the default application port with the dynamic Envoy port so that Prometheus scrapes metrics from the sidecar proxy.
The `volumes` attribute of the Nomad `task` block takes the configuration file that the `template` stanza dynamically creates and places it in the Prometheus container.
Create a file with the name `prometheus.nomad.hcl`, add the following contents to it, and save the file.
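A minimal sketch tying the pieces above together; the scrape job name, relabeling rules, and image tag are assumptions built around the `metrics_port` service metadata used in the earlier sketches.

```hcl
job "prometheus" {
  datacenters = ["dc1"]

  group "prometheus" {
    network {
      mode = "bridge"
    }

    service {
      name = "prometheus"
      port = "9090"

      connect {
        sidecar_service {}
      }
    }

    task "prometheus" {
      driver = "docker"

      config {
        image = "prom/prometheus:latest"

        # Mount the rendered configuration into the container.
        volumes = [
          "local/prometheus.yml:/etc/prometheus/prometheus.yml",
        ]
      }

      template {
        destination = "local/prometheus.yml"
        data        = <<EOH
global:
  scrape_interval: 10s

scrape_configs:
  - job_name: "envoy-sidecars"
    # The local Consul agent runs on the same machine as the Nomad client.
    consul_sd_configs:
      - server: "{{ env "attr.unique.network.ip-address" }}:8500"
    relabel_configs:
      # Keep only services that publish a metrics port in their metadata.
      - source_labels: [__meta_consul_service_metadata_metrics_port]
        regex: (.+)
        action: keep
      # Replace the scrape address with <node_ip>:<dynamic_envoy_port>.
      - source_labels: [__meta_consul_address, __meta_consul_service_metadata_metrics_port]
        separator: ":"
        target_label: __address__
EOH
      }
    }
  }
}
```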
Submit the job to Nomad.
Create the Grafana job
Create a file with the name `grafana.nomad.hcl`, add the following contents to it, and save the file.
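A minimal sketch; the upstream to Prometheus is an assumption that lets Grafana reach Prometheus over the mesh.

```hcl
job "grafana" {
  datacenters = ["dc1"]

  group "grafana" {
    network {
      mode = "bridge"
    }

    service {
      name = "grafana"
      port = "3000"

      connect {
        sidecar_service {
          proxy {
            # Bind the Prometheus service locally so Grafana can
            # query it through the mesh.
            upstreams {
              destination_name = "prometheus"
              local_bind_port  = 9090
            }
          }
        }
      }
    }

    task "grafana" {
      driver = "docker"

      config {
        image = "grafana/grafana:latest"
      }
    }
  }
}
```

With this upstream, you would point Grafana's Prometheus data source at `http://localhost:9090` from inside the Grafana task.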
Submit the job to Nomad.
Access and configure Grafana
Grafana is available via the ingress gateway on port `3000`. Use the `nomad service info` command to get the IP address of the client running Grafana.
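For example (note that `nomad service info` reads Nomad's native service registrations; if your Grafana job registers its service in Consul, as in the sketch above, use `nomad job status grafana` followed by `nomad alloc status` to find the client's address instead):

```shell-session
$ nomad service info grafana
```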
The default username and password for Grafana are both `admin`. Grafana requires a password change on initial login. Choose and set a new password for the `admin` user and make a note of it.
Deploy an Envoy dashboard
An Envoy clusters dashboard is available from the Grafana dashboard marketplace.
Navigate to the dashboards page, click on the New button, then click on Import.
Enter `11021` in the field with the placeholder text Grafana.com dashboard URL or ID, click Load, then click Import to finish the process.
The dashboard displays aggregated Envoy health information and traffic flows.
Simulate traffic
Simulate traffic to the cluster by making requests to either of the client nodes on port `8080`.
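For example, a simple loop from your workstation, where `<client_node_ip>` is a placeholder for one of your client node addresses:

```shell-session
$ while true; do curl -s "http://<client_node_ip>:8080/" > /dev/null; sleep 1; done
```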
Open the dashboard in Grafana to see requests, connections, and traffic volume on the time series panels.
Next steps
In this tutorial, you deployed Grafana and Prometheus within the Consul service mesh, set up intentions, configured an ingress to enable access, and configured Consul service discovery to allow automatic scraping of targets in Prometheus.
For more information, check out the additional resources below.