Scraping Istio metrics with the Prometheus Operator (e.g. using ServiceMonitor)

I am deploying Prometheus/Alertmanager/Grafana to my cluster using the latest kube-prometheus-stack Helm chart (formerly the prometheus-operator Helm chart).

This chart installs the Prometheus Operator CRDs like ServiceMonitor and PodMonitor, which are really nice for exposing metrics to Prometheus. I’ve got a ServiceMonitor defined to pull in my custom workload metrics, and the chart already includes monitors for general Kubernetes cluster metrics, but I’m not sure how to proceed with Istio.

I’m curious about both the architecture of Istio metrics collection and the practical configuration. With the current Istio architecture, am I right to guess that all metrics for the mesh are exposed by Istiod? Or does Prometheus need to pull metrics directly from the Envoy sidecar containers in the mesh somehow?

Assuming all metrics are collected and exposed by Istiod, I imagine I can create a ServiceMonitor that targets the Istiod service in the istio-system namespace and scrape that. If I run kubectl get endpoints -n istio-system istiod-1-9-5 -o yaml, I see a port named http-monitoring. Is that maybe a metrics endpoint?
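
I could presumably sanity-check that with a port-forward and curl, something like the following (guessing that http-monitoring maps to Istiod’s default monitoring port 15014 and that the metrics live at /metrics):

kubectl -n istio-system port-forward svc/istiod-1-9-5 15014:15014
curl -s http://localhost:15014/metrics | head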

I guess I’m wondering also if anyone out there has already tackled this problem. Is there any configuration available to pull Istio metrics into the Prometheus Operator, using a ServiceMonitor or PodMonitor? Thanks.


I don’t know if this is optimal, but I currently have some configuration that appears to be working. I define a ServiceMonitor for Istiod and one for the ingress gateway, since both have Services, and a PodMonitor to scrape metrics from the injected sidecar containers in my other pods.

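# Scrape Envoy's Prometheus endpoint on every injected sidecar.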
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: istio-sidecars
spec:
  selector:
    matchLabels:
      security.istio.io/tlsMode: 'istio'
  podMetricsEndpoints:
    - port: http-envoy-prom
      path: /stats/prometheus
---
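# Scrape the Envoy Prometheus endpoint on the ingress gateway pods,
# selected via the labels on the istio-ingressgateway Service.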
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istio-ingressgateway
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  namespaceSelector:
    matchNames:
      - istio-system
  endpoints:
    - targetPort: http-envoy-prom
      path: /stats/prometheus
---
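# Scrape Istiod's own control-plane metrics on its http-monitoring port.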
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istiod
spec:
  selector:
    matchLabels:
      istio: pilot
  namespaceSelector:
    matchNames:
      - istio-system
  endpoints:
    - port: http-monitoring
      interval: 15s

Hope this helps someone else. And very open to feedback if there’s a better way to pull these metrics.


Thanks for sharing the useful configuration. One addition: the ServiceMonitor resources have to be labelled so that Prometheus recognizes them automatically. In my case I had to add the following label under metadata:

labels:
  release: prometheus-stack
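
In context, the metadata on (for example) the istiod ServiceMonitor ends up looking something like this (the label value is whatever your Helm release is called):

metadata:
  name: istiod
  labels:
    # must match the label your Prometheus instance's serviceMonitorSelector expects
    release: prometheus-stack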

I was also wondering how to make Istio work with Prometheus Operator and found this:


I use the PodMonitor and ServiceMonitor resources I shared above exactly as-is with the Prometheus Operator, which I deploy via Helm with the kube-prometheus-stack chart: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack

Whether you need any other labels or selectors really depends on how you have the Prometheus Operator configured. I have mine set up so that it picks up all PodMonitors and ServiceMonitors in any namespace in the cluster.
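
For reference, the way I get that behaviour is by turning off the release-label matching in the chart values; something like this (assuming the current kube-prometheus-stack values layout):

prometheus:
  prometheusSpec:
    # Discover all ServiceMonitors/PodMonitors in the cluster, not just
    # the ones carrying this Helm release's label.
    serviceMonitorSelectorNilUsesHelmValues: false
    podMonitorSelectorNilUsesHelmValues: false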

Hello Ghazgkull,
I deployed it as you described: I saved the file alongside my other ServiceMonitors under kube-prometheus-stack > templates > istio (a new directory) > servicemonitor.yaml (your file), but it didn’t work. In Prometheus Service Discovery I can see some of the labels mentioned, but it still doesn’t work. Do you have any tips?

Checking Istio, it does have the service and the http-monitoring endpoint as mentioned, but when I create a port-forward to it, it returns a 404. What could be wrong?
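
For reference, I was testing it roughly like this, and the 404 comes back from the root path; maybe the metrics are only on a sub-path such as /metrics?

kubectl -n istio-system port-forward svc/istiod 15014:15014
curl -i http://localhost:15014/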

Just to clarify for others stumbling on this post: if you want your PodMonitor to discover all sidecars across all namespaces, you should add:

spec:
  namespaceSelector:
    any: true
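
Merged into the PodMonitor from earlier in the thread, that looks roughly like:

apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: istio-sidecars
spec:
  namespaceSelector:
    any: true  # look for matching pods in every namespace
  selector:
    matchLabels:
      security.istio.io/tlsMode: 'istio'
  podMetricsEndpoints:
    - port: http-envoy-prom
      path: /stats/prometheus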