Configuring Istio's Prometheus to scrape metrics from a service-specific metrics endpoint

How can the Prometheus instance deployed with Istio be configured to scrape application-specific metrics from a Service? The Service exposes its metrics at the /metrics endpoint in Prometheus format. I have looked through the documentation and existing threads but did not find an appropriate solution.

Any help is appreciated.

Thanks
Animesh

Have you seen https://preliminary.istio.io/help/faq/metrics-and-logs/#prometheus-application-metrics by any chance?

Thanks. Let me try that; I had not seen it.

Douglas, I am running Istio 1.0.5, and the config that you mentioned seems to be in the source tree for 1.0.5, so I updated the deployment definition to include the annotations for Prometheus, like this:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "somename"
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: /metrics
    prometheus.io/port: "5000"
spec:
  selector:
  # rest of config

Pods are exporting metrics at port 5000 and /metrics. It is mostly Kafka and JVM metrics, like:

# TYPE kafka_read_latency histogram

kafka_read_latency_bucket{le="0.005",} 269168.0
kafka_read_latency_bucket{le="0.01",} 269957.0
kafka_read_latency_bucket{le="0.025",} 272408.0
kafka_read_latency_bucket{le="0.05",} 276268.0
kafka_read_latency_bucket{le="0.075",} 278317.0
kafka_read_latency_bucket{le="0.1",} 279768.0
kafka_read_latency_bucket{le="0.25",} 287774.0
kafka_read_latency_bucket{le="0.5",} 306292.0
kafka_read_latency_bucket{le="0.75",} 308063.0
kafka_read_latency_bucket{le="1.0",} 310958.0
kafka_read_latency_bucket{le="2.5",} 430591.0
kafka_read_latency_bucket{le="5.0",} 430591.0
kafka_read_latency_bucket{le="7.5",} 430591.0
kafka_read_latency_bucket{le="10.0",} 430591.0
kafka_read_latency_bucket{le="+Inf",} 430591.0
kafka_read_latency_count 430591.0

But I don't see Prometheus picking up the custom metrics that these pods are exposing when I go to the Prometheus UI. Can you share some pointers on how to debug this?

Thanks
Animesh


I suspect that your annotations should be set on the pods instead of the StatefulSet. In other words, they need to go in the spec.template.metadata section.
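Something like this, moving the annotations into the pod template of the StatefulSet you posted earlier (an untested sketch; the selector, serviceName, and label values are placeholders, not from your original post):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: "somename"
spec:
  selector:
    matchLabels:
      app: somename          # placeholder label
  serviceName: somename      # placeholder headless Service name
  template:
    metadata:
      labels:
        app: somename        # placeholder label, must match the selector
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "5000"
    spec:
      containers:
      # ... rest of config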

Thanks, looks like that works. I can see metrics getting picked up in Prometheus.


That worked for the application where the pod had only one port, which was set up for metrics.

However, for another application that has two ports (one for gRPC and another for metrics), the Prometheus deployed with Istio is not scraping metrics.

apiVersion: v1
kind: Service
metadata:
  name: fei-service
  labels:
    podlabel: fei_pod
    app: fei-service
spec:
  ports:
  - port: 80
    name: grpc-fei
    targetPort: 9999
    protocol: TCP
  - port: 8888
    name: http-fei
    protocol: TCP
  selector:
    podlabel: fei_pod
    app: fei-service
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fei-service
spec:
  replicas: 1
  template:
    metadata:
      labels:
        podlabel: fei_pod
        app: fei-service
        version: v1
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: /metrics
        prometheus.io/port: "8888"
    spec:
      containers:
      - name: fei-service
        image: $image_path
        imagePullPolicy: Always
        resources:
          limits:
            memory: 2Gi
          requests:
            memory: 1Gi
        ports:
        - containerPort: 9999
          name: grpc-server
        - containerPort: 8888
          name: http-metrics

When logged into the pod, the metrics endpoint http://127.0.0.1:8888/metrics shows metrics.

IIUC, you have two metrics endpoints being served by the same pod. Is that correct?

If so, then yes, the current configuration only supports scraping a single metrics port via annotations. I believe this is a limitation of the annotation scheme with prom SD.

We could alter the base config to scrape multiple ports, perhaps. Maybe using a naming scheme for the metrics port (always end in *-metrics or whatever). But, based on your config above, I think that would involve renaming grpc-server to grpc-metrics. I don’t know if that is ideal.
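As a rough sketch of that idea (not something that ships in the Istio config today; the job name is hypothetical), a relabel rule keyed on container port names could look something like this:

- job_name: kubernetes-pods-named-metrics-ports   # hypothetical job name
  kubernetes_sd_configs:
  - role: pod              # the pod role yields one candidate target per declared container port
  relabel_configs:
  # keep only targets whose declared container port name ends in "-metrics"
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: .*-metrics

That is why the renaming would matter: the port name itself would become the opt-in signal.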

No, there is only one metrics endpoint for the pod, at port 8888. The other port, 9999, is the port on which the pod accepts gRPC requests, unrelated to metrics.

That should not impact Prometheus' ability to scrape based on that annotation, then. What does the /targets page look like on your Prometheus instance? Are you using the default Istio Prometheus config?
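For reference, the annotation-driven job in the default Istio Prometheus config looks roughly like this (paraphrased from memory, so check the prometheus ConfigMap in your istio-system namespace for the exact rules):

- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # only scrape pods annotated with prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # honor a custom path from prometheus.io/path
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    action: replace
    target_label: __metrics_path__
    regex: (.+)
  # rewrite the scrape address to use the port from prometheus.io/port
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__

A pod that matches the keep rule but fails to scrape should still show up on /targets with an error message, which is usually the quickest way to see what Prometheus thinks is wrong.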

Finally got around to reporting back: it turned out that the Prometheus simpleclient we were using in our application, version 0.5.0, was the issue. Istio/Prometheus was reporting 503. After changing the simpleclient version to 0.6, it started working.

I think that version of simpleclient is probably returning something Istio/Envoy does not like. The logs were just showing 503 without much detail, even at trace level.

Hi chaturvedia,
I am facing a similar problem where Prometheus is not scraping prometheus-mongodb-exporter metrics. The pod definition has the following metadata:

prometheus.io/port: "9216"
prometheus.io/scrape: "true"
prometheus.io/path: /metrics

I could not find any logs in either the Prometheus container or Envoy. Any help is much appreciated.