Dear experts
I am desperately trying to figure out how to make Prometheus scrape my Istio components so I can get Kiali up and running, but I can't really make sense of the information out there.
These are the components I wish to scrape:
- Envoy sidecars
- Egress gateways
- Ingress gateways
Now, this is working in my lab when I use the Helm charts from Rancher, but I just can't get it to work at my workplace. I also can't for the life of me figure out why it works in my lab. The main reason for this post is to understand what is going on (and to get it working in the long run).
Below are the Monitors included by Rancher that I have tried to get to work in my work cluster:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: istio-component-monitor
  namespace: istio-system
  labels:
    monitoring: istio-components
    release: istio
spec:
  jobLabel: istio
  targetLabels: [app]
  selector:
    matchExpressions:
      - {key: istio, operator: In, values: [pilot]}
  namespaceSelector:
    any: true
  endpoints:
    - port: http-monitoring
      interval: 15s
If I understand this correctly, it is supposed to select Services (in any namespace) that have a label with the key istio and the value pilot, then look for a port named "http-monitoring" on those Services and scrape it every 15 seconds using the default path /metrics.
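For reference, the Service I believe it should match is the istiod Service. This is a trimmed sketch of what that looks like in my lab (labels, port numbers and the rest of the spec may differ between Istio versions and installs):

apiVersion: v1
kind: Service
metadata:
  name: istiod
  namespace: istio-system
  labels:
    istio: pilot               # matched by the ServiceMonitor's matchExpressions
spec:
  selector:
    app: istiod
  ports:
    - name: http-monitoring    # the named port the ServiceMonitor scrapes
      port: 15014
      protocol: TCP
      targetPort: 15014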
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: envoy-stats-monitor
  namespace: istio-system
  labels:
    monitoring: istio-proxies
    release: istio
spec:
  selector:
    matchExpressions:
      - {key: istio-prometheus-ignore, operator: DoesNotExist}
  namespaceSelector:
    any: true
  jobLabel: envoy-stats
  podMetricsEndpoints:
    - path: /stats/prometheus
      interval: 15s
      relabelings:
        - action: keep
          sourceLabels: [__meta_kubernetes_pod_container_name]
          regex: "istio-proxy"
        - action: keep
          sourceLabels: [__meta_kubernetes_pod_annotationpresent_prometheus_io_scrape]
        - sourceLabels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          targetLabel: __address__
        - action: labeldrop
          regex: "__meta_kubernetes_pod_label_(.+)"
        - sourceLabels: [__meta_kubernetes_namespace]
          action: replace
          targetLabel: namespace
        - sourceLabels: [__meta_kubernetes_pod_name]
          action: replace
          targetLabel: pod_name
Moving on to the PodMonitor. If I understand this one, it matches all pods (in any namespace) that do not have a label with the key istio-prometheus-ignore. It then scrapes /stats/prometheus every 15 seconds and does some magic with the relabelings (which I don't fully understand yet either, but let's keep that out of scope unless it is relevant).
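For completeness, my understanding (possibly wrong) is that these relabelings lean on the prometheus.io/* annotations that the sidecar injector adds to every injected pod. The metadata of an injected pod in my lab looks roughly like this (pod name is made up, everything not relevant to the relabelings is trimmed):

apiVersion: v1
kind: Pod
metadata:
  name: example-app-7d4b9c-xyz              # hypothetical pod name
  namespace: default
  annotations:
    prometheus.io/scrape: "true"             # keep rule: annotationpresent_prometheus_io_scrape
    prometheus.io/port: "15020"              # used to rewrite the port in __address__
    prometheus.io/path: "/stats/prometheus"  # not used here; the PodMonitor hardcodes the path
spec:
  containers:
    - name: istio-proxy                      # keep rule: container name must be "istio-proxy"
      # (image and the rest of the injected sidecar spec trimmed)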
The questions
- Do I correctly understand the ServiceMonitor and PodMonitor above?
- Is there anything else that can make Prometheus ignore the Monitors? (See the sketch after this list for my current suspicion.)
- I have seen multiple examples of Prometheus configurations online:
  This one uses jobs (meant for external Prometheus instances?)
  This one uses ServiceMonitors, but it has strange selectors and does not cover Envoy:
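Regarding the second question, my current (unverified) suspicion is the selector fields on the Prometheus custom resource itself, since the operator only picks up ServiceMonitors/PodMonitors whose labels and namespaces match those selectors. Something like the following on the Prometheus resource would silently ignore the Istio monitors above, because they are labeled release: istio (the resource name and namespace here are just placeholders for whatever the Rancher chart creates):

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: rancher-monitoring-prometheus     # placeholder name
  namespace: cattle-monitoring-system     # placeholder namespace
spec:
  # Only monitors carrying this label are selected; the Istio monitors
  # above are labeled release: istio and would therefore be ignored.
  serviceMonitorSelector:
    matchLabels:
      release: rancher-monitoring
  podMonitorSelector:
    matchLabels:
      release: rancher-monitoring
  # Empty selectors ({}) mean "all namespaces"; a restrictive value here
  # would also hide monitors living in istio-system.
  serviceMonitorNamespaceSelector: {}
  podMonitorNamespaceSelector: {}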
Grateful for any input/links/info.
Kind regards,
Patrik