Istio 1.7.4 with another Prometheus in the monitoring namespace

Hi, I was trying Istio + Kiali on my k8s cluster hosted on VMware ESXi (Tanzu).

I have Prometheus and Grafana deployed with Helm in the monitoring namespace:

helm upgrade --install prometheus prometheus-community/kube-prometheus-stack --values values.yaml -n monitoring

k get svc
NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   41s
prometheus-grafana                        ClusterIP   100.69.72.155    <none>        80/TCP                       49s
prometheus-kube-prometheus-alertmanager   ClusterIP   100.69.45.217    <none>        9093/TCP                     49s
prometheus-kube-prometheus-operator       ClusterIP   100.71.149.147   <none>        443/TCP                      49s
prometheus-kube-prometheus-prometheus     ClusterIP   100.70.59.9      <none>        9090/TCP                     49s
prometheus-kube-state-metrics             ClusterIP   100.67.113.53    <none>        8080/TCP                     49s
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     40s
prometheus-prometheus-node-exporter       ClusterIP   100.69.218.159   <none>        9100/TCP                     49s

My Istio was deployed with the operator:

kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: default
EOF
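
(For anyone reproducing this: the IstioOperator CR only gets reconciled once the operator controller itself is running; with istioctl 1.7.x that is a one-liner.)

# Deploy the operator controller before applying the CR above (istioctl 1.7.x)
istioctl operator init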

kubectl get pods -n istio-system
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-b94df4549-7n48x   1/1     Running   0          7m13s
istiod-85fbd94bcc-qqhcc                1/1     Running   0          7m39s

I have installed Kiali:

helm install \
  --namespace istio-system \
  --set auth.strategy="anonymous" \
  --repo https://kiali.org/helm-charts \
  kiali-server \
  kiali-server
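
By default Kiali looks for Prometheus at http://prometheus.istio-system:9090, so it cannot see the stack in the monitoring namespace on its own. If I read the chart right (treat this as a sketch; the key is external_services.prometheus.url in Kiali's configuration), the install can be pointed at the external Prometheus service from the listing above:

helm install \
  --namespace istio-system \
  --set auth.strategy="anonymous" \
  --set external_services.prometheus.url="http://prometheus-kube-prometheus-prometheus.monitoring:9090" \
  --repo https://kiali.org/helm-charts \
  kiali-server \
  kiali-server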

kubectl get pods -n istio-system
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-b94df4549-7n48x   1/1     Running   0          15m
istiod-85fbd94bcc-qqhcc                1/1     Running   0          15m
kiali-5b7d698868-cf8rs                 1/1     Running   0          39s

At this point Kiali and Istio were running, but they had no link to my Prometheus.
I tried this tutorial to wire Istio up to my own Prometheus:

https://www.istiobyexample.dev/prometheus
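
That tutorial also defines an envoy-stats job for the sidecar metrics, which I left out below; based on Istio's bundled sample Prometheus config it looks roughly like this (unverified on my 1.7.4 setup):

- job_name: envoy-stats
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Keep only ports named *-envoy-prom, i.e. the sidecar metrics port
    - source_labels: [__meta_kubernetes_pod_container_port_name]
      action: keep
      regex: '.*-envoy-prom'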

After updating my prometheus-stack by adding scrape jobs to my values.yaml:

additionalScrapeConfigs:
  - job_name: traefik
    static_configs:
      - targets:
          - 100.69.235.4:9000
  - job_name: prometheus
    static_configs:
      - targets:
          - localhost:9090
  - job_name: pilot
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
            - istio-system
    relabel_configs:
      - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: istio-pilot;http-monitoring
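
(Update while editing: in Istio 1.7 the control-plane Service is named istiod rather than istio-pilot (see the istiod pod above), so this keep regex probably never matches, which would explain the target never coming up. A hedged correction of the pilot job:)

- job_name: pilot
  kubernetes_sd_configs:
    - role: endpoints
      namespaces:
        names:
          - istio-system
  relabel_configs:
    # Istio 1.7 exposes control-plane metrics on the istiod Service, port http-monitoring (15014)
    - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
      action: keep
      regex: istiod;http-monitoring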

At this point the scrape job was added, but its target was not up:

After adding volumes to my values.yaml:

storageSpec:
  volumeClaimTemplate:
    spec:
      storageClassName: standard
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 50Gi

volumes:
  - name: config-volume
    configMap:
      name: prometheus
  - name: istio-certs
    secret:
      defaultMode: 420
      optional: true
      secretName: istio.default

volumeMounts:
  - name: config-volume
    mountPath: /etc/prometheus
  - mountPath: /etc/istio-certs
    name: istio-certs
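
For kube-prometheus-stack these fields have to sit under prometheus.prometheusSpec, and mounting a ConfigMap over /etc/prometheus fights the operator, which generates that directory itself. A sketch of how I believe the values.yaml should be nested (assumption: only the cert mount is actually needed, since additionalScrapeConfigs already covers the scrape config):

prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: standard
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
    # Mount only the Istio certs; /etc/prometheus is managed by the operator
    volumes:
      - name: istio-certs
        secret:
          defaultMode: 420
          optional: true
          secretName: istio.default
    volumeMounts:
      - name: istio-certs
        mountPath: /etc/istio-certs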

After upgrading the chart, the Prometheus pod is still Pending. The PVC is bound and all the volumes of the StatefulSet are created, but the pod does not start and has no logs.
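
In case it helps, this is how I have been inspecting the stuck pod (the pod name is what the operator generated on my cluster):

# A Pending pod has no container logs yet; the scheduling failure shows up in events
kubectl describe pod -n monitoring prometheus-prometheus-kube-prometheus-prometheus-0
kubectl get events -n monitoring --sort-by=.lastTimestamp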

Here is the monitoring namespace overview:

and the StatefulSet overview:

All help is welcome.
I need to get my Prometheus running with the volumes so it can share TLS certs with Istio,
and I need to know how to upgrade my Istio through the operator to use my Prometheus values.
I am new to k8s and self-taught.
Regards.

I have some news, but some scrape jobs are still misconfigured.

Did you solve this issue? I'm facing the same inability to scrape the istiod metrics.