Istio 1.1.0: Adding metrics blocks internal traffic?

Hi!

I’m running a GKE cluster (version 1.11.7-gke.12) on Google Cloud. Istio is installed using the official Helm charts as follows:

helm template install/kubernetes/helm/istio-init \
   --name istio-init --namespace istio-system \
   --set tracing.enabled=true \
   --set servicegraph.enabled=false \
   --set grafana.enabled=true \
   --set global.mtls.enabled=false \
   --set global.k8sIngressHttps=true \
   --set sidecarInjectorWebhook.enabled=true \
   --set certmanager.enabled=true \
   --set certmanager.tag=v0.6.2 \
   --set kiali.enabled=true \
   --set kiali.dashboard.grafanaURL=http://grafana.istio-system:3000 \
> istio-init.yaml

kubectl apply -f istio-init.yaml
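(For reference, istio-init only installs CRDs. One way to confirm they are all registered before applying the main chart, assuming the Istio 1.1 and cert-manager CRD API groups:

kubectl get crds | grep 'istio.io\|certmanager.k8s.io' | wc -l

The expected count depends on the Istio version and enabled options.)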

helm template install/kubernetes/helm/istio \
   --name istio --namespace istio-system \
   --set tracing.enabled=true \
   --set servicegraph.enabled=false \
   --set grafana.enabled=true \
   --set global.mtls.enabled=false \
   --set global.k8sIngressHttps=true \
   --set sidecarInjectorWebhook.enabled=true \
   --set certmanager.enabled=true \
   --set certmanager.tag=v0.6.2 \
   --set kiali.enabled=true \
   --set kiali.dashboard.grafanaURL=http://grafana.istio-system:3000 \
> istio.yaml

kubectl apply -f istio.yaml
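(After this step, one way to confirm the control plane came up cleanly is to check that every pod in istio-system reaches Running or Completed:

kubectl get pods -n istio-system)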

With this setup, all communication worked fine, from outside in as well as internally.
The problem was that the mesh was not collecting any metrics. To fix this, I tried another set of installation options to generate a manifest that contains the required resources to collect traffic metrics.
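(For reference, one way to check whether telemetry is flowing, assuming the bundled Prometheus is deployed in istio-system on its default port, is to port-forward it and query the standard istio_requests_total metric:

kubectl -n istio-system port-forward svc/prometheus 9090:9090

Then open http://localhost:9090 and query istio_requests_total; an empty result means no request metrics are being reported.)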

helm template install/kubernetes/helm/istio \
   --name istio --namespace istio-system \
   --set global.mtls.enabled=false \
   --set global.k8sIngressHttps=true \
   --set sidecarInjectorWebhook.enabled=true \
   --set certmanager.enabled=true \
   --set certmanager.tag=v0.6.2 \
   --set kiali.enabled=true \
   --set mixer.telemetry.enabled=true \
   --set kiali.dashboard.grafanaURL=http://grafana.istio-system:3000 \
   --set kiali.prometheusAddr=http://prometheus.istio-system:9090 \
> test.yaml

kubectl apply -f test.yaml

This combination worked nicely until I deployed a new service and found that some proxies reported RDS STALE (Never Acknowledged). From that moment on, all internal communication resulted in 404 Not Found.
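(For context, the per-proxy xDS sync state can be listed with istioctl, assuming the istioctl binary matches the control-plane version; proxies stuck at STALE for RDS never received or acknowledged their route configuration from Pilot:

istioctl proxy-status)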

I had a similar problem earlier with versions 1.0.5/1.0.6 as well, which makes me wonder whether there is something wrong with the combination of installation options I am choosing.

Does anyone know what I am doing wrong?

The default value of mixer.telemetry.enabled is true, so setting it explicitly in the Helm options should not have had any impact. You can verify this with a diff between istio.yaml and test.yaml.
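For example, a plain diff of the two generated manifests should show no functional differences:

diff istio.yaml test.yaml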

Something else is impacting your networking when you re-apply the manifests. Have you tried looking at the Pilot and Galley logs?
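For example (the label selectors and the discovery container name here assume the default labels from the 1.1 Helm charts; adjust if yours differ):

kubectl -n istio-system logs -l app=pilot -c discovery --tail=100
kubectl -n istio-system logs -l istio=galley --tail=100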