We have the same problem with 1.3.3. In our case we pinned the problem down to file upload and download. I am now building a simple service that does just that so we can reproduce it reliably, and I will report back with the results.
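Roughly, the reproduction service I have in mind looks like the sketch below: one endpoint that accepts an uploaded body and one that streams a payload back through the sidecar. The paths, port, and payload size here are placeholders, not the final test setup.

```go
package main

import (
	"crypto/rand"
	"io"
	"log"
	"net/http"
)

func main() {
	// "Upload": read and discard the request body so the proxy has to move the bytes.
	http.HandleFunc("/upload", func(w http.ResponseWriter, r *http.Request) {
		n, err := io.Copy(io.Discard, r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		log.Printf("received %d bytes", n)
		w.WriteHeader(http.StatusOK)
	})

	// "Download": stream an arbitrary payload (64 MiB of random data here) to the client.
	http.HandleFunc("/download", func(w http.ResponseWriter, r *http.Request) {
		const size = 64 << 20
		w.Header().Set("Content-Type", "application/octet-stream")
		if _, err := io.CopyN(w, rand.Reader, size); err != nil {
			log.Printf("download aborted: %v", err)
		}
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```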
We are also having the same issue. I was working with @Francois last night to try to provide the outputs from dump_kubernetes.sh etc.
We are using 1.3.3 in production.
If we can provide any more info, we are willing to try and help… We have been running 1.3.0 on our staging environment for 2-3 weeks and didn't notice any issues, and we also upgraded that cluster to 1.3.3 two days ago and everything is still fine there.
In our case, we are running 3 clusters on Istio 1.3.3. On two of them (which are very similar: flannel, kube-dns, Kubernetes 1.11.4) we are hitting this issue, while on the 3rd (cilium, coredns, Kubernetes 1.12.7) there are no pods with high-CPU Envoys. All of them run workloads ranging from low to high ops and have similar application patterns.
To be sure I’m providing the right configs: which ones are you looking for? I will gladly share them. Note that the biggest impact is actually on istio-ingressgateway (which also runs proxyv2).
I had a sandbox cluster running 1.2.2 that I wanted to test to see if I could reproduce this…
With a single pod, the ingress-gateway was only using about 43m of CPU on the 1.2.2 image. After upgrading to 1.3.3, it hit the max replicas in the HPA within minutes, with all replicas sitting at around 800m+ CPU.
This cluster does not receive any inbound traffic at all, so why would it see such a big spike in utilization?