Istio gives 503 “no healthy upstream” error when pods get evicted

Istio gives a 503 “no healthy upstream” error when pods get evicted. Ideally, when a node runs out of resources, pods go down and come back up on other nodes. When this happens repeatedly, and a pod takes a while to come back up, Istio still returns a “no healthy upstream” error in the browser when the pod’s external URL is hit. Can anyone help me with this? Do I need to add an extra flag, or do I need to make an application-level change? One solution I found is that the VirtualService supports retry strategies with attempts and perTryTimeout. But I don’t feel this is the right approach for production; instead it should be something that can be configured in istio-proxy or the istio-ingressgateway to get rid of this.
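For reference, the retry settings mentioned above look roughly like this in a VirtualService manifest. This is only a sketch; the host, gateway, and destination names are placeholders, not values from the original post:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app                  # placeholder name
spec:
  hosts:
  - my-app.example.com          # placeholder external host
  gateways:
  - my-gateway                  # placeholder gateway
  http:
  - route:
    - destination:
        host: my-app.default.svc.cluster.local   # placeholder service
    retries:
      attempts: 3               # retry a failed request up to 3 times
      perTryTimeout: 2s         # timeout for each individual attempt
      retryOn: 5xx,connect-failure,refused-stream
```

The retryOn conditions tell Envoy to retry on 5xx responses and connection failures, which is what briefly masks the window while an evicted pod is rescheduled.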

Did you fix this issue?
I have exactly the same issue.

Hi Framled,

I didn’t get a fix from Istio, but I can share the workaround I used.

  1. When the issue occurs, recreate the VirtualService for the application. Recreating the VirtualService makes Istio drop its current state (the broken state in which calls are not forwarded from the istio-ingressgateway to the application) and establish a fresh connection.
  2. We also stabilised our application so it uses fewer resources, and the pods are no longer getting evicted.
    Please feel free to connect with me via LinkedIn
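For anyone looking for the concrete commands, step 1 above can be sketched roughly as follows. my-app-vs and the default namespace are placeholder names; substitute your own VirtualService and namespace:

```shell
# Save the current VirtualService definition (placeholder names).
kubectl get virtualservice my-app-vs -n default -o yaml > my-app-vs.yaml

# Delete it so Istio drops the stale routing state.
kubectl delete virtualservice my-app-vs -n default

# Recreate it from the saved manifest to force a fresh configuration push.
kubectl apply -f my-app-vs.yaml
```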


We face the same issue: after a new deployment the new pods come up and look healthy, but calls to them return a 503 error.
Restarting Pilot resolved the issue, but we still don’t know the root cause of this behavior.
We are running Istio 1.4.7.
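In case it helps others hitting this, the Pilot restart can be done roughly as follows. On 1.4.x the control-plane deployment is typically named istio-pilot (newer releases consolidate it into istiod), so check the actual name in your cluster first:

```shell
# Find the control-plane deployment name (istio-pilot on 1.4.x).
kubectl get deployments -n istio-system

# Restart it; Kubernetes rolls the pods and the Envoy sidecars
# reconnect to Pilot and receive fresh endpoint state.
kubectl rollout restart deployment istio-pilot -n istio-system
```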

Hi, I guess my scenario and yours are a little different. In my case I face this when pods get evicted due to lack of resources and then come back up automatically. At that point, even though the pods were running, Istio gives a 503 error. I solved this by recreating the VirtualService for the specific application.

Note: please feel free to connect with me via LinkedIn.

I am facing the same issue.
Did you find a fix?