Istio pods are in CrashLoopBackOff status


I have deployed Istio in a Kubernetes cluster running on EC2. When I check the status with
`kubectl get pods -n istio-system`, some of the pods show a CrashLoopBackOff status.
Sometimes it is istio-telemetry, sometimes other pods.

In the EC2 cluster, the master node is a medium instance (4 GB RAM, 2 CPUs) and the worker nodes each have 2 GB RAM and 1 CPU.

Please suggest what went wrong, or how I can get all the pods into Running status.


Run `kubectl describe` on the crashlooping pods and look at the Events section at the bottom. The answer usually lies there.
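For example, assuming one of the failing pods is named `istio-telemetry-xxxx` (a placeholder; substitute a real pod name from your cluster), the triage might look like:

```shell
# List pods in the istio-system namespace, including restart counts
kubectl get pods -n istio-system

# Describe a crashlooping pod; the Events section at the bottom often
# reveals the cause (OOMKilled, failed liveness/readiness probes,
# image pull errors, unschedulable due to insufficient resources, ...)
kubectl describe pod istio-telemetry-xxxx -n istio-system
```

With 2 GB / 1 CPU worker nodes, OOMKilled events or pods stuck Pending for lack of resources would be worth looking for in that output.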

Also check the logs from the pods. Pass `-p` to `kubectl logs` to get the logs of the previous container.
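Concretely (again using `istio-telemetry-xxxx` as a placeholder pod name):

```shell
# Logs from the current container instance
kubectl logs istio-telemetry-xxxx -n istio-system

# Logs from the previous, crashed container instance --
# useful because in CrashLoopBackOff the current container
# may not have run long enough to log anything
kubectl logs istio-telemetry-xxxx -n istio-system -p
```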