Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": Post "https://istiod.istio-system.svc:443/inject?timeout=10s": context deadline exceeded

I added the istio-injection=enabled label to a specific namespace, but the ReplicaSets in that namespace now report the error below.
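For reference, the label was applied like this (the namespace name here is a placeholder):

$ kubectl label namespace my-namespace istio-injection=enabled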

Warning  FailedCreate  12m (x20 over 53m)  replicaset-controller  Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": Post "https://istiod.istio-system.svc:443/inject?timeout=10s": context deadline exceeded

I think the problem is with the endpoint below.

$ kubectl get --raw /api/v1/namespaces/istio-system/services/https:istiod:https-webhook/proxy/inject -v4

I0803 20:26:06.413013   28163 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 10.0.8.144:15017: connect: connection timed out",
  "reason": "ServiceUnavailable",
  "code": 503
}]
Error from server (ServiceUnavailable): error trying to reach service: dial tcp 10.0.8.144:15017: connect: connection timed out
$ curl https://istiod.istio-system.svc:443/inject -k   # from another container
no body found

$ curl https://localhost:15017/inject -k   # from inside the istiod container
no body found
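A couple of other checks, assuming the default install names (the istiod Service and the istio-sidecar-injector webhook configuration):

$ kubectl -n istio-system get endpoints istiod
$ kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml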

How can I solve this problem? I am using EKS.


Having the same exact issue. Did you find a solution?


Also having this issue in GKE. Haven't seen it before, but it is now happening in our dev and staging clusters, both using preemptible machines.

The other symptom is that I don’t seem to be able to get logs from any pods in the cluster.

Seems to be fixed by

kubectl rollout restart deployment -n kube-system

I haven’t yet managed to find out the root cause though.
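If it helps anyone narrow it down: on GKE private clusters the API server reaches webhooks through the Konnectivity agent running in kube-system, so a targeted restart may be enough (this assumes your cluster runs it as a Deployment named konnectivity-agent):

kubectl -n kube-system rollout restart deployment konnectivity-agent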

Istio 1.14.2

For those who come here with GKE issues:

Have you done the firewall edits on GCP? I gave the rollout restart a shot and had no luck.

I had the error below

      message: Deployment does not have minimum availability.
    - type: ReplicaFailure
      status: 'True'
      lastUpdateTime: '2022-08-26T15:16:04Z'
      lastTransitionTime: '2022-08-26T15:16:04Z'
      reason: FailedCreate
      message: >-
        Internal error occurred: failed calling webhook
        "namespace.sidecar-injector.istio.io": failed to call webhook: Post
        "https://istiod.istio-system.svc:443/inject?timeout=10s": context
        deadline exceeded

And once I updated the firewall rules for the master, it worked!

Link to firewall documentation


I am also running a private preemptible cluster on GKE, version 1.24.3-gke.900.




Having the exact same issue on EKS v1.23 with Istio 1.16.0. Any fix/solution?
No luck with kubectl rollout restart deployment.

server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "error trying to reach service: dial tcp 10.4.78.203:15017: connect: connection timed out",
  "reason": "ServiceUnavailable",
  "code": 503
}]

Try checking whether communication between the nodes and the EKS control plane is open on port 15017. In my case I had to open this port for ingress in the node security group. The curls you tested prove that communication between nodes works; but in order to create the sidecar, the API server has to call istiod's injection webhook, and for that, network passage from the EKS control plane to the nodes is needed. A sketch of the change is below.
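A minimal sketch of opening that port with the AWS CLI; the security group IDs below are placeholders for your own node and cluster security groups:

# Placeholders: replace with your node security group and EKS cluster security group.
NODE_SG=sg-0123456789abcdef0
CLUSTER_SG=sg-0fedcba9876543210

# Allow the control plane to reach the istiod webhook port on the nodes.
aws ec2 authorize-security-group-ingress \
  --group-id "$NODE_SG" \
  --protocol tcp \
  --port 15017 \
  --source-group "$CLUSTER_SG"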


Check the PROXY settings for the k8s API server (sorry, I am checking on an on-premise cluster).
I removed the PROXY env variables from the API server YAML in /etc/kubernetes/manifests.

Kubernetes sets these proxy variables automatically if the system already has proxy settings configured.
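For anyone checking the same thing, the entries to look for in /etc/kubernetes/manifests/kube-apiserver.yaml look roughly like this (illustrative values only); either remove them or make sure the cluster CIDRs are covered by NO_PROXY:

spec:
  containers:
  - name: kube-apiserver
    env:
    - name: HTTPS_PROXY
      value: http://proxy.example.com:3128   # sends webhook calls through the proxy
    - name: NO_PROXY
      value: 10.96.0.0/12,10.244.0.0/16,.svc,.cluster.local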

Same! Thanks! Took some searching to find this one.

Hi,
I see the following error
- lastTransitionTime: '2023-03-10T10:15:06Z'
  message: >-
    Internal error occurred: failed calling webhook
    "namespace.sidecar-injector.istio.io": failed to call webhook: Post
    "https://istiod.istio-system.svc:443/inject?timeout=10s": context
    deadline exceeded

In the kube-apiserver.yaml there is no PROXY setting.

Having the same exact issue. Did you find a solution?

Related topic: Istio/GKE Private Cluster Firewall (Istio / Google Kubernetes Engine)

This solution worked for me

Create an env variable CLUSTER_NAME with your cluster name, or replace ${CLUSTER_NAME} with your cluster name directly in the following command:

gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master"

Take the firewall rule name returned by the previous command and substitute it for <firewall-rule-name> in the next command:

gcloud compute firewall-rules update <firewall-rule-name> --allow tcp:10250,tcp:443,tcp:15017
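For convenience, the two steps can be combined into one (a sketch that assumes exactly one rule matches the filter; CLUSTER_NAME is still yours to set):

RULE=$(gcloud compute firewall-rules list --filter="name~gke-${CLUSTER_NAME}-[0-9a-z]*-master" --format="value(name)")
gcloud compute firewall-rules update "$RULE" --allow tcp:10250,tcp:443,tcp:15017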