I want to run Istio 1.2.5 on Kubernetes 1.14.0 (Minikube on macOS), but the error “Readiness probe failed: HTTP probe failed with statuscode: 503” happens every time in my pod. I also tried Istio 1.1.14 and got the same error.
More information below:
$ minikube start --memory=8192 --cpus=4 --vm-driver=hyperkit --kubernetes-version=v1.14.0 --insecure-registry='0.0.0.0/0'
$ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
$ kubectl apply -f install/kubernetes/istio-demo.yaml
$ kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
grafana-6fb9f8c5c7-twnhj                  1/1     Running     0          14m
istio-citadel-5cf47dbf7c-jzcst            1/1     Running     0          14m
istio-cleanup-secrets-1.2.5-pwcsj         0/1     Completed   0          14m
istio-egressgateway-867485bc6f-ngkzl      1/1     Running     0          14m
istio-galley-7898b587db-gxn6r             1/1     Running     0          14m
istio-grafana-post-install-1.2.5-dzp5z    0/1     Completed   0          14m
istio-ingressgateway-6c79cd454c-clfq4     1/1     Running     0          14m
istio-pilot-76c567544f-h5r2p              2/2     Running     0          14m
istio-policy-6ccd5fbb7f-5c2kv             2/2     Running     5          14m
istio-security-post-install-1.2.5-bmnws   0/1     Completed   0          14m
istio-sidecar-injector-677bd5ccc5-w6h57   1/1     Running     0          14m
istio-telemetry-8449b7f8bd-brpz9          2/2     Running     5          14m
istio-tracing-5d8f57c8ff-n8279            1/1     Running     0          14m
kiali-7d749f9dcb-5r5mt                    1/1     Running     0          14m
prometheus-776fdf7479-dq6jq               1/1     Running     0          14m
$ kubectl label namespace bpe istio-injection=enabled --overwrite
$ kubectl create -f bpe-api/kubernates/Deployment-istio.yml
$ kubectl get pods --namespace=bpe
NAME                             READY   STATUS    RESTARTS   AGE
bpe-api-1.0.0-76bfc77c69-x4rn7   1/2     Running   0          25m
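For context, bpe-api/kubernates/Deployment-istio.yml boils down to roughly the following (a sketch reconstructed from the pod description below; the actual file may differ in details):
# Approximate reconstruction: apiVersion/replicas/selector are assumptions, the rest is taken from kubectl describe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bpe-api-1.0.0
  namespace: bpe
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bpe-api
      version: "1.0.0"
  template:
    metadata:
      labels:
        app: bpe-api
        version: "1.0.0"
      annotations:
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: bpe-api
        image: 192.168.1.103:5000/bpe-api:1.0.0
        ports:
        - containerPort: 8080
        - containerPort: 8778
        - containerPort: 9779
        env:
        - name: JAVA_OPTIONS
          value: "-Xms15m -Xmx15m -Xmn15m"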
$ kubectl describe pod bpe-api-1.0.0 --namespace=bpe
Name: bpe-api-1.0.0-76bfc77c69-x4rn7
Namespace: bpe
Priority: 0
Node: minikube/192.168.64.59
Start Time: Tue, 03 Sep 2019 23:42:31 -0300
Labels: app=bpe-api
pod-template-hash=76bfc77c69
version=1.0.0
Annotations: sidecar.istio.io/inject: true
sidecar.istio.io/status:
{"version":"761ebc5a63976754715f22fcf548f05270fb4b8db07324894aebdb31fa81d960","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Running
IP: 172.17.0.16
Controlled By: ReplicaSet/bpe-api-1.0.0-76bfc77c69
Init Containers:
istio-init:
Container ID: docker://5007da5efb534de54066204e1910d19dcb6c86dedf53c2a60fc74d720b949899
Image: docker.io/istio/proxy_init:1.2.5
Image ID: docker-pullable://istio/proxy_init@sha256:c9964a8c1c28b85cc631bbc90390eac238c90f82c8f929495d1e9f9a9135b724
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8080,8778,9779
-d
15020
State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 03 Sep 2019 23:42:53 -0300
Finished: Tue, 03 Sep 2019 23:42:54 -0300
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts: <none>
Containers:
bpe-api:
Container ID: docker://af81687d54ec739985d21d3800787c4f447440a6d04b573bf6dd89a32479de01
Image: 192.168.1.103:5000/bpe-api:1.0.0
Image ID: docker-pullable://192.168.1.103:5000/bpe-api@sha256:bb8b7f2b65ea7690fa13afa4d1c95bdde5690e6dc596f47d6dd781c5e8e0e60d
Ports: 8080/TCP, 8778/TCP, 9779/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Running
Started: Tue, 03 Sep 2019 23:43:03 -0300
Ready: True
Restart Count: 0
Environment:
JAVA_OPTIONS: -Xms15m -Xmx15m -Xmn15m
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bx895 (ro)
istio-proxy:
Container ID: docker://8cb175c730a246d2e8da475fbd523199b3be2924c3ef284d7c67c4e753105614
Image: docker.io/istio/proxyv2:1.2.5
Image ID: docker-pullable://istio/proxyv2@sha256:8f210c3d09beb6b8658a4255d9ac30e25549295834a44083ed67d652ad7453e4
Port: 15090/TCP
Host Port: 0/TCP
Args:
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
bpe-api.$(POD_NAMESPACE)
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15010
--zipkinAddress
zipkin.istio-system:9411
--dnsRefreshRate
300s
--connectTimeout
10s
--proxyAdminPort
15000
--concurrency
2
--controlPlaneAuthPolicy
NONE
--statusPort
15020
--applicationPorts
8080,8778,9779
State: Running
Started: Tue, 03 Sep 2019 23:43:04 -0300
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 10m
memory: 40Mi
Readiness: http-get http://:15020/healthz/ready delay=1s timeout=1s period=2s #success=1 #failure=30
Environment:
POD_NAME: bpe-api-1.0.0-76bfc77c69-x4rn7 (v1:metadata.name)
POD_NAMESPACE: bpe (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: bpe-api-1.0.0-76bfc77c69-x4rn7 (v1:metadata.name)
ISTIO_META_CONFIG_NAMESPACE: bpe (v1:metadata.namespace)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
ISTIO_META_INCLUDE_INBOUND_PORTS: 8080,8778,9779
ISTIO_METAJSON_ANNOTATIONS: {"sidecar.istio.io/inject":"true"}
ISTIO_METAJSON_LABELS: {"app":"bpe-api","pod-template-hash":"76bfc77c69","version":"1.0.0"}
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bx895 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-bx895:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-bx895
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 26m default-scheduler Successfully assigned bpe/bpe-api-1.0.0-76bfc77c69-x4rn7 to minikube
Normal Pulling 26m kubelet, minikube Pulling image "docker.io/istio/proxy_init:1.2.5"
Normal Pulled 25m kubelet, minikube Successfully pulled image "docker.io/istio/proxy_init:1.2.5"
Normal Created 25m kubelet, minikube Created container istio-init
Normal Started 25m kubelet, minikube Started container istio-init
Normal Pulling 25m kubelet, minikube Pulling image "192.168.1.103:5000/bpe-api:1.0.0"
Normal Pulled 25m kubelet, minikube Successfully pulled image "192.168.1.103:5000/bpe-api:1.0.0"
Normal Created 25m kubelet, minikube Created container bpe-api
Normal Started 25m kubelet, minikube Started container bpe-api
Normal Pulled 25m kubelet, minikube Container image "docker.io/istio/proxyv2:1.2.5" already present on machine
Normal Created 25m kubelet, minikube Created container istio-proxy
Normal Started 25m kubelet, minikube Started container istio-proxy
Warning Unhealthy 67s (x735 over 25m) kubelet, minikube Readiness probe failed: HTTP probe failed with statuscode: 503
$ kubectl logs bpe-api-1.0.0-76bfc77c69-x4rn7 -c istio-proxy --namespace=bpe
2019-09-04T02:43:04.304640Z info FLAG: --applicationPorts="[8080,8778,9779]"
2019-09-04T02:43:04.304737Z info FLAG: --binaryPath="/usr/local/bin/envoy"
2019-09-04T02:43:04.304756Z info FLAG: --concurrency="2"
2019-09-04T02:43:04.304771Z info FLAG: --configPath="/etc/istio/proxy"
2019-09-04T02:43:04.304784Z info FLAG: --connectTimeout="10s"
2019-09-04T02:43:04.304791Z info FLAG: --controlPlaneAuthPolicy="NONE"
2019-09-04T02:43:04.304800Z info FLAG: --controlPlaneBootstrap="true"
2019-09-04T02:43:04.304816Z info FLAG: --customConfigFile=""
2019-09-04T02:43:04.304824Z info FLAG: --datadogAgentAddress=""
2019-09-04T02:43:04.304831Z info FLAG: --disableInternalTelemetry="false"
2019-09-04T02:43:04.304836Z info FLAG: --discoveryAddress="istio-pilot.istio-system:15010"
2019-09-04T02:43:04.304849Z info FLAG: --dnsRefreshRate="300s"
2019-09-04T02:43:04.304863Z info FLAG: --domain="bpe.svc.cluster.local"
2019-09-04T02:43:04.304877Z info FLAG: --drainDuration="45s"
2019-09-04T02:43:04.304890Z info FLAG: --envoyMetricsServiceAddress=""
2019-09-04T02:43:04.304902Z info FLAG: --help="false"
2019-09-04T02:43:04.304912Z info FLAG: --id=""
2019-09-04T02:43:04.304925Z info FLAG: --ip=""
2019-09-04T02:43:04.304934Z info FLAG: --lightstepAccessToken=""
2019-09-04T02:43:04.304940Z info FLAG: --lightstepAddress=""
2019-09-04T02:43:04.304947Z info FLAG: --lightstepCacertPath=""
2019-09-04T02:43:04.304965Z info FLAG: --lightstepSecure="false"
2019-09-04T02:43:04.304972Z info FLAG: --log_as_json="false"
2019-09-04T02:43:04.304978Z info FLAG: --log_caller=""
2019-09-04T02:43:04.304982Z info FLAG: --log_output_level="default:info"
2019-09-04T02:43:04.304987Z info FLAG: --log_rotate=""
2019-09-04T02:43:04.304994Z info FLAG: --log_rotate_max_age="30"
2019-09-04T02:43:04.305005Z info FLAG: --log_rotate_max_backups="1000"
2019-09-04T02:43:04.305099Z info FLAG: --log_rotate_max_size="104857600"
2019-09-04T02:43:04.305118Z info FLAG: --log_stacktrace_level="default:none"
2019-09-04T02:43:04.305169Z info FLAG: --log_target="[stdout]"
2019-09-04T02:43:04.305246Z info FLAG: --mixerIdentity=""
2019-09-04T02:43:04.305260Z info FLAG: --parentShutdownDuration="1m0s"
2019-09-04T02:43:04.305269Z info FLAG: --pilotIdentity=""
2019-09-04T02:43:04.305280Z info FLAG: --proxyAdminPort="15000"
2019-09-04T02:43:04.305288Z info FLAG: --proxyComponentLogLevel="misc:error"
2019-09-04T02:43:04.305296Z info FLAG: --proxyLogLevel="warning"
2019-09-04T02:43:04.305304Z info FLAG: --serviceCluster="bpe-api.bpe"
2019-09-04T02:43:04.305312Z info FLAG: --serviceregistry="Kubernetes"
2019-09-04T02:43:04.305318Z info FLAG: --statsdUdpAddress=""
2019-09-04T02:43:04.305327Z info FLAG: --statusPort="15020"
2019-09-04T02:43:04.305335Z info FLAG: --templateFile=""
2019-09-04T02:43:04.305374Z info FLAG: --trust-domain=""
2019-09-04T02:43:04.305419Z info FLAG: --zipkinAddress="zipkin.istio-system:9411"
2019-09-04T02:43:04.305462Z info Version root@9ad856b9-c627-11e9-abca-26bcb80ec4e0-docker.io/istio-1.2.5-d9e231eda0e163d0f3df0103546c7a06b72cc48d-Clean
2019-09-04T02:43:04.305763Z info Obtained private IP [172.17.0.16]
2019-09-04T02:43:04.305855Z info Proxy role: &model.Proxy{ClusterID:"", Type:"sidecar", IPAddresses:[]string{"172.17.0.16", "172.17.0.16"}, ID:"bpe-api-1.0.0-76bfc77c69-x4rn7.bpe", Locality:(*core.Locality)(nil), DNSDomain:"bpe.svc.cluster.local", TrustDomain:"cluster.local", PilotIdentity:"", MixerIdentity:"", ConfigNamespace:"", Metadata:map[string]string{}, SidecarScope:(*model.SidecarScope)(nil), ServiceInstances:[]*model.ServiceInstance(nil), WorkloadLabels:model.LabelsCollection(nil)}
2019-09-04T02:43:04.305871Z info PilotSAN []string(nil)
2019-09-04T02:43:04.306556Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15010
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: bpe-api.bpe
statNameLength: 189
tracing:
zipkin:
address: zipkin.istio-system:9411
2019-09-04T02:43:04.306577Z info Monitored certs: []string{"/etc/certs/cert-chain.pem", "/etc/certs/key.pem", "/etc/certs/root-cert.pem"}
2019-09-04T02:43:04.306608Z info PilotSAN []string(nil)
2019-09-04T02:43:04.306845Z info Starting proxy agent
2019-09-04T02:43:04.306927Z info Opening status port 15020
2019-09-04T02:43:04.307662Z info watching /etc/certs for changes
2019-09-04T02:43:04.307684Z info Received new config, resetting budget
2019-09-04T02:43:04.307696Z info Reconciling retry (budget 10)
2019-09-04T02:43:04.307728Z info Epoch 0 starting
2019-09-04T02:43:04.318159Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster bpe-api.bpe --service-node sidecar~172.17.0.16~bpe-api-1.0.0-76bfc77c69-x4rn7.bpe~bpe.svc.cluster.local --max-obj-name-len 189 --local-address-ip-version v4 --allow-unknown-fields -l warning --component-log-level misc:error --concurrency 2]
[2019-09-04 02:43:04.375][18][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, no healthy upstream
[2019-09-04 02:43:04.375][18][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:49] Unable to establish new stream
2019-09-04T02:43:06.240469Z info Envoy proxy is NOT ready: 4 errors occurred:
* failed checking application ports. listeners="0.0.0.0:15090","10.111.24.13:15443","10.101.234.49:443","10.105.62.138:15443","10.97.114.40:14268","10.105.62.138:31400","10.105.62.138:15030","10.101.202.175:42422","10.97.114.40:14267","10.96.0.1:443","10.105.62.138:15020","10.105.62.138:443","10.105.62.138:15031","10.100.188.134:15011","10.111.24.13:443","10.105.62.138:15029","10.96.0.10:53","10.96.0.10:9153","10.106.149.191:16686","10.105.62.138:15032","10.110.36.10:443","0.0.0.0:80","0.0.0.0:8080","0.0.0.0:15004","0.0.0.0:15014","0.0.0.0:9901","0.0.0.0:8060","0.0.0.0:9091","0.0.0.0:20001","0.0.0.0:9411","0.0.0.0:3000","0.0.0.0:15010","0.0.0.0:9090","172.17.0.16:15020","0.0.0.0:15001"
* envoy missing listener for inbound application port: 8080
* envoy missing listener for inbound application port: 8778
* envoy missing listener for inbound application port: 9779
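If it helps, these are the commands I plan to use to check the readiness endpoint and the listeners the sidecar actually received (assuming curl is available in the proxyv2 image and istioctl 1.2.5 is on the PATH):
$ kubectl exec bpe-api-1.0.0-76bfc77c69-x4rn7 -n bpe -c istio-proxy -- curl -s -o /dev/null -w '%{http_code}\n' http://localhost:15020/healthz/ready
$ istioctl proxy-status
$ istioctl proxy-config listener bpe-api-1.0.0-76bfc77c69-x4rn7.bpe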
I also checked the istio-pilot pod. The result is:
Name: istio-pilot-76c567544f-h5r2p
Namespace: istio-system
Priority: 0
Node: minikube/192.168.64.59
Start Time: Tue, 03 Sep 2019 23:25:30 -0300
Labels: app=pilot
chart=pilot
heritage=Tiller
istio=pilot
pod-template-hash=76c567544f
release=istio
Annotations: sidecar.istio.io/inject: false
Status: Running
IP: 172.17.0.14
Controlled By: ReplicaSet/istio-pilot-76c567544f
Containers:
discovery:
Container ID: docker://bcaea2274daa85e79dc6f052a64c573d0933425a8aab68630f2022a1e2d49a01
Image: docker.io/istio/pilot:1.2.5
Image ID: docker-pullable://istio/pilot@sha256:9ec1ad7e99904e108c099dccc817c49858d1b39e0909e5e2a589fdc41506eec1
Ports: 8080/TCP, 15010/TCP
Host Ports: 0/TCP, 0/TCP
Args:
discovery
--monitoringAddr=:15014
--log_output_level=default:info
--domain
cluster.local
--secureGrpcAddr
--keepaliveMaxServerConnectionAge
30m
State: Running
Started: Tue, 03 Sep 2019 23:28:01 -0300
Ready: True
Restart Count: 0
Requests:
cpu: 10m
memory: 100Mi
Readiness: http-get http://:8080/ready delay=5s timeout=5s period=30s #success=1 #failure=3
Environment:
POD_NAME: istio-pilot-76c567544f-h5r2p (v1:metadata.name)
POD_NAMESPACE: istio-system (v1:metadata.namespace)
GODEBUG: gctrace=1
PILOT_PUSH_THROTTLE: 100
PILOT_TRACE_SAMPLING: 100
PILOT_DISABLE_XDS_MARSHALING_TO_ANY: 1
Mounts:
/etc/certs from istio-certs (ro)
/etc/istio/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from istio-pilot-service-account-token-fr6fg (ro)
istio-proxy:
Container ID: docker://68aed8ae4b35566a6838749bb2ff9261e4cb2efeb55575b41a742c736864142e
Image: docker.io/istio/proxyv2:1.2.5
Image ID: docker-pullable://istio/proxyv2@sha256:8f210c3d09beb6b8658a4255d9ac30e25549295834a44083ed67d652ad7453e4
Ports: 15003/TCP, 15005/TCP, 15007/TCP, 15011/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
proxy
--domain
$(POD_NAMESPACE).svc.cluster.local
--serviceCluster
istio-pilot
--templateFile
/etc/istio/proxy/envoy_pilot.yaml.tmpl
--controlPlaneAuthPolicy
NONE
State: Running
Started: Tue, 03 Sep 2019 23:28:02 -0300
Ready: True
Restart Count: 0
Limits:
cpu: 2
memory: 1Gi
Requests:
cpu: 10m
memory: 40Mi
Environment:
POD_NAME: istio-pilot-76c567544f-h5r2p (v1:metadata.name)
POD_NAMESPACE: istio-system (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
Mounts:
/etc/certs from istio-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from istio-pilot-service-account-token-fr6fg (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio
Optional: false
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.istio-pilot-service-account
Optional: true
istio-pilot-service-account-token-fr6fg:
Type: Secret (a volume populated by a Secret)
SecretName: istio-pilot-service-account-token-fr6fg
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 47m default-scheduler Successfully assigned istio-system/istio-pilot-76c567544f-h5r2p to minikube
Warning FailedMount 47m (x2 over 47m) kubelet, minikube MountVolume.SetUp failed for volume "istio-certs" : couldn't propagate object cache: timed out waiting for the condition
Warning FailedMount 47m (x3 over 47m) kubelet, minikube MountVolume.SetUp failed for volume "istio-pilot-service-account-token-fr6fg" : couldn't propagate object cache: timed out waiting for the condition
Normal Pulling 47m kubelet, minikube Pulling image "docker.io/istio/pilot:1.2.5"
Normal Pulled 45m kubelet, minikube Successfully pulled image "docker.io/istio/pilot:1.2.5"
Normal Created 45m kubelet, minikube Created container discovery
Normal Started 45m kubelet, minikube Started container discovery
Normal Pulled 45m kubelet, minikube Container image "docker.io/istio/proxyv2:1.2.5" already present on machine
Normal Created 45m kubelet, minikube Created container istio-proxy
Normal Started 45m kubelet, minikube Started container istio-proxy
Warning Unhealthy 42m (x4 over 44m) kubelet, minikube Readiness probe failed: Get http://172.17.0.14:8080/ready: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
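I can also pull the Pilot logs if that would help:
$ kubectl logs istio-pilot-76c567544f-h5r2p -n istio-system -c discovery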
I’m new to Istio. Does anyone have any idea how to fix this problem?
Thanks, all.