Did you already try the debug steps here: https://istio.io/docs/tasks/traffic-management/ingress/secure-ingress-sds/#troubleshooting
Could you share any corresponding VirtualService and Gateway rules for the service in question?
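Purely as a sketch (the names, namespace, and hostname are placeholders, loosely based on the httpbin example that shows up in the output below), a Gateway/VirtualService pair that lines up correctly would look roughly like this; the hosts and port need to agree between the two, and the gateways field has to reference the Gateway by name:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway   # must match the labels on the ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"   # must be covered by the Gateway's hosts
  gateways:
  - httpbin-gateway         # must reference the Gateway above by name
  http:
  - match:
    - uri:
        prefix: /status
    route:
    - destination:
        host: httpbin
        port:
          number: 8000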
Otherwise, here are some steps for debugging. First, use istioctl to check the config status of the Istio ingress gateway:
$ istioctl proxy-status istio-ingressgateway-5586f47659-r64lb.istio-system
Clusters Match
Listeners Match
Routes Match (RDS last loaded at Wed, 19 Jun 2019 09:26:07 CDT)
If anything is not synced, try restarting the ingress gateway pod; it may have missed an update along the way.
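A simple way to do that (assuming the gateway is managed by the usual istio-ingressgateway Deployment, so a replacement pod is created automatically) is to delete the pod and let it come back up:
$ kubectl delete pod -n istio-system istio-ingressgateway-5586f47659-r64lb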
Next, if RDS looked good, you can check the access logs. First, see whether access logging is already enabled:
$ kubectl get configmap istio -n istio-system -o yaml | grep "accessLogFile: "
disable access log.\naccessLogFile: \"/dev/stdout\"\n\n# If accessLogEncoding
Or you can enable access logs by regenerating the Istio configmap with helm template and replacing it via kubectl (if you specified a particular profile at install time, or added any other --set params to your installation, please use those same values in the command below):
$ helm template install/kubernetes/helm/istio --namespace=istio-system -x templates/configmap.yaml --set global.proxy.accessLogFile="/dev/stdout" | kubectl replace -f -
Once access logs are enabled, you can try your request a few more times and then check the logs on the ingress gateway:
$ kubectl logs -n istio-system istio-ingressgateway-5586f47659-r64lb | grep -v deprecated
...
[2019-06-19T14:10:48.660Z] "HEAD /status/200 HTTP/1.1" 200 - "-" "-" 0 0 56 24 "10.94.221.130" "curl/7.54.0" "9cfb9139-77bd-9567-bac7-205ddc2e01a5" "httpbin.example.com" "172.30.56.234:80" outbound|8000||httpbin.default.svc.cluster.local - 172.30.100.180:80 10.94.221.130:61632 -
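For reference, a log line like the one above would come from a request along these lines (INGRESS_HOST and INGRESS_PORT are placeholders for however you reach your ingress gateway, e.g. its LoadBalancer address and port 80):
$ curl -I -H "Host: httpbin.example.com" http://$INGRESS_HOST:$INGRESS_PORT/status/200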
The Istio default access log format is defined here, and if you need more detail on what those fields mean, you can refer directly to the Envoy docs here.
If you had gotten a connection refused error on the request, it would imply a problem with your Gateway and/or the corresponding Envoy listener for that port. 503s typically mean a problem with EDS (endpoints) or CDS (clusters). Since you got a 404, that implies a problem with your route configuration and/or the VirtualService for the app in question. You can do a quick check using istioctl to see whether any routes are defined for the port you exposed in your Gateway rule:
$ istioctl proxy-config routes <istio-ingressgateway-pod-name>.istio-system
NOTE: This output only contains routes loaded via RDS.
NAME           VIRTUAL HOSTS
http.80        1
               1
If you do not have any routes, perhaps the ingress gateway needs to be restarted because it missed a config update, or maybe the gateway selector in your rules is pointing at the wrong ingress gateway; a quick way to check the selector is shown below.
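For example (the Gateway name here is just a placeholder for whatever yours is called), you can compare the Gateway's selector against the labels on the ingress gateway pod:
$ kubectl get gateway httpbin-gateway -o yaml | grep -A 2 "selector:"
$ kubectl get pod -n istio-system istio-ingressgateway-5586f47659-r64lb --show-labels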
Assuming the istioctl command shows that there are some routes present on the ingress gateway, you can get the full route config:
$ istioctl proxy-config routes <istio-ingressgateway-pod-name>.istio-system -o json
[
    {
        "name": "http.80",
        "virtualHosts": [
            {
                "name": "httpbin.example.com:80",
                "domains": [
                    "httpbin.example.com",
                    "httpbin.example.com:80"
                ],
                "routes": [
                    {
                        "match": {
                            "prefix": "/status",
                            "caseSensitive": true
                        },
                        "route": {
                            "cluster": "outbound|8000||httpbin.default.svc.cluster.local",
                            ...
                        ...
                        "metadata": {
                            "filterMetadata": {
                                "istio": {
                                    "config": "/apis/networking/v1alpha3/namespaces/default/virtual-service/httpbin"
                                }
                            }
                        },
If you have a lot of rules for other services defined, you may want to direct the output to a file to make it easier to search. You can search for the route name reported by the original istioctl command above and look at the virtualHosts for the app in question. Check the domains field to verify that the hostnames match the ones in your Gateway/VirtualService. Also check that the cluster matches the %UPSTREAM_CLUSTER% name in the ingress gateway access log for the request. You can even see in the filterMetadata which config/rule file produced the generated output.
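If you do dump the config to a file, a tool like jq can make it easier to pull out just the pieces mentioned above (assuming you have jq installed; the field names follow the JSON output shown earlier):
$ istioctl proxy-config routes <istio-ingressgateway-pod-name>.istio-system -o json > gw-routes.json
$ jq '.[] | select(.name == "http.80") | .virtualHosts[] | {name, domains}' gw-routes.json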
If things are looking good for the ingress gateway, you can repeat the same steps for the app in question. It is possible the 404 was returned directly by the app and not by the ingress gateway.
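For example, the equivalent checks against the app's sidecar would look something like this (the pod name and namespace are placeholders for wherever your app actually runs):
$ istioctl proxy-status <httpbin-pod-name>.default
$ istioctl proxy-config routes <httpbin-pod-name>.default -o json
$ kubectl logs <httpbin-pod-name> -c istio-proxy | grep -v deprecated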
Hope that helps
-Greg