I have multiple Istio 1.5.1 deployments. The first three I deployed didn't have this problem. I did three more today using the same scripts as the others, and I'm seeing something very strange: Envoy appears to be routing traffic to a 46.19.x.x address in the new deployments. I'm not seeing this in the earlier ones. I noticed that when I deployed my pods, they were getting TCP connection errors when trying to connect to a Redis instance on the cluster (this service is not in the mesh). When I looked at the Envoy logs, they were connecting to a 46.19.x.x address. To demonstrate, I changed the port number to 80 and used curl; you can see the difference between the two clusters.
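(For reference, this is roughly how I pulled the sidecar logs that showed the 46.19.x.x upstream; the pod name here is just illustrative:)

```shell
# Tail the Envoy sidecar's logs for the app pod; the upstream address
# that Envoy actually connected to shows up in the access-log lines.
kubectl logs -n test httpbin-654c6cbbb9-s8nd4 -c istio-proxy --tail=50
```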
Broken cluster output
Notice the response is a 301 response from that 46.19.x.x address:
k exec -it httpbin-654c6cbbb9-s8nd4 -- curl -v redis-service.data.svc.cluster.local:80
Defaulting container name to httpbin.
Use 'kubectl describe pod/httpbin-654c6cbbb9-s8nd4 -n test' to see all of the containers in this pod.
* Rebuilt URL to: redis-service.data.svc.cluster.local:80/
* Trying 46.19.209.188...
* TCP_NODELAY set
* Connected to redis-service.data.svc.cluster.local (46.19.209.188) port 80 (#0)
> GET / HTTP/1.1
> Host: redis-service.data.svc.cluster.local
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 301 Moved Permanently
< server: -
< date: Fri, 01 May 2020 00:35:59 GMT
< content-type: text/html
< content-length: 178
< location: https://antifraud.didww.com/
< x-envoy-upstream-service-time: 88
<
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host redis-service.data.svc.cluster.local left intact
Working cluster output
This response is from the earlier clusters that work. They return the expected 503 (since nothing is listening on port 80 of this service), and curl connects to a 172.21.x.x cluster IP, as expected.
kubectl --cluster $staging_dal -n test exec -it httpbin-654c6cbbb9-whsqk -- curl -v redis-service.data.svc.cluster.local:80
Defaulting container name to httpbin.
Use 'kubectl describe pod/httpbin-654c6cbbb9-whsqk -n test' to see all of the containers in this pod.
* Rebuilt URL to: redis-service.data.svc.cluster.local:80/
* Trying 172.21.79.99...
* TCP_NODELAY set
* Connected to redis-service.data.svc.cluster.local (172.21.79.99) port 80 (#0)
> GET / HTTP/1.1
> Host: redis-service.data.svc.cluster.local
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 91
< content-type: text/plain
< date: Fri, 01 May 2020 00:38:35 GMT
< server: envoy
<
* Connection #0 to host redis-service.data.svc.cluster.local left intact
upstream connect error or disconnect/reset before headers. reset reason: connection failure
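To dig further, I also dumped what Envoy thinks the upstream for this service is on each side. Sketch of the commands (pod names are the ones from the curl examples above; run against each cluster's context):

```shell
# Which endpoints does the sidecar have for the Redis service?
istioctl proxy-config endpoints httpbin-654c6cbbb9-s8nd4 -n test \
  | grep redis-service

# And the Envoy cluster definition it routes the FQDN to
istioctl proxy-config cluster httpbin-654c6cbbb9-s8nd4 -n test \
  --fqdn redis-service.data.svc.cluster.local
```

On the working cluster I'd expect the endpoint to be the 172.21.x.x service address; on the broken one, whatever resolves to 46.19.x.x should show up here.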
I did a diff between the istioctl manifests, and the only difference between the clusters is the ingress IP.
Can someone help me understand what's going on here? It's kind of freaking me out that something is routing my traffic to an external server.
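For completeness, here are the other checks I'm planning to run, in case the answer is some config I'm not seeing (command sketch; names are from my setup, and the in-pod DNS check assumes the container image ships nslookup):

```shell
# Is there a ServiceEntry anywhere that could claim this host
# or an IP range covering 46.19.x.x?
kubectl get serviceentries --all-namespaces

# What does the service name actually resolve to from inside the pod?
kubectl exec -n test httpbin-654c6cbbb9-s8nd4 -c httpbin -- \
  nslookup redis-service.data.svc.cluster.local
```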