Namespace-specific external service

Hi all,

I’m attempting to streamline the migration of some legacy applications into Istio by creating ServiceEntries and namespace-scoped VirtualServices pointing at different legacy environments. Specifically, I’d like curl http://auth, when run from a container in the dev namespace, to go to auth.dev-legacy-system.com, while the same command run from a container in the test namespace goes to auth.test-legacy-system.com. My understanding is that this can be done with a combination of ServiceEntries and VirtualServices. The configuration I’ve got now is as follows:

Two ServiceEntries in the default namespace, implicitly exported to all namespaces (exportTo: "*"):

---
kind: ServiceEntry
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: legacy-auth-dev
  namespace: default
spec:
  hosts:
  - auth.dev-legacy-system.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS
---
kind: ServiceEntry
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: legacy-auth-test
  namespace: default
spec:
  hosts:
  - auth.test-legacy-system.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS

A VirtualService in the dev namespace, explicitly exported only to its own namespace (exportTo: "."):

---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: legacy-auth-dev
  namespace: dev
spec:
  hosts:
    - auth
  exportTo:
    - "."
  http:
  - timeout: 10s
    route:
      - destination:
          host: auth.dev-legacy-system.com
        weight: 100

A VirtualService in the test namespace, explicitly exported only to its own namespace (exportTo: "."):

---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: legacy-auth-test
  namespace: test
spec:
  hosts:
    - auth
  exportTo:
    - "."
  http:
  - timeout: 10s
    route:
      - destination:
          host: auth.test-legacy-system.com
        weight: 100

After creating these and waiting a few minutes, I run the curl from a container inside the dev namespace:

$ curl -v http://auth
* Rebuilt URL to: http://auth/
* Could not resolve host: auth
* Closing connection 0
curl: (6) Could not resolve host: auth

Same result when run from a container in the test namespace. However, if I use the full address and curl http://auth.dev-legacy-system.com it works fine. Running kubectl describe on the relevant resources shows that everything is in place.

Am I missing something obvious, or is this simply the wrong approach? It may not be relevant, but I’m running this on GKE version 1.13.6-gke.5, which appears to ship Istio 1.1.3.

And on a related note, if everything were working, what would happen to the Host header, particularly in the case of an HTTPS connection? Would it show the new “short name” and thus potentially break any routing on the external service’s side that uses the Host header to dispatch requests?
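(Speculating on my own question: for plain HTTP, I assume the Host header could be pinned to the real name with the rewrite field on the VirtualService route, something like the variant of my dev VirtualService below. I haven’t tested this, so treat it as a sketch:)

---
kind: VirtualService
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: legacy-auth-dev
  namespace: dev
spec:
  hosts:
    - auth
  exportTo:
    - "."
  http:
  - timeout: 10s
    # Rewrite the Host/authority header so the legacy
    # system sees its own name rather than “auth”
    rewrite:
      authority: auth.dev-legacy-system.com
    route:
      - destination:
          host: auth.dev-legacy-system.com
        weight: 100

Presumably this wouldn’t help for HTTPS, since the sidecar can’t modify headers inside an encrypted connection it is merely passing through.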

Thanks!

When you curl http://auth, the client first tries to resolve the IP address of “auth” over DNS, and that record probably doesn’t exist.

I agree - my assumption was that creating a VirtualService called auth would create the necessary DNS record, but that doesn’t appear to be the case. I’ve also tried flipping things around and creating the ServiceEntries in the namespaces, but that doesn’t appear to have worked either.

Istio doesn’t currently interact with DNS at all. If you want curl http://auth to work, in general you’ll need to configure your platform DNS to respond to that name. After that, Istio can intercept and redirect HTTP requests as you have configured.

If you are in Kubernetes, you might try creating a “dummy” Service with the name you want in each namespace, which will then create DNS entries that respond to that name.
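For example (a sketch based on your naming, untested), an ExternalName Service in the dev namespace would give “auth” a DNS record there:

---
kind: Service
apiVersion: v1
metadata:
  # Name matches the short host you want to curl
  name: auth
  namespace: dev
spec:
  # ExternalName creates a CNAME-style DNS record
  # rather than a cluster IP
  type: ExternalName
  externalName: auth.dev-legacy-system.com

With that in place, “auth” resolves inside the dev namespace, and the sidecar gets a chance to intercept and route the request.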

Ahh I see. I am in Kubernetes, so I’ve created Services of type ExternalName in each of the namespaces that need to access the legacy systems, and this does create a DNS record that the containers can resolve. The issue now is that the local istio-proxy sidecars return a 404 for every request to it. It’s the same behavior one sees when attempting to reach a mesh-external host that has no defined ServiceEntry, except in this case the ServiceEntries exist…

$ curl -v auth
* Rebuilt URL to: auth/
*   Trying x.x.x.x...
* TCP_NODELAY set
* Connected to auth (x.x.x.x) port 80 (#0)
> GET / HTTP/1.1
> Host: auth
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 404 Not Found
< date: Wed, 12 Jun 2019 17:15:33 GMT
< server: envoy
< content-length: 0
< 
* Curl_http_done: called premature == 0
* Connection #0 to host auth left intact

If this were working as intended, I’d expect a 302 response with a server header of nginx.

Any further tips? Thanks!

I’ll also note that I seem to get the same result if the Service I create is NOT of type ExternalName. In that case the IP that resolves is within the cluster’s IP space, but the result from Envoy is the same (404 Not Found).

OK, that’s some progress: your curl traffic is now reaching Envoy.

Looking at your ServiceEntries, I think you’re missing the resolution field, which should be set to DNS if you want Envoy to forward traffic to those services by looking up their names in DNS.
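For example, here is your dev ServiceEntry with that one field added (everything else unchanged):

---
kind: ServiceEntry
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: legacy-auth-dev
  namespace: default
spec:
  hosts:
  - auth.dev-legacy-system.com
  location: MESH_EXTERNAL
  # Tell Envoy to resolve the host’s address via DNS
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
  - number: 443
    name: https
    protocol: HTTPS

Without it, the ServiceEntry defaults to resolution: NONE, so Envoy never looks up an address for the host itself, which I believe explains the behavior you’re seeing.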

Getting closer – I’m now optimistic that this can be made to work. I’ve implemented your suggestion and reworked the YAML a bit to make it easier to copy and paste for testing. Using the following:

---
kind: Service
apiVersion: v1
metadata:
  name: tmp-search
  namespace: dev
spec:
  type: ExternalName
  externalName: www.google.com
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
---
kind: Service
apiVersion: v1
metadata:
  name: tmp-search
  namespace: test
spec:
  type: ExternalName
  externalName: httpbin.org
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
---
kind: ServiceEntry
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: tmp-search-dev
  namespace: default
spec:
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP
---
kind: ServiceEntry
apiVersion: networking.istio.io/v1alpha3
metadata:
  name: tmp-search-test
  namespace: default
spec:
  hosts:
  - httpbin.org
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
  - number: 80
    name: http
    protocol: HTTP

When curling http://tmp-search from a container in dev, I now get a proper response back from Google. But in test I’m getting something different:

$ curl -v tmp-search
* Rebuilt URL to: tmp-search/
*   Trying 52.0.57.170...
* TCP_NODELAY set
* Connected to tmp-search (52.0.57.170) port 80 (#0)
> GET / HTTP/1.1
> Host: tmp-search
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< date: Thu, 13 Jun 2019 18:46:23 GMT
< server: envoy
< content-length: 0
< 
* Curl_http_done: called premature == 0
* Connection #0 to host tmp-search left intact


$ curl -v httpbin.org
* Rebuilt URL to: httpbin.org/
*   Trying 34.230.136.58...
* TCP_NODELAY set
* Connected to httpbin.org (34.230.136.58) port 80 (#0)
> GET / HTTP/1.1
> Host: httpbin.org
> User-Agent: curl/7.52.1
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< date: Thu, 13 Jun 2019 18:46:55 GMT
< server: envoy
< content-length: 0
< 
* Curl_http_done: called premature == 0
* Connection #0 to host httpbin.org left intact

It seems like I must have a typo somewhere, because I can’t fathom why one would work and the other would not; it’s pretty simple YAML and I’m not seeing anything. Doubly strange is that the second curl above, which goes straight to httpbin.org, is also failing…