Best way to route a public service (DNS) internally?

We have a few services that for various reasons “need” to use public DNS names for our services… mostly code that is in some way “customer” code. api.mycompany.com and www.mycompany.com, for example.

As our Istio 1.8.1 setup is currently configured, that traffic goes out through our NAT to the internet, back in through our Global Accelerator, then to our ALB/NLB, and finally back to our Istio ingress gateway. That costs us a non-trivial amount in various AWS traffic charges, on top of adding unnecessary latency and points of failure.

The closest I’ve gotten is by adding the “mesh” gateway to our VirtualService entries, but that just adds a public name with a private port to the routing table of other services, something like “api.mycompany.com:8080.” Of course, our services address it as “https://api.mycompany.com:443” and thus, no route match.
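For illustration, the kind of VirtualService I mean looks roughly like this (the gateway name and backing service are hypothetical, not our actual config):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api
spec:
  hosts:
    - api.mycompany.com
  gateways:
    - mesh                         # added so sidecars get the route too
    - istio-system/public-gateway  # hypothetical ingress Gateway
  http:
    - route:
        - destination:
            host: api.default.svc.cluster.local  # hypothetical backing service
            port:
              number: 8080  # the private port that ends up in the mesh route
```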

Is there a simple setting that would make the same routes that appear in the ingress gateway appear in the sidecar proxies and short-circuit all this? Even if the traffic were routed directly to an ingress gateway instead of the underlying services, that would be a big win.

This is all one kube (EKS) cluster, one Istio cluster. Nothing special.

Thanks.

I think ServiceEntries should work for your use case. You can also leverage Istio’s new DNS proxy. I wrote up some examples here: Trying out Istio’s DNS Proxy – The New Stack

We actually do something similar to support multi-cluster applications exposed externally via public DNS. Some of it is explained in the second half of this presentation: ServiceMeshCon North America 2020: Multi(Control Plane/Network/Mesh)??: A P...

Thanks, I’ll review those. My first try at a ServiceEntry for these a few days ago wasn’t effective, but I’ll keep plugging away at it.

Thanks for the awesome writeup, @nick_tetrate! It was really helpful.
The only place I’m struggling with is actually making it work. Specifically, this part gives me pause:

Just create a ServiceEntry for api.tetrate.io with the Istio ingress-gateway IP address and now your client applications can route internally on the same host!

How would such a ServiceEntry look? Would it be considered MESH_INTERNAL or MESH_EXTERNAL? And why use an IP instead of the underlying LB DNS (I’m running on AWS)?

Huge thanks :pray:

P.S.
And what would prevent the third-party DNS resolution from taking place? Meaning, right now my services communicate by going through Route53 DNS records, which point to the other cluster’s ingress gateway. If I understand the east-west traffic config in the Istio multicluster tutorials correctly, it is intended for inter-cluster communication. Where does the DNS proxy come into this?

This is what I came up with so far:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-redirect
spec:
  hosts:
    - myservice.mycompany.com # public hostname to intercept in-mesh
  location: MESH_INTERNAL
  ports:
    - name: https
      number: 443
      protocol: TLS
  resolution: DNS
  endpoints:
    - address: XXX.us-east-1.elb.amazonaws.com # The Ingress Gateway LB

It actually works, but I wonder whether that’s the correct usage.


Yes, that is the correct usage. What happens is that DNS queries are hijacked by Istio’s DNS proxy; if it finds a matching ServiceEntry for host myservice.mycompany.com, it returns a virtual IP address like 1.1.1.1 and prevents any further DNS lookups (like the third-party Route53 lookup). When the request passes through the outbound Envoy sidecar, Envoy sees the 1.1.1.1 and knows to replace/resolve it to the endpoints listed in your ServiceEntry.
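For reference, the DNS proxy is opt-in in Istio 1.8. A minimal sketch of enabling it mesh-wide, assuming you install via an IstioOperator overlay:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Enable the sidecar DNS proxy so ServiceEntry hosts
        # resolve inside the mesh without external lookups
        ISTIO_META_DNS_CAPTURE: "true"
```

It can also be enabled per-workload with the same `proxyMetadata` key in a `proxy.istio.io/config` pod annotation.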


Endpoints set to the ELB address is exactly what I’m trying to avoid. I made an [ineffective] attempt using a workloadSelector to hit the ingress gateway. I’ll return to this problem this week.

edit: it just occurred to me that the solution is already there: if I replace the ELB DNS name with the internal service DNS name, I probably get what I want without the workloadSelector.
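In other words, something like this, assuming the ingress gateway’s in-cluster Service is `istio-ingressgateway` in `istio-system` (a sketch, not yet tested):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: svc-redirect
spec:
  hosts:
    - myservice.mycompany.com
  location: MESH_INTERNAL
  ports:
    - name: https
      number: 443
      protocol: TLS
  resolution: DNS
  endpoints:
    # In-cluster DNS name of the ingress gateway Service,
    # instead of the external ELB address
    - address: istio-ingressgateway.istio-system.svc.cluster.local
```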

As far as I can tell, my approximation of the above ServiceEntry is doing what I want for HTTP traffic. 443/HTTPS traffic fails with some very low-level TLS issue from the client’s perspective, presumably some combination of the auto-magic mTLS and the “SIMPLE” TLS rules. Traffic never makes it to the ingress gateway; the local sidecar proxy records a failure.
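One thing I’m experimenting with for the TLS failure is pinning the client-side TLS behavior for that host with a DestinationRule, so automatic mTLS doesn’t fight the gateway’s public server certificate. A hypothetical sketch, not verified:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myservice-tls
spec:
  host: myservice.mycompany.com
  trafficPolicy:
    tls:
      # Originate plain one-way TLS toward the gateway's
      # public certificate, rather than Istio mutual TLS
      mode: SIMPLE
```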

Thanks Nick.

That diagram IS EXACTLY what I’m looking for. The struggle is finding the right combination of options to achieve it. I don’t suppose you have an example repo from that talk?

We are planning to do a deep dive on our multi-cluster setup at IstioCon in Feb. We wrote a multi-cluster controller to set these options up because there is a lot of configuration.


Good to know I’m not missing something obvious. This seems like a really basic use case, but I’m feeling like I have a fundamental misunderstanding.