Istio routing compared to Kubernetes Ingress

I have been using Kubernetes for a couple of years, during which time I have used the Ingress mechanism, with the nginx IngressController, to route traffic to workloads in my cluster. I illustrate that at the top of the diagram below:

As shown, I route all traffic on ports 80/443 to the IngressController. I then use Ingress resources (namespace-specific) to route by hostname to the desired Service. The Service resource takes it the ‘last mile’, so to speak, to an appropriate Pod.
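
For reference, here is a rough sketch of that Ingress setup; the hostname, Service name, namespace, and port are made-up placeholders:

```yaml
# Hypothetical sketch of the nginx Ingress setup described above.
# Hostname, Service name, namespace, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  namespace: myapp                 # the Ingress lives alongside the app
spec:
  ingressClassName: nginx          # handled by the nginx IngressController
  rules:
  - host: myapp.example.com        # route by hostname...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp            # ...to a Service resource, referenced by name
            port:
              number: 8080
```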

From what I can tell, the lower part of the diagram above shows how Istio works and how it correlates with the Ingress approach.

I guess my broad question is whether I have the correct understanding. Some more specific questions/observations:

  1. Does the Istio Ingress Gateway Pod, which I can see is based on Envoy, reconfigure itself based on Gateway resources in a similar way to how the nginx IngressController reconfigures itself based on new Ingress resources?
  2. The Ingress resource includes Service, Port, Path and other details that appear to be split between the Istio Gateway and VirtualService. Is this the right way of thinking about it?
  3. DestinationRules appear to work with the VirtualService to route to a specific Pod (e.g. by version label), but I’m not sure how the VirtualService manages the routing.

That last bullet is a bit of a puzzle, since I can’t figure out how VirtualService routes. Is there another proxy somewhere that I’m not seeing?


1 and 2 are correct. 3, not exactly. A VirtualService defines the routing to a target service, and a DestinationRule then configures the load balancer details at the destination of the route. Note that even though in the simple case they are often the same, the destination of a VirtualService route can be a completely different host than the VirtualService host. The host of the DestinationRule will be matched against the destination host of a route, not the VirtualService’s host.
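
To make the split in 2 concrete, here is a rough sketch (hosts, names, and ports are made up): the host/port half of an Ingress rule maps to the Gateway, the path/backend half maps to the VirtualService route, and a DestinationRule’s host would be matched against the route’s destination host:

```yaml
# Rough sketch only; hosts and names are made up.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: myapp-gateway
spec:
  selector:
    istio: ingressgateway            # binds to the Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "myapp.example.com"            # the host/port half of an Ingress rule
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - myapp-gateway
  http:
  - match:
    - uri:
        prefix: /                    # the path half of an Ingress rule
    route:
    - destination:
        # This destination host can differ from the VirtualService host above;
        # a DestinationRule's host is matched against this value.
        host: myapp.default.svc.cluster.local
        port:
          number: 8080
```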

Btw, great diagram. It would be great in a blog or somewhere in the Istio docs.


@frankbu Thanks for the answer. Can you confirm the following observations?

It appears that DNS-resolvable host names are used instead of resource references. For example, in an Ingress, I provide the name of a Service resource. In the Gateway, the hosts list appears to expect a resolvable hostname. The same appears to be true in the VirtualService, which under route: destination: host: expects a resolvable hostname rather than a Service resource name.

That’s correct; all the matching is done by FQDN hosts, optionally with wildcards.
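
So, as an illustrative snippet (the names are made up), the matching side can use a wildcard, while the destination is the fully qualified name of the backing Service rather than a reference to the Service object:

```yaml
# Illustrative only: hosts are DNS names (optionally wildcarded),
# not references to Kubernetes objects.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - "*.example.com"                              # wildcard match on the request host
  gateways:
  - myapp-gateway
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster.local  # FQDN of the Service, not a resource reference
```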


When you say:

DestinationRule configures the load balancer details at the destination of the route

Is the load balancer the Envoy sidecar? That is to say, the DestinationRule defines the Envoy configuration for the sidecar at the destination…?

That’s right. Let’s say your VirtualService routes to some destination host that has multiple pods. The sidecar Envoy then sends requests to the pods according to the DestinationRule config (e.g., ROUND_ROBIN load balancing across the pods).
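
A minimal sketch of such a DestinationRule (the host is a placeholder):

```yaml
# Minimal sketch: the sidecar Envoy load balances across the pods behind
# this host using a round-robin policy.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp
spec:
  host: myapp.default.svc.cluster.local   # matched against route destinations
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
```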


I’m getting closer. Here’s my latest understanding, including in which namespaces it seems the Resources should be created:

I am still confused about the bottom right section of the diagram. The Service in the bottom right seems unnecessary. Why wouldn’t the Istio Ingress Gateway Pod (Envoy) send traffic directly to the Envoy sidecar running in the Pod?

If the Istio Ingress Gateway Pod (Envoy) sends traffic to the Service, how would that traffic go from the Service to the Envoy sidecar?

Finally, if the Envoy sidecar gets the traffic, it seems from your previous comments that it may or may not proxy that traffic to the app container running beside it. That sidecar Envoy may instead proxy to a different Pod (presumably to the Envoy sidecar in that Pod), to be served by the app container running there. Is that right?

@frankbu I wonder if you can respond to the last questions above. Your responses so far have been very helpful.

Envoy sends traffic directly to the target pods. The Service in the bottom corner is only used to locate the pods (i.e., via the Service selector) that Envoy is configured to call. In other words, Envoy does its own load balancing; kube-dns is not used. The Kube Service is just for service discovery.

The Envoy sidecars intercept both incoming and outgoing requests. The incoming ones are always forwarded to the app container (after doing any required policy and/or logging). The outgoing calls are the ones controlled by VirtualServices and DestinationRules, and the sidecar load balances those calls across the target pods.
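
In other words, the Service in that corner can be a perfectly ordinary selector-based Service (a sketch with placeholder names below); its selector identifies the pods, and Envoy sends traffic to those pod IPs directly:

```yaml
# Plain Kubernetes Service, placeholder names. In the mesh it mainly provides
# service discovery: its selector identifies the pods that Envoy will
# load balance across directly.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp            # locates the pods; Envoy sends traffic to their IPs
  ports:
  - port: 8080
    targetPort: 8080
```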

And btw, the Gateway box in your diagram should not be green … it is normally defined in an app namespace, like an Ingress.


Oh, I just realized the possible source of confusion. The ingress gateway Envoy in your diagram makes outgoing calls just like any other Envoy sidecar (i.e., internal mesh-to-mesh calls). The VirtualService is used to figure out what destination service is to be called, the Kube Service is used to identify the corresponding pods, and the DestinationRule is used to determine the load-balancing details for the call. All three config resources ultimately contribute to a generated Envoy config that will be executed by the ingress gateway Envoy (and/or any other applicable outgoing sidecar Envoys). Hope this helps.
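
Putting the three together, a rough sketch of version-based routing (the service name and version labels are placeholders): the VirtualService picks the destination host and subset, the Kube Service behind that host locates the pods, and the DestinationRule defines the subsets Envoy can load balance across:

```yaml
# Rough end-to-end sketch; the service name and version labels are placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - "reviews.example.com"
  gateways:
  - myapp-gateway
  http:
  - route:
    - destination:
        host: reviews.default.svc.cluster.local  # the Kube Service that locates the pods
        subset: v2                               # defined in the DestinationRule below
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews.default.svc.cluster.local
  subsets:
  - name: v2
    labels:
      version: v2          # Envoy load balances only across pods carrying this label
```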


This is what I ended up with: https://software.danielwatrous.com/istio-ingress-vs-kubernetes-ingress/

Thanks again for your help.