Egress gateway with LoadBalancer IP to set traffic originator IP?

Our K8S nodes are in a private IP space, with only the ingressgateway able to route traffic, as it has a LoadBalancer IP attached.

Can we do the same thing with the egressgateway, in order to originate our traffic from another LoadBalancer with a routable IP address attached?

Can someone please explain what the NAT rules used inside the egress gateway will do?

It seems, based on the docs (https://istio.io/latest/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations), that the IP will be changed to one belonging to the egress gateway Pod, but information in bug reports (https://github.com/istio/istio/issues/7724) says otherwise.


Additionally, the cluster administrator or the cloud provider can configure the network to ensure application nodes can only access the Internet via a gateway. To do this, the cluster administrator or the cloud provider can prevent the allocation of public IPs to pods other than gateways and can configure NAT devices to drop packets not originating at the egress gateways.

Hello,

With ingress, you have the following schema:

User --------> load balancer ------ [[ kubernetes cluster ]] ------> istio ingress pod ----> application pod

The load balancer is public (or at least private, but on a network users are able to access).
Kubernetes is fully private (users cannot access nodes/pods/services).

The istio ingress pod receives requests only from the load balancer IP.
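For reference, that load balancer is typically just a Kubernetes Service of type LoadBalancer pointing at the ingress gateway pods. A simplified sketch (the names and ports follow the default istio-system install, trimmed down here):

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer          # the cloud provider attaches a routable IP here
  selector:
    istio: ingressgateway     # default label on the ingress gateway pods
  ports:
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443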

With the egress gateway, the schema may vary. For instance, say you have a private k8s cluster with pods on a non-routable network:

application pod -------> egress pod ---------> node --------> cloud network gateway ----> internet

Private k8s with pods on a routable network:

application pod -------> egress pod -------> cloud network gateway ----> internet

Public k8s:

application pod -------> egress pod -------> node ----> internet

The “cloud network gateway” can be almost anything: a firewall (NAT), a server with a corporate proxy (transparent proxy), … It is a component outside your cluster that is able to accept your packets and do something with them (deny them, NAT them, inspect them, …).
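Note that in every variant, the “application pod -> egress pod” hop does not happen by itself: you have to tell Istio to route the external host through the gateway. A minimal sketch for plain HTTP, following the pattern of the egress-gateway task linked above (example.com is a placeholder host, and the networking API version depends on your Istio release):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: example-com
spec:
  hosts:
  - example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-example
spec:
  selector:
    istio: egressgateway      # default label on the egress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-via-egress
spec:
  hosts:
  - example.com
  gateways:
  - mesh                      # sidecar traffic inside the mesh
  - egress-example
  http:
  - match:                    # first hop: sidecar -> egress gateway
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:                    # second hop: egress gateway -> external host
    - gateways:
      - egress-example
      port: 80
    route:
    - destination:
        host: example.com
        port:
          number: 80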

If you access an internet website, the website will see the IP of this cloud gateway.

If you want to access a resource on the internal network, then you will have:

application pod -------> egress pod -------> your resource LB

At both “your resource LB” and the “cloud network gateway” you may want to filter on the IP of the egress pod.
Well, k8s does not give pods sticky IPs… it seems to be possible with Calico… but I am not sure that configuration is available on managed k8s.

An option may be to use network policies and add rules that prevent pods from reaching the outside except via the egress gateway (see the sketch below). It can be tricky, as k8s network policies are namespaced, but again, there is a Calico extension to create a global policy.
If so, you will need to make the “resource LB” accept the full pod network CIDR, knowing that only the egress gateway can reach it.
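A rough sketch of such a policy with plain k8s NetworkPolicy (the app namespace, and the labels used to select istio-system and the gateway pods, are assumptions; with Calico you would express the same thing once as a GlobalNetworkPolicy):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-only-via-gateway
  namespace: app              # hypothetical application namespace
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes:
  - Egress                    # any egress not matched below is dropped
  egress:
  - to:
    - podSelector: {}         # keep pod-to-pod traffic within the namespace
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: istio-system
      podSelector:
        matchLabels:
          istio: egressgateway   # the egress gateway pods
  - ports:                    # allow DNS to anywhere, or nothing resolves
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP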


@Gregoire - thanks for replying.

So, if I read this correctly, the source IP will always be the IP address of the node where the egress-gateway is running? FYI, we are using Flannel and not Calico.

We are investigating working around the limitation with https://github.com/nirmata/kube-static-egress-ip

Unfortunately, whilst the project works, it is still very young and may not be suitable for production.

From Istio I am guessing that the approved route is to deploy a Node with a public IP address and place the egress-gateway Pod on that Node?

The source IP may be the pod IP if you have a routable subnetwork for your pods (on GKE this is called a VPC-native cluster: https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips ).

Yes, I forgot the possibility of assigning a specific routable IP to the node itself to have an outside route.

(I’m not working for Istio, it is only my humble opinion here) but what you describe, a cluster with multiple nodes on an internal-only network plus another node with a public network, seems like a good use case.
You set up taints/tolerations so that no pod but the egress gateway pod can be scheduled on the public node. For example:
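First label and taint the public node, e.g. kubectl label nodes public-node-1 network=public and kubectl taint nodes public-node-1 network=public:NoSchedule (public-node-1 and the network=public key are made-up names). Then, assuming a recent Istio installed via the operator, pin only the egress gateway onto that node, roughly:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        nodeSelector:
          network: public     # schedule only on the labelled public node
        tolerations:
        - key: network        # tolerate the taint that keeps other pods off it
          operator: Equal
          value: public
          effect: NoSchedule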