I’m running Istio 1.1.3 on GKE, where the default setup exposes the ingress gateway to all traffic. However, I want to restrict connections and only allow ingress traffic from a certain IP block.
When I apply the network policy below to my ingress gateway, it doesn’t seem to have any effect. Here are my configs:
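(The manifest below is a simplified sketch of what I’m applying; the CIDR and pod labels are placeholders for my real values.)

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-allowlist
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: istio-ingressgateway
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 203.0.113.0/24   # placeholder for the IP block I want to allow
```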
After applying the policy, I would expect the ingress gateway to only accept connections from that IP block, but the end result is that I can still connect to the ingress gateway from anywhere (any IP address).
Thanks for your response, @Steven_O_brien. That doesn’t seem to be the problem, though. From what I can see in the documentation, the `from` definition above should work as written. I suspect the real problem is that GKE is not preserving the source IPs.
One thing to check: make sure you have enabled network policy enforcement (Calico) in your GKE cluster, otherwise NetworkPolicy resources won’t be enforced at all.
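If it isn’t enabled yet, something along these lines should turn it on for an existing cluster (cluster name and zone are placeholders, and enabling enforcement may recreate the nodes):

```sh
# Enable the NetworkPolicy add-on, then enable enforcement on the node pools
gcloud container clusters update CLUSTER_NAME --zone ZONE --update-addons=NetworkPolicy=ENABLED
gcloud container clusters update CLUSTER_NAME --zone ZONE --enable-network-policy
```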
Source IPs won’t always be preserved, depending on the way you access the cluster. So, how are you reaching the ingress gateway? (e.g. via a nodePort, cloud load balancer, etc.)
@spikecurtis Good point. I’m using the NetworkPolicy resource with Calico set as the provider.
When I run `kubectl get pods --namespace=kube-system` I can see the `calico-*` pods and they all look healthy.
As for how requests reach the gateway, I’m using the Cloud Load Balancer that GKE provides out of the box for Istio.
I’m not 100% sure, but I don’t think the Load Balancer preserves source IPs by default; GKE LoadBalancer Services default to `externalTrafficPolicy: Cluster`, which SNATs incoming traffic, so your `ipBlock` rule would never see the real client address.
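If that turns out to be the issue, one thing worth trying (assuming your gateway is exposed by the standard istio-ingressgateway Service of type LoadBalancer) is switching the Service to `externalTrafficPolicy: Local`, which makes GKE preserve the client source IP:

```sh
# Preserve the client source IP on the ingress gateway Service
kubectl patch svc istio-ingressgateway -n istio-system \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```

Keep in mind that with `Local`, the load balancer only sends traffic to nodes that actually run a gateway pod.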
However, I also noticed another problem: your NetworkPolicy is in the default namespace, but the istio-ingressgateway normally runs in the istio-system namespace. Make sure the NetworkPolicy is in the same namespace as the istio-ingressgateway pods.
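Something like this should be closer to what you want (I’m assuming the default `istio: ingressgateway` label on the gateway pods; swap in your real CIDR):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ingress-allowlist
  namespace: istio-system        # same namespace as the istio-ingressgateway pods
spec:
  podSelector:
    matchLabels:
      istio: ingressgateway      # default label on the ingress gateway pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 203.0.113.0/24     # replace with the IP block you want to allow
```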