Hey all, any help with what I’m seeing would be greatly appreciated. It looks like the from sources of an authorization policy aren’t matched when traffic goes directly to a pod’s IP?
When I apply an authorization policy that allows all traffic from within the namespace (below) and blocks traffic from outside it, Prometheus (which runs in the same namespace and discovers its scrape targets via a ServiceMonitor) starts getting 403 Forbidden on its scrapes. When the authorization policy is removed, the scrapes start working again.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: namespace-allow
  namespace: test-ns
spec:
  action: ALLOW
  rules:
  - from:
    - source:
        namespaces: ["test-ns"]
With the namespace-allow policy applied, I can add this second policy to make the scrapes work again for the rpc-app:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: metrics-allow
spec:
  selector:
    matchLabels:
      app: rpc-app
  action: ALLOW
  rules:
  - to:
    - operation:
        ports: ["8081"]
I’ve tried a few variations with the source expressed as a namespace or a principal, and inverting it to a DENY (using notNamespaces: ["test-ns"]), but I can never get it to match on a source. If I curl the metrics endpoint through the service it works (as expected), but if I hit the pod IP (as Prometheus does) I get RBAC: access denied until I add the allow for the port, which then opens the endpoint up to every namespace.
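To be concrete, these are roughly the variants I tried; the Prometheus service account in the principal is illustrative of the kind of value, not the literal one from my cluster:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: metrics-allow
  namespace: test-ns
spec:
  selector:
    matchLabels:
      app: rpc-app
  action: ALLOW
  rules:
  - from:
    - source:
        # also tried namespaces: ["test-ns"] here instead of a principal
        principals: ["cluster.local/ns/test-ns/sa/prometheus"]
    to:
    - operation:
        ports: ["8081"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: namespace-deny
  namespace: test-ns
spec:
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["test-ns"]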
Exposing the data to Prometheus isn’t the concern, and this could also be addressed with network policies; I’m mostly trying to understand the behavior in case we encounter it in another scenario in the future.
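For comparison, this is the kind of NetworkPolicy I mean, which does match same-namespace traffic at the pod-IP level (a sketch, assuming ingress is otherwise default-denied):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: test-ns
spec:
  podSelector: {}        # applies to all pods in test-ns
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # any pod in the same namespace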
Thank you,
Mike