I’m using Istio v1.9.3 to enforce a
STRICT mTLS policy across my entire mesh. For one particular clustered workload, I’d like to disable mTLS communication entirely. According to the docs, I should be able to target the cluster pods and override the default mode with something like this:
```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: my-cluster
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: cluster-tech
      app.kubernetes.io/instance: my-cluster
  mtls:
    mode: DISABLE
```
Unfortunately, this breaks the internal communication between the cluster pods, and I end up seeing a long stream of istio-proxy log entries like the following:
```
example-cluster-worker-0 istio-proxy [2021-04-27T17:33:44.226Z] "- - -" 0 UF,URX - - "-" 0 0 0 - "-" "-" "-" "-" "127.0.0.1:2385" inbound|2385|| - 10.244.223.60:2385 192.168.64.30:51622 - -
```
Here's the weird part: if I create the policy at the namespace level instead, communication works properly.
```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: ns-policy
  namespace: default
spec:
  mtls:
    mode: DISABLE
```
I guess I have a couple of questions:
- Is applying a `DISABLE` policy at the workload level supported, or only at the namespace and mesh levels?
- Am I missing some other required configuration that has to accompany this resource?
- If this is unexpected behavior, should I submit a bug on GitHub?
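For context on the second question: one thing I considered but haven't confirmed is that, with a mesh-wide STRICT policy, client sidecars may still *originate* mTLS toward this workload unless a DestinationRule tells them not to. This is a sketch of what I had in mind, not something I've verified; the `host` value assumes the workload's Service is named `my-cluster` in the `default` namespace:

```yaml
# Hypothetical companion DestinationRule: instructs client sidecars
# not to originate mTLS when calling this workload's Service.
# The host below is an assumption based on my Service name.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-cluster-plaintext
  namespace: default
spec:
  host: my-cluster.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
```

Is something like this required alongside the PeerAuthentication, or should the workload-level policy be sufficient on its own?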