I installed Istio 1.1.2 on a GKE 1.12 cluster. It's a fresh install.
I configured two clusters in a multicluster setup: one cluster runs the main control plane and the second has a minimal Istio configuration.
Service discovery works fine between clusters (I can curl from pods across clusters).
Then I wanted to test authorization, and it's not working even within a single cluster.
2/ I created two nginx deployments, a and b, and services a-svc and b-svc in cluster1.
3/ I can curl fine from pod a to b-svc and from pod b to a-svc. I would have expected access to be denied in both cases.
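For context, authorization was enabled with a ClusterRbacConfig along these lines (the ON_WITH_INCLUSION mode and the namespace scoping shown below are illustrative, not necessarily the exact config):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default            # Istio 1.1 requires the name "default"
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["default"]   # illustrative namespace scope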
The Envoy config shows that a network (i.e., TCP-level) RBAC filter is generated, which means your services are defined as TCP services.
It would be helpful to attach the full Envoy config dump for debugging. Could you also attach the Service definitions of your a-svc and b-svc in cluster1?
Lastly, it seems you're using curl to access the services, which means the requests don't go through the network (i.e., TCP-level) filter at all.
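For Istio to treat the service as HTTP (and apply HTTP-level RBAC), the Service port needs a name with an http prefix. A minimal sketch of what a-svc could look like; the selector label and port numbers are assumptions for a plain nginx deployment:

apiVersion: v1
kind: Service
metadata:
  name: a-svc
spec:
  selector:
    app: a              # assumed label on the nginx pods
  ports:
  - name: http          # the name prefix tells Istio this port speaks HTTP
    port: 80
    targetPort: 80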
So I’m glad you told me, thank you…
I tried adding the port name, but as soon as I enable authorization, my deployment crashes. I tried another deployment YAML, and it doesn't crash. So it seems my YAML is wrong for Istio?
apiVersion: apps/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: catalogue-deployment
spec:
  selector:
    matchLabels:
      app: catalogue
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: catalogue
    spec:
      containers:
      - name: fnature-catalogue
        image: fnature/fnature-catalogue:beta0
        ports:
        - containerPort: 3002
          name: http
You're specifying the livenessProbe and readinessProbe on port 80, which I assume is rejected by RBAC since no policy is applied and the default is to deny.
Could you add an RBAC policy to allow access to port 80 for the probes?
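A rough sketch of such a policy with the 1.1 RBAC API; the service FQDN, namespace, and resource names below are assumptions you would need to adjust:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: allow-probes          # hypothetical name
  namespace: default
spec:
  rules:
  - services: ["catalogue-svc.default.svc.cluster.local"]   # assumed service FQDN
    constraints:
    - key: "destination.port"
      values: ["80"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: bind-allow-probes     # hypothetical name
  namespace: default
spec:
  subjects:
  - user: "*"                 # allow any caller, since kubelet probes are unauthenticated
  roleRef:
    kind: ServiceRole
    name: allow-probes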
Thank you YangminZhu, much appreciated!
I was actually just about to test without the liveness and readiness probes.
I have removed them, it works, and I get the access denied.
It seems weird that RBAC also denies these probes…
That's the intended behavior: RBAC denies all requests by default. RBAC is enabled on the ports you specified in the Service, and you can create a rule to whitelist the probe requests.
May I ask why the Service and Deployment ports must be named?
It seems like a non-intuitive and cumbersome task.
Also, I tried to apply a TCP service role, and it doesn't work with services that are named http. It works only after the services are named tcp.
Couldn't this easily lead to mistakes when administering security?
I think the reason is that Istio doesn't detect the protocol from the traffic automatically, which is a reasonable choice in my opinion.
> Also, I tried to apply a TCP service role, and it doesn't work with services that are named http. It works only after the services are named tcp.
Could you clarify what you mean by a TCP service role? I don't quite understand this.
> Couldn't this easily lead to mistakes when administering security?
I think the common mistake is to apply a ServiceRole with HTTP-only fields to a TCP service; for now the problematic policy is simply ignored. One alternative would be to generate a deny-all rule in this case so that the operator realizes something is wrong and can fix it quickly.
Or maybe a better way is to provide an istioctl command that tests the effect of an authorization policy and detects potential problems before it is applied.
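For example, a policy like the one below uses the HTTP-only methods and paths fields; applied to a TCP service it is currently just ignored (the names here are hypothetical):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: http-only-role        # hypothetical name
  namespace: default
spec:
  rules:
  - services: ["some-tcp-svc.default.svc.cluster.local"]   # a service whose port is not named http
    methods: ["GET"]          # HTTP-only field
    paths: ["/status"]        # HTTP-only field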
> Could you clarify what you mean by a TCP service role? I don't quite understand this.
A ServiceRole with TCP-only fields. I still wonder why it is not possible to apply a layer 4 policy to traffic going to HTTP services. Is there no use case for that? Or do you leave that requirement to the underlying network CNI?
A ServiceRole with TCP-only fields should work for both HTTP and TCP services. Do you have example YAML for the deployment and RBAC policy in this case? I can try to reproduce it to see what might be wrong. Thanks!
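For reference, this is the shape of TCP-only ServiceRole I would expect to work on either kind of service, using only services plus a destination.port constraint (the name and port below are placeholders):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: tcp-only-role         # placeholder name
  namespace: default
spec:
  rules:
  - services: ["b-svc.default.svc.cluster.local"]
    constraints:
    - key: "destination.port"
      values: ["80"]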
@YangminZhu can you please point me in the right direction: where can I create a rule to whitelist the probe request?
Currently, when I enable the ClusterRbacConfig against a service, the pod for that specific service fails with the following error: