Authorization not working

Hi,

I installed Istio 1.1.2 on a GKE 1.12 cluster. It's a new install.
I configured 2 clusters in a multicluster configuration: one cluster has the master control plane and the second has a minimal Istio configuration.
Service discovery works fine between the clusters (I can curl from pods across clusters).

Then I wanted to test authorization, and it's not working even within a single cluster.

1/ I enabled authorization with:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["default"]

2/ I created 2 nginx deployments, a and b, and services a-svc and b-svc in cluster1.
3/ I can curl successfully from pod a to b-svc and from pod b to a-svc (roughly as shown below). I would have expected access to be denied for both.
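
For reference, the test looks roughly like this (a sketch; it assumes the pods of deployments a and b carry the labels name=a and name=b, matching the YAML further down):

kubectl exec $(kubectl get pod -l name=a -o jsonpath='{.items[0].metadata.name}') -c a -- curl -s b-svc
kubectl exec $(kubectl get pod -l name=b -o jsonpath='{.items[0].metadata.name}') -c b -- curl -s a-svc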

I went through the troubleshooting steps in https://istio.io/help/ops/security/debugging-authorization/

4/ When authorization is enabled, I can see the following in the proxies of pods a and b:

grep -i rbac b-proxy-dump3 -A 10
      "name": "envoy.filters.network.rbac",
      "config": {
       "stat_prefix": "tcp.",
       "rules": {
        "policies": {}
       }
      }
     },
     {
      "name": "mixer",
      "config": {

5/ I turned on the debug logs in the proxy (roughly as sketched below). After testing the curl from a to b or from b to a,
I fetched the proxy logs with:

kubectl logs $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy

The output doesn't show any debug lines, nor any line with "enforced allowed" or "enforced denied".
I do see lots of warnings like:

[2019-04-16 19:31:32.085][15][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.route.Route.per_filter_config'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/f235f560b8b0d4d1ce8c3c4a17134aafb171e0a8/DEPRECATED.md for details.
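
For reference, this is roughly how I turned on the rbac debug logging and searched for the enforcement lines (a sketch; <pod-b> is a placeholder, and I'm assuming the default Envoy admin port 15000 and that curl is available in the sidecar):

# bump the Envoy rbac logger to debug via the admin endpoint
kubectl exec <pod-b> -c istio-proxy -- curl -s -X POST "http://localhost:15000/logging?rbac=debug"
# then look for the enforcement result lines in the proxy log
kubectl logs <pod-b> -c istio-proxy | grep -E "enforced (allowed|denied)"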

Is my Istio RBAC not working, or am I missing something? :frowning:

thanks

The Envoy config shows that a network (i.e. TCP level) RBAC filter is generated, which means your services are defined as TCP services.

It would be helpful to attach the full Envoy config dump for debugging. Could you also attach the service definitions of your a-svc and b-svc in cluster1?

Last, it seems you're using curl to access the services, which means the request doesn't go through the network (i.e. TCP level) filter at all.

To fix this, you probably want to change the service port name to start with "http", as indicated in https://istio.io/docs/setup/kubernetes/prepare/requirements/
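
For example, a minimal sketch of what that could look like for b-svc (the selector and port values are assumptions based on your description, so adjust them to your actual setup):

apiVersion: v1
kind: Service
metadata:
  name: b-svc
spec:
  selector:
    name: b          # assumes the pods of deployment b carry this label
  ports:
  - name: http       # must be "http" or "http-<suffix>" so Istio treats the port as HTTP
    port: 80
    targetPort: 80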

Hi,

So I'm glad you told me, thank you…
I tried to add the port name, but as soon as I enable authorization, my deployment crashes. I tried another deployment YAML, and that one doesn't crash. So it seems my YAML is wrong for Istio?

My original YAML, whose pods don't crash:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: a
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: a
    spec:
      serviceAccountName: a-sa
      containers:
      - name: a
        image: fnature/ngnix-curl:1
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 1
        readinessProbe:
          httpGet:
            path: /
            port: 80
      volumes:
        - name: html
          configMap:
            name: res-clust1
            items:
              - key: message
                path: index.html

My new YAML with the named container port; the pods crash when Istio is enabled.
The following is added:
ports:
- containerPort: 80
  name: http

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: a
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: a
    spec:
      serviceAccountName: a-sa
      containers:
      - name: a
        image: fnature/ngnix-curl:1
        ports:
          - containerPort: 80
            name: http
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 1
        readinessProbe:
          httpGet:
            path: /
            port: 80
      volumes:
        - name: html
          configMap:
            name: res-clust1
            items:
              - key: message
                path: index.html

The other deployment, which doesn't crash:

apiVersion: apps/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: catalogue-deployment
spec:
  selector:
    matchLabels:
      app: catalogue
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: catalogue
    spec:
      containers:
      - name: fnature-catalogue
        image: fnature/fnature-catalogue:beta0
        ports:
         - containerPort: 3002
           name: http

You're specifying the livenessProbe and readinessProbe on port 80, which I assume is rejected by RBAC since there is no policy applied and it denies by default.

Could you add an RBAC policy to allow access to port 80 for the probes?
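
A minimal sketch, assuming your service is a-svc in the default namespace and the probes hit / on port 80 (names here are illustrative):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: allow-probe
  namespace: default
spec:
  rules:
  - services: ["a-svc.default.svc.cluster.local"]
    methods: ["GET"]
    paths: ["/"]
    constraints:
    - key: destination.port
      values: ["80"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: allow-probe
  namespace: default
spec:
  subjects:
  - user: "*"
  roleRef:
    kind: ServiceRole
    name: "allow-probe"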

Thank you YangminZhu, much appreciated!
I was actually just about to test without the liveness and readiness probes.
I have removed them, it works and I get the access denied.
It seems weird that RBAC also denies these probes…

That's the intended behavior: RBAC denies all requests by default. RBAC is enabled on the ports you specified in the Service, and you can create a rule to whitelist the probe requests.

Thank you. I understand better how k8s and Istio work now.

May I ask why the service and deployment ports must be named?
It seems like a non-intuitive and cumbersome task.
Also, I tried to apply a TCP ServiceRole, and it doesn't work with services whose ports are named http; it works only after the ports are named tcp.
Can't that lead to easy mistakes when administering security?

I think the reason is that Istio doesn't detect the protocol from the traffic automatically, which is a reasonable choice in my opinion.

Also, I tried to apply a TCP ServiceRole, and it doesn't work with services whose ports are named http; it works only after the ports are named tcp.

Could you clarify what you mean by TCP service role? I don't quite understand this.

Can't that lead to easy mistakes when administering security?

I think the common mistake is to apply a ServiceRole with HTTP-only fields to a TCP service. For now the problematic policy is simply ignored; one alternative would be to generate a deny-all rule in this case so that the operator realizes something is wrong and can fix it quickly.

Or maybe a better way is to provide a command in istioctl that tests the effect and detects potential problems of the authorization policy before applying it.

Could you clarify what you mean by TCP service role? I don't quite understand this.

A ServiceRole with TCP-only fields (see the sketch below). I still wonder why it is not possible to apply a layer 4 policy to traffic going to HTTP services. Is there no use case for that? Or do you leave that requirement to the underlying network CNI?
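
For clarity, this is the kind of thing I mean (a sketch only; the service name and port are placeholders, not my actual config):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: tcp-viewer
  namespace: default
spec:
  rules:
  - services: ["b-svc.default.svc.cluster.local"]
    constraints:
    - key: destination.port
      values: ["80"]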

A ServiceRole with TCP-only fields should work for both HTTP and TCP services. Do you have example YAML for the deployment and RBAC policy in this case? I can try to reproduce it to see what might be wrong. Thanks!

@YangminZhu can you please point me in the right direction: where can I create a rule to whitelist the probe request?
Currently, when I enable ClusterRbacConfig against a service, the pod of that specific service fails with the following error:

Readiness probe failed: Get http://10.162.2.91:9093/-/ready: EOF

Please guide me on how to fix it.

@waqar

Assuming you have the following service and deployment:

apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: foo
  labels:
    app: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /headers
            port: 80
          periodSeconds: 1
          initialDelaySeconds: 3

You can use the following rule to whitelist the liveness probe:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: probe
  namespace: foo
spec:
  rules:
  - services: ["httpbin.foo.svc.cluster.local"]
    methods: ["GET"]
    paths: ["/headers"]
    constraints:
    - key: destination.port
      values: ["80"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: probe
  namespace: foo
spec:
  subjects:
  - user: "*"
  roleRef:
    kind: ServiceRole
    name: "probe"