Authorization not working

#1

Hi,

I installed Istio 1.1.2 on a GKE 1.12 cluster. It’s a new install.
I configured 2 clusters in a multicluster configuration: one cluster runs the master control plane and the second has a minimal Istio configuration.
Service discovery works fine between the clusters (I can curl from pods across clusters).

Then I wanted to test authorization, and it’s not working even within a single cluster.

1/ I enable authorization with:

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["default"]

2/ I create 2 nginx deployments, a and b, and services a-svc and b-svc in cluster1.
3/ I can curl fine from pod a to b-svc, and from pod b to a-svc. I would have expected an access denied for both.

I went through the troubleshooting steps in https://istio.io/help/ops/security/debugging-authorization/

4/ When authorization is enabled, I can see in my proxies in pods a and b:

    grep -i rbac b-proxy-dump3 -A 10
      {
       "name": "envoy.filters.network.rbac",
       "config": {
        "stat_prefix": "tcp.",
        "rules": {
         "policies": {}
        }
       }
      },
      {
       "name": "mixer",
       "config": {

5/ I turn on debug logs in the proxy. After testing the curl from a to b or b to a,
I fetch the proxy logs with

kubectl logs $(kubectl get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}') -c istio-proxy

The output doesn’t show any debug lines, nor any line with "enforced allowed" or "enforced denied".
I see lots of warnings like:

[2019-04-16 19:31:32.085][15][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.route.Route.per_filter_config'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/f235f560b8b0d4d1ce8c3c4a17134aafb171e0a8/DEPRECATED.md for details.

Is my Istio RBAC not working, or am I missing something? :frowning:

thanks


#2

The envoy config shows that a network (i.e. TCP-level) RBAC filter is generated, which means your services are defined as TCP services.

It would be helpful to attach the full envoy config dump for debugging. Could you also attach the service definition of your a-svc and b-svc in cluster1?

Last, it seems you’re using curl to access the services, which means the traffic doesn’t go through the network (i.e. TCP-level) filter at all.

To fix this, you probably want to change the service port name to start with "http", as indicated in https://istio.io/docs/setup/kubernetes/prepare/requirements/
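For example, a sketch of what a-svc might look like with a named port (the selector label and port numbers are assumed from the thread, so adjust to your actual manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: a-svc
spec:
  selector:
    name: a          # assumed: matches the pod label in the deployment
  ports:
  - name: http       # Istio infers the protocol from this name prefix
    port: 80
    targetPort: 80
```

With the port named http (or http-&lt;suffix&gt;), Istio generates an HTTP-level RBAC filter instead of the TCP one, so HTTP requests are actually evaluated.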


#3

Hi,

So I’m glad you told me, thank you…
I tried adding the port name, but as soon as I enable authorization, my deployment crashes. I tried another deployment YAML and it doesn’t crash, so it seems my YAML is wrong for Istio?

My original YAML, whose pods don’t crash:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: a
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: a
    spec:
      serviceAccountName: a-sa
      containers:
      - name: a
        image: fnature/ngnix-curl:1
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 1
        readinessProbe:
          httpGet:
            path: /
            port: 80
      volumes:
        - name: html
          configMap:
            name: res-clust1
            items:
              - key: message
                path: index.html

My new YAML with a named container port; the pods crash when Istio is enabled.
The following is added:

        ports:
        - containerPort: 80
          name: http

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: a
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: a
    spec:
      serviceAccountName: a-sa
      containers:
      - name: a
        image: fnature/ngnix-curl:1
        ports:
           - containerPort: 80
             name: http
        volumeMounts:
          - name: html
            mountPath: /usr/share/nginx/html/
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 1
        readinessProbe:
          httpGet:
            path: /
            port: 80
      volumes:
        - name: html
          configMap:
            name: res-clust1
            items:
              - key: message
                path: index.html

The other deployment, which doesn’t crash:

apiVersion: apps/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: catalogue-deployment
spec:
  selector:
    matchLabels:
      app: catalogue
  replicas: 1 # tells deployment to run 1 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
      # generated from the deployment name
      labels:
        app: catalogue
    spec:
      containers:
      - name: fnature-catalogue
        image: fnature/fnature-catalogue:beta0
        ports:
         - containerPort: 3002
           name: http

#4

You’re specifying the livenessProbe and readinessProbe on port 80, and I assume the probe requests are rejected by RBAC: no policy applies to them, and RBAC denies by default.

Could you add an RBAC policy that allows access to port 80 for the probes?
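Something along these lines should work (a sketch only; the service name, namespace, and probe path are assumed from the thread, so adjust them to your setup):

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: allow-probes
  namespace: default
spec:
  rules:
  - services: ["a-svc.default.svc.cluster.local"]
    paths: ["/"]        # assumed probe path, matching the httpGet probes
    methods: ["GET"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: allow-probes-binding
  namespace: default
spec:
  subjects:
  - user: "*"           # kubelet probes carry no identity, so allow any source
  roleRef:
    kind: ServiceRole
    name: "allow-probes"
```

The wildcard subject is needed because the kubelet’s probe requests don’t go through mTLS and therefore have no authenticated identity to match on.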


#5

Thank you YangminZhu, much appreciated!
I was actually just about to test without the liveness and readiness probes.
I removed them, it works, and I get the access denied.
It seems weird that RBAC also denies these probes…


#6

That’s the intended behavior: RBAC denies all requests by default. RBAC is enabled on the ports you specified in the Service, and you can create a rule to whitelist the probe requests.


#7

Thank you. I understand better how k8s and Istio work now.


#8

May I ask why the service and deployment ports must be named?
It seems like a non-intuitive and cumbersome requirement.
Also, I tried to apply a TCP service role, and it doesn’t work when the service ports are named http; it only works once the ports are named tcp.
Can’t this easily lead to mistakes when administering security?
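For reference, this is the kind of TCP-level role I mean (a sketch; the service name and port are assumed). TCP rules can only constrain by connection-level attributes such as the destination port, not by paths or methods:

```yaml
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: tcp-access
  namespace: default
spec:
  rules:
  - services: ["b-svc.default.svc.cluster.local"]
    constraints:
    - key: "destination.port"   # TCP-level attribute; no paths/methods here
      values: ["80"]
```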
