[Probably solved in 1.1] Which steps are required to set up trusted service communication?

Hello all,

I'm trying to set up a simple case: a service in namespace ns1 tries to reach a database in namespace ns2. No other service should be able to access the database.

I have the following configuration:

$ kubectl describe meshpolicies.authentication.istio.io default
Spec:
  Peers:
    Mtls:

For the database:

$ kubectl describe -n ns2 policies.authentication.istio.io mariadb-tls-policy
Spec:
  Peers:
    Mtls:
      Mode:  STRICT
  Targets:
    Name:  mariadb

$ kubectl describe -n ns2 destinationrules.networking.istio.io mariadb-mtls
Spec:
  Host:  mariadb.ns2.svc.cluster.local
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL

$ kubectl describe -n ns2 serviceroles.rbac.istio.io mariadb-consumer-role
Spec:
  Rules:
    Methods:
      *
    Services:
      mariadb.ns2.svc.cluster.local

$ kubectl describe -n ns2 servicerolebindings.rbac.istio.io mariadb-consumer-role-binding
Spec:
  Role Ref:
    Kind:  ServiceRole
    Name:  mariadb-consumer-role
  Subjects:
    User:  xxx

User xxx isn’t an existing user, so I expect the service can’t access the database. As I understand it, it should be user: "cluster.local/ns/ns1/sa/myservice". But if I try it with telnet from the service instance, I get this response:

Escape character is '^]'.
Y
5.5.5-10.2.21-MariaDB�K'!m&q*z�����LJme5upU5GFgmysql_native_password
Connection closed by foreign host.

instead of an error. What am I missing? How can I debug it? How can I ensure that only the intended services can access the database? Can I control this exclusively from the database namespace ns2?

It might be that you don’t have things configured properly for istio_proxy to intercept the traffic to the database.

You’ll want to verify that the istio_proxy is successfully injected into the database pod, and that the database pod has the containerPort defined for the MariaDB port (Istio will only intercept on ports that are defined in the podspec).
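For reference, the relevant part of the pod spec would look roughly like this (an illustrative fragment only, using the image and port name from the Bitnami MariaDB chart):

  containers:
  - name: mariadb
    image: docker.io/bitnami/mariadb:10.2.21
    ports:
    # Istio only intercepts inbound traffic on ports declared in the pod spec,
    # so the MariaDB port must be listed explicitly.
    - containerPort: 3306
      name: mysql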

Thank you for your response!

The port is exposed:

$ docker history bitnami/mariadb:10.2.21
IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
78d69dae35ab        5 days ago          /bin/sh -c #(nop)  CMD ["/run.sh"]              0 B
<missing>           5 days ago          /bin/sh -c #(nop)  ENTRYPOINT ["/entrypoin...   0 B
<missing>           5 days ago          /bin/sh -c #(nop)  USER 1001                    0 B
<missing>           5 days ago          /bin/sh -c #(nop)  EXPOSE 3306                  0 B
[...]

StatefulSet container:

  containers:
  - env:
    - name: MARIADB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          key: mariadb-root-password
          name: mariadb
    - name: MARIADB_USER
      value: test
    - name: MARIADB_PASSWORD
      valueFrom:
        secretKeyRef:
          key: mariadb-password
          name: mariadb
    - name: MARIADB_DATABASE
      value: test
    image: docker.io/bitnami/mariadb:10.2.21
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - exec mysqladmin status -uroot -p$MARIADB_ROOT_PASSWORD
      failureThreshold: 3
      initialDelaySeconds: 120
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: mariadb
    ports:
    - containerPort: 3306
      name: mysql
[...]

Proxy is injected:

  istio-proxy:
    Container ID:  docker://00a89be493b58568b4c446dd2b1e787129c9e23e1834367c581d3945d7ff1376
    Image:         docker.io/istio/proxyv2:1.0.5
    Image ID:      docker-pullable://istio/proxyv2@sha256:8b7d549100638a3697886e549c149fb588800861de8c83605557a9b4b20343d4
    Port:          15090/TCP
    Host Port:     0/TCP
    Args:
      proxy
      sidecar
      --configPath
      /etc/istio/proxy
      --binaryPath
      /usr/local/bin/envoy
      --serviceCluster
      mariadb
      --drainDuration
      45s
      --parentShutdownDuration
      1m0s
      --discoveryAddress
      istio-pilot.istio-system:15007
      --discoveryRefreshDelay
      1s
      --zipkinAddress
      zipkin.istio-system:9411
      --connectTimeout
      10s
      --proxyAdminPort
      15000
      --controlPlaneAuthPolicy
      NONE
    State:          Running
      Started:      Fri, 01 Feb 2019 23:04:42 +0000
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:  10m
    Environment:
      POD_NAME:                      mariadb-mariadb-0 (v1:metadata.name)
      POD_NAMESPACE:                 ns2 (v1:metadata.namespace)
      INSTANCE_IP:                    (v1:status.podIP)
      ISTIO_META_POD_NAME:           mariadb-mariadb-0 (v1:metadata.name)
      ISTIO_META_INTERCEPTION_MODE:  REDIRECT
      ISTIO_METAJSON_LABELS:         {"app":"mariadb","chart":"mariadb-5.5.0","component":"master","release":"mariadb"}
                                     
    Mounts:
      /etc/certs/ from istio-certs (ro)
      /etc/istio/proxy from istio-envoy (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-9qq9x (ro)

Maybe an important note: Bitnami containers run with a non-root user. For this reason I added

  runAsUser: 0
  runAsNonRoot: false

to the proxy init container:

image: docker.io/istio/proxy_init:1.0.5
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
  runAsUser: 0
  runAsNonRoot: false
  capabilities:
    add:
    - NET_ADMIN
  privileged: true

You might want to check the logs of the proxy_init container to see if it is successfully setting up the iptables rules.

If that looks correct, check the istio_proxy logs after you make your test connection to see if the istio_proxy is logging that it is processing the connection.

If all that looks correct, then we know that the proxy is intercepting the connection, and the question will be why mTLS isn’t being used.

There are no errors in the init container and I can see iptables rules for port 3306.

If I connect with telnet from the service container, I get:

[2019-02-04T21:00:17.592Z] - 4 130 517 "127.0.0.1:3306" inbound|3306||mariadb.ns2.svc.cluster.local 127.0.0.1:40772 10.44.0.79:3306 10.36.0.219:47318

Further:

$ ./../istio/istio-1.0.5/bin/istioctl authn tls-check mariadb.ns2.svc.cluster.local
Stderr when execute [/usr/local/bin/pilot-discovery request GET /debug/authenticationz ]: gc 1 @0.028s 7%: 0.12+1.0+2.0 ms clock, 0.49+0.080/0.47/0.49+8.1 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 2 @0.043s 8%: 0.062+1.4+1.1 ms clock, 0.24+0.19/1.1/1.1+4.4 ms cpu, 4->4->2 MB, 5 MB goal, 4 P

HOST:PORT                                                     STATUS     SERVER     CLIENT     AUTHN POLICY                                      DESTINATION RULE
mariadb.ns2.svc.cluster.local:3306     OK         mTLS       mTLS       mariadb-tls-policy/ns2     mariadb-mtls/ns2

For the service:

$ ./../istio/istio-1.0.5/bin/istioctl authn tls-check myservice.ns1.svc.cluster.local
Stderr when execute [/usr/local/bin/pilot-discovery request GET /debug/authenticationz ]: gc 1 @0.016s 11%: 0.008+0.96+1.9 ms clock, 0.034+0.14/0.61/0.61+7.7 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 2 @0.031s 12%: 0.023+1.4+1.9 ms clock, 0.093+0.10/1.1/1.2+7.8 ms cpu, 4->4->2 MB, 5 MB goal, 4 P

HOST:PORT                                                          STATUS     SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
myservice.ns1.svc.cluster.local:7200     OK         mTLS       mTLS       default/         myservice-destination/ns1

Just to be sure: I log into the service container to run telnet.

@liminwang do you have any idea what might be going on?

If you can point me to a small, dedicated example that just “creates a service which isn’t accessible from other services” and then “adds access for a particular service from another namespace”, maybe I can gather some more info. I’m also not sure whether I’ve mixed up some configuration with the bookinfo example.

Or could definitions in other namespaces be interfering? For example, I have an allow-all rule for rook:

$ kubectl describe -n rook-ceph-system serviceroles.rbac.istio.io,servicerolebindings.rbac.istio.io 
Name:         rook-ceph-system-consumer-role
Namespace:    rook-ceph-system
Labels:       app=rook-ceph-system
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.istio.io/v1alpha1","kind":"ServiceRole","metadata":{"annotations":{},"labels":{"app":"rook-ceph-system"},"name":"rook-...
API Version:  rbac.istio.io/v1alpha1
Kind:         ServiceRole
Metadata:
  Creation Timestamp:  2019-02-01T22:56:02Z
  Generation:          1
  Resource Version:    816642
  Self Link:           /apis/rbac.istio.io/v1alpha1/namespaces/rook-ceph-system/serviceroles/rook-ceph-system-consumer-role
  UID:                 8ca7187d-2674-11e9-b4f3-020d67c15ea8
Spec:
  Rules:
    Methods:
      *
    Services:
      *.rook-ceph-system.svc.cluster.local
      *
Events:  <none>


Name:         rook-ceph-system-consumer-role-binding
Namespace:    rook-ceph-system
Labels:       app=rook-ceph-system
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.istio.io/v1alpha1","kind":"ServiceRoleBinding","metadata":{"annotations":{},"labels":{"app":"rook-ceph-system"},"name"...
API Version:  rbac.istio.io/v1alpha1
Kind:         ServiceRoleBinding
Metadata:
  Creation Timestamp:  2019-02-01T22:56:02Z
  Generation:          1
  Resource Version:    816655
  Self Link:           /apis/rbac.istio.io/v1alpha1/namespaces/rook-ceph-system/servicerolebindings/rook-ceph-system-consumer-role-binding
  UID:                 8cad5419-2674-11e9-b4f3-020d67c15ea8
Spec:
  Role Ref:
    Kind:  ServiceRole
    Name:  rook-ceph-system-consumer-role
  Subjects:
    User:  *
Events:    <none>

Did you enable authorization using ClusterRbacConfig? See https://preliminary.istio.io/docs/concepts/security/#enabling-authorization

Istio authorization is deny by default. If you enable authorization for your service, no other service can access your service unless you define specific policies (ServiceRole/ServiceRoleBinding) to allow it.
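For example, a minimal sketch of a ClusterRbacConfig that enables authorization only for your database namespace could look like this (Istio 1.1 syntax; the namespace name ns2 is taken from your description):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  # ClusterRbacConfig is a cluster-scoped singleton; the name must be "default".
  name: default
spec:
  mode: ON_WITH_INCLUSION
  inclusion:
    namespaces: ["ns2"]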

Thank you very much for your response! I use Istio 1.0.5 and therefore only have RbacConfig:

$ kubectl get rbacs.config.istio.io,rbacconfigs.rbac.istio.io --all-namespaces 
NAMESPACE      NAME                               AGE
istio-system   rbacconfig.rbac.istio.io/default   12d

$ kubectl describe -n istio-system rbacconfigs.rbac.istio.io 
Name:         default
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"rbac.istio.io/v1alpha1","kind":"RbacConfig","metadata":{"annotations":{},"name":"default","namespace":"istio-system"},"spec...
API Version:  rbac.istio.io/v1alpha1
Kind:         RbacConfig
Metadata:
  Creation Timestamp:  2019-01-23T13:34:29Z
  Generation:          1
  Resource Version:    95981
  Self Link:           /apis/rbac.istio.io/v1alpha1/namespaces/istio-system/rbacconfigs/default
  UID:                 9c271e5a-1f13-11e9-b86d-020d67c15ea8
Spec:
  Exclusion:
    Namespaces:
      default
  Mode:  ON_WITH_EXCLUSION
Events:  <none>

$ kubectl apply -f ../istio-clusterrbac.yaml 
error: unable to recognize "../istio-clusterrbac.yaml": no matches for kind "ClusterRbacConfig" in version "rbac.istio.io/v1alpha1"

Anton, I think your database service is using the TCP protocol, correct? We supported the HTTP/gRPC protocols for Istio authorization in the Istio 1.0 release; TCP support was added later. From the Istio 1.0.5 docs, it looks like we hadn’t added TCP support for Istio authorization at that point. If you are running a TCP service, you need to upgrade to Istio 1.1 for Istio authorization TCP support (see https://preliminary.istio.io/docs/tasks/security/authz-tcp/).
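Once you are on 1.1, a rough sketch of the TCP authorization policy for your case might look like the following (the names ns1, ns2, mariadb and the myservice service account are taken from your earlier posts; note that for a TCP service only TCP-compatible fields such as destination.port can be used in the rules, not methods or paths):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: mariadb-consumer-role
  namespace: ns2
spec:
  rules:
  # No HTTP-only fields (methods/paths) for a TCP service; restrict by port instead.
  - services: ["mariadb.ns2.svc.cluster.local"]
    constraints:
    - key: "destination.port"
      values: ["3306"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: mariadb-consumer-role-binding
  namespace: ns2
spec:
  subjects:
  # The identity (service account) of the client workload that may connect.
  - user: "cluster.local/ns/ns1/sa/myservice"
  roleRef:
    kind: ServiceRole
    name: "mariadb-consumer-role"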

Yes, it does. I will test it for HTTP communication between other services soon. Is there any ETA for a more or less stable Istio 1.1 release/build? How painful will the upgrade from 1.0.5 to 1.1 be later?

Is there any documentation about the fact that the user in the binding is tied to the serviceAccountName? I’ve spent a lot of time debugging why * and cluster.local/ns/ns2/sa/default work, but not cluster.local/ns/ns2/sa/myservice.

Anton, you need to associate a service account name with your service when you deploy it. We have some examples in the user guides (e.g., https://preliminary.istio.io/docs/tasks/security/authz-http/#before-you-begin). And here is an example configuration, which creates the service account “bookinfo-productpage” and deploys it with the productpage service.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1:1.10.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080

The Istio 1.1 release should come out very soon, although I cannot tell the exact date. In the meantime, you can try out the release snapshots (https://github.com/istio/istio/releases). You can follow the upgrade steps at https://preliminary.istio.io/docs/setup/kubernetes/upgrading-istio#upgrade-steps.

Hey @liminwang, thank you for your input!

Yes, my problem was that it was not very clear in the documentation that I need a ServiceAccount for authorization and that its name is used as the authorization subject. That’s why I asked about it. I think the docs can be improved.

All services use the “default” service account if one is not specified. But if you want to assign a specific service account to a service, you have to create the service account and then deploy the service with it. @Anton I agree with you that the doc can be improved to clarify this point.
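As a rough sketch for Anton’s case (the namespace ns1, port 7200 and the name myservice are taken from earlier posts; the image is a placeholder, so adjust as needed), the pieces fit together like this:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: myservice
  namespace: ns1
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myservice
  namespace: ns1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myservice
    spec:
      # The workload then runs as "cluster.local/ns/ns1/sa/myservice",
      # which is the user you reference in the ServiceRoleBinding subject.
      serviceAccountName: myservice
      containers:
      - name: myservice
        image: myservice:latest  # placeholder image
        ports:
        - containerPort: 7200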