mTLS origination for egress traffic with custom mTLS between client istio-proxy and egress gateway

Hello Istio Drivers,

I originally posted this problem on Stack Overflow, but I think this could be a better place for the topic.

Our Security Dept requirement on egress traffic is very strict: each app inside a pod must go through some proxy with mTLS authentication (app-proxy), using a dedicated cert per app. They suggest using squid with tunneling to cope with the double mTLS (one handshake for the proxy, another for the actual app-to-server traffic), but that forces the app itself to be TLS-aware. Istio could come in and do the job, but the out-of-the-box ISTIO_MUTUAL mode (between istio-proxy and the egress gateway) does not meet the requirement.
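
To illustrate what the squid approach would force on every app, here is a minimal sketch of the double mTLS done client-side (the proxy endpoint, target host and cert file names are assumptions; curl's --proxy-cert/--proxy-key/--proxy-cacert flags are standard):

# Hypothetical: one client cert for the HTTPS proxy, a second one for the target server
curl --proxy https://squid.internal.example:3128 \
  --proxy-cert client.proxy.crt --proxy-key client.proxy.key --proxy-cacert proxy-ca.crt \
  --cert client.app.crt --key client.app.key --cacert some-root.crt \
  https://app-server.example.com/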

So I tried the Configure mutual TLS origination for egress traffic example, modifying it a bit as follows (changes marked with #- and #+):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - my-nginx.mesh-external.svc.cluster.local
    tls:
      #mode: ISTIO_MUTUAL #-
      mode: MUTUAL #+
      credentialName: egress-gateway-credential #+
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-for-nginx
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: nginx
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          #mode: ISTIO_MUTUAL #-
          mode: MUTUAL #+
          credentialName: egress-app-credential #+
          sni: my-nginx.mesh-external.svc.cluster.local
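
The ServiceEntry and VirtualService from the example are left unchanged; for completeness, the routing part looks roughly like this (re-typed from the linked example, so treat it as a sketch):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: direct-nginx-through-egress-gateway
spec:
  hosts:
  - my-nginx.mesh-external.svc.cluster.local
  gateways:
  - istio-egressgateway
  - mesh
  http:
  - match:
    - gateways:
      - mesh
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        subset: nginx
        port:
          number: 443
      weight: 100
  - match:
    - gateways:
      - istio-egressgateway
      port: 443
    route:
    - destination:
        host: my-nginx.mesh-external.svc.cluster.local
        port:
          number: 443
      weight: 100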

The secrets referenced by the credentialName fields above were created with:

kubectl create -n istio-system secret generic egress-app-credential \
--from-file=tls.key=client.app.key \
--from-file=tls.crt=client.app.crt \
--from-file=ca.crt=some-root.crt


kubectl create -n istio-system secret generic egress-gateway-credential \
--from-file=tls.key=egress.key \
--from-file=tls.crt=egress.crt \
--from-file=ca.crt=some-root.crt
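
For reference, the cert files can be produced along these lines (a sketch only: subjects, key sizes and validity days are assumptions; the egress.key/egress.crt pair is generated the same way from the same CA):

# Hypothetical CA plus a client cert for the app, signed by that CA
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
  -subj '/O=example Inc./CN=example.com' -keyout some-root.key -out some-root.crt
openssl req -newkey rsa:2048 -nodes -subj '/CN=sleep-app/O=example Inc.' \
  -keyout client.app.key -out client.app.csr
openssl x509 -req -sha256 -days 365 -CA some-root.crt -CAkey some-root.key \
  -set_serial 1 -in client.app.csr -out client.app.crt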

I thought this was logically correct, but apparently it isn't, because I'm getting the following error:

kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -vsS http://my-nginx.mesh-external.svc.cluster.local

*   Trying 10.98.10.231:80...
* Connected to my-nginx.mesh-external.svc.cluster.local (10.98.10.231) port 80 (#0)
> GET / HTTP/1.1
> Host: my-nginx.mesh-external.svc.cluster.local
> User-Agent: curl/7.77.0-DEV
> Accept: */*
> 
upstream connect error or disconnect/reset before headers. reset reason: connection termination* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< content-length: 95
< content-type: text/plain
< date: Mon, 07 Jun 2021 11:01:08 GMT
< server: envoy
< 
{ [95 bytes data]
* Connection #0 to host my-nginx.mesh-external.svc.cluster.local left intact

Additional info (client istio-proxy and istio-egressgateway logs for the request above):

  1. ISTIO_MUTUAL (the standard, unmodified example):

client pod log:

istio-proxy [2021-06-08T09:18:02.777Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 612 2 1 "-" "curl/7.77.0-DEV" "148be8db-5675-40eb-a246-26f51a5c73d2" "my-nginx.mesh-external.svc.cluster.local" "172.17.0.7:8443" outbound|443|nginx|istio-egressgateway.istio-system.svc.cluster.local 172.17.0.5:37858 10.111.175.215:80 172.17.0.5:50610 - -

egress pod log:

[2021-06-07T11:20:52.907Z] "GET / HTTP/1.1" 200 - via_upstream - "-" 0 612 2 1 "172.17.0.5" "curl/7.77.0-DEV" "f163fbb1-8c9d-4960-9814-fc7bf11549ff" "my-nginx.mesh-external.svc.c 

  2. Custom MUTUAL settings (IP 172.17.0.8 is the istio-egressgateway pod):

client pod log:

[2021-06-07T12:02:20.626Z] "GET / HTTP/1.1" 503 UC upstream_reset_before_response_started{connection_termination} - "-" 0 95 1 - "-" "curl/7.77.0-DEV" "5fb31226-21fd-4c10-882c-f72bed3483e7" "my-nginx.mesh-external.svc.cluster.local" "172.17.0.8:8443" outbound|443|nginx|istio-egressgateway.istio-system.svc.cluster.local 172.17.0.5:49588 10.98.10.231:80 172.17.0.5:41028 - -

egress pod log:

[2021-06-07T11:20:38.018Z] "- - -" 0 NR filter_chain_not_found - "-" 0 0 0 - "-" "-" "-" "-" "-" - - 172.17.0.8:8443 172.17.0.5:44558 - -                                         

Any help would be valuable, because I've been struggling with this on my own and may be making a logical mistake somewhere.

Edit:
Regarding the 8443 port number:

istioctl x describe pod istio-egressgateway-79fcc9c54b-bnbzm -n istio-system                                                          
Pod: istio-egressgateway-79fcc9c54b-bnbzm
   Pod Ports: 8080 (istio-proxy), 8443 (istio-proxy), 15090 (istio-proxy)
Suggestion: add 'version' label to pod for Istio telemetry.
--------------------
Service: istio-egressgateway
   Port: http2 80/HTTP2 targets pod port 8080
   Port: https 443/HTTPS targets pod port 8443

Tested on:

  • 1.10

  • 1.9.2

Additional info:
Original post: mTLS origination for egress traffic with custom mTLS between istio-proxy and egress gateway - Stack Overflow

OK, I've finally solved it. The key point here is this part of the DestinationRule spec:

  • credentialName → NOTE: This field is currently applicable only at gateways. Sidecars will continue to use the certificate paths.

So I’ve modified the following manifests:

the client Deployment in sleep.yml (to mount the certs):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
  # putting the annotations here, at the Deployment level, does not work
  # annotations:
  #   sidecar.istio.io/userVolumeMount: '[{"name":"app-certs", "mountPath":"/etc/istio/egress-app-credential", "readOnly":true}]'
  #   sidecar.istio.io/userVolume: '[{"name":"app-certs", "secret":{"secretName":"egress-app-credential"}}]'
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      annotations: #+                                                                                      
        sidecar.istio.io/userVolumeMount: '[{"name":"app-certs", "mountPath":"/etc/istio/egress-app-credential", "readOnly":true}]' #+
        sidecar.istio.io/userVolume: '[{"name":"app-certs", "secret":{"secretName":"egress-app-credential"}}]' #+
      labels:
        app: sleep
...
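
With these annotations in place, the sidecar injector mounts the secret into the istio-proxy container; a quick way to verify the mount (pod lookup as in the curl command above):

kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" \
  -c istio-proxy -- ls /etc/istio/egress-app-credential
# expected, given the secret keys used above: ca.crt  tls.crt  tls.key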

egressgateway-for-nginx DR:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: egressgateway-for-nginx
spec:
  host: istio-egressgateway.istio-system.svc.cluster.local
  subsets:
  - name: nginx
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
      portLevelSettings:
      - port:
          number: 443
        tls:
          # mode: ISTIO_MUTUAL #-
          mode: MUTUAL #+
          clientCertificate: /etc/istio/egress-app-credential/tls.crt #+
          privateKey: /etc/istio/egress-app-credential/tls.key #+
          caCertificates: /etc/istio/egress-app-credential/ca.crt #+
          sni: my-nginx.mesh-external.svc.cluster.local   

And now all the certs are correctly deployed to my client pod:

istioctl proxy-config secret "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})"
RESOURCE NAME                                                                                   TYPE           STATUS     VALID CERT     SERIAL NUMBER                                        NOT AFTER                NOT BEFORE
file-cert:/etc/istio/egress-app-credential/tls.crt~/etc/istio/egress-app-credential/tls.key     Cert Chain     ACTIVE     true           1                                                    2022-05-06T09:19:24Z     2021-05-06T09:19:24Z
default                                                                                         Cert Chain     ACTIVE     true           200416862686144849012679224886550934182              2021-06-10T07:41:17Z     2021-06-09T07:41:17Z
file-root:/etc/istio/egress-app-credential/ca.crt                                               CA             ACTIVE     true           422042020503057064387036627903001284930102376872     2022-05-06T08:07:57Z     2021-05-06T08:07:57Z
ROOTCA                                                                                          CA             ACTIVE     true           11126135119553711053963756442081214010               2031-06-06T07:45:55Z     2021-06-08T07:45:55Z
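
As an extra sanity check, the sidecar's cluster config for the egress gateway should now reference the mounted files (grepping for the mount path is just a convenient filter over the JSON dump):

istioctl proxy-config cluster "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" \
  --fqdn istio-egressgateway.istio-system.svc.cluster.local -o json \
  | grep egress-app-credential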

Testing it using

kubectl exec "$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})" -c sleep -- curl -sS http://my-nginx.mesh-external.svc.cluster.local

gives the expected result.
