How to enable mTLS STRICT with MongoDB

Hi,

I have run into a technical difficulty: I am trying to enable “STRICT” mutual TLS.

I have a stateless service (name: “my-service” / ServiceAccount / Service / Deployment) and a stateful database (name: “database” / ServiceAccount / Service with clusterIP: None & port 27017 / StatefulSet).

Without PeerAuthentication everything works well, but when I enable STRICT PeerAuthentication in ‘istio-system’, the service doesn’t start correctly (1/2 READY).

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
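As an aside, while debugging it can help to scope the policy to the workload namespace instead of the whole mesh; a minimal sketch, assuming the namespace used elsewhere in this post:

```yaml
# Namespace-scoped alternative: STRICT applies only in my-namespace,
# while the rest of the mesh keeps its default (PERMISSIVE) behaviour.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  mtls:
    mode: STRICT
```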

I tried to add a “DestinationRule”:

apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: database
  namespace: my-namespace
spec:
  host: database
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

I tried to add an “AuthorizationPolicy”:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: database
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: database
  rules:
  - from:
    - source:
        principals: ["*"]
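Once STRICT mTLS works, the wildcard principal above can be narrowed to the caller’s service account identity. The SPIFFE-style name below is an assumption based on the names in this post; the trust domain should be adjusted if it isn’t cluster.local:

```yaml
# Sketch: allow only the "my-service" service account to reach the database.
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/my-namespace/sa/my-service"]
```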

Without success…

To connect to the database, I use “database” as the host and 27017 as the port.
The service and the database are in the same namespace.

Any help is welcome :slight_smile:

Istio’s auto mTLS should start mTLS automatically as long as the sidecar is injected properly. A few things to check:

  • Can you make sure the sidecar is injected properly in your database namespace? See Istio / Installing the Sidecar.
  • Are you calling the database app from the same namespace? If so, make sure the client/caller is in a namespace that has the sidecar injected properly.
  • Delete any DestinationRules (or just their TLS settings, if you need the rest) in the client/caller namespace. TLS settings in DestinationRules are only needed to override the default mTLS behaviour (to disable mTLS or use custom TLS).
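For reference, a DestinationRule TLS override of the kind mentioned above looks like the fragment below; a stray rule of this shape in the client namespace would force plaintext traffic that a STRICT server-side policy then rejects (host name is illustrative):

```yaml
# Fragment of a DestinationRule that DISABLES mTLS for a host.
# If something like this exists for the database host, delete it
# (or just this trafficPolicy.tls section) before testing STRICT mode.
spec:
  host: database
  trafficPolicy:
    tls:
      mode: DISABLE
```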

Hi, thank you for your answer,

  • Sidecar injection is OK: I have 2/2 containers on each pod and I can access envoy logs.
  • The service and the database are in the same namespace.
  • I don’t have DestinationRules.

In the meantime, I investigated further and discovered that the problem does not seem to come from mTLS.

So I posted a second question, but the anti-spam filter blocked it:

« Our automated spam filter, Akismet, has temporarily hidden your post in Standard service to Stateful service communication error 503 for review.
A staff member will review your post soon, and it should appear shortly.
We apologize for the inconvenience. »

In this post I explain this:

With mTLS disabled:

  • a “Deployment service” can communicate with another “Deployment service”;
  • a “Deployment service” can communicate with a “StatefulSet service” over TCP, as with MongoDB;
  • but a “Deployment service” cannot communicate with a “StatefulSet service” over HTTP;
  • I can expose the “StatefulSet service (HTTP)” through the ingress without any problem.

So I may have a problem with mTLS (or not), but I must solve this new problem (maybe the same one) first :slight_smile:

My configuration: Istio 1.13.2. I configured HTTP redirection, HTTPS, JWK, and routing of some paths to several services; it works fine (thanks to the Istio documentation).

My problem:

I created an HTTP server as a StatefulSet service; the pod is listed as follows:
my-app-0 2/2 Running 2 (48m ago) 54m

If I add an entry to the virtual service (ingress), I can access it on port 80 (externally).

If I do port forwarding, I can access it on port 2002→8080.
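For context, the ingress entry is presumably along these lines (the gateway and external host names below are placeholders, not taken from my cluster):

```yaml
# Hypothetical VirtualService route exposing /version via the ingress gateway.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
  namespace: my-namespace
spec:
  hosts:
  - "example.com"        # placeholder external host
  gateways:
  - my-gateway           # placeholder Gateway name
  http:
  - match:
    - uri:
        prefix: /version
    route:
    - destination:
        host: my-app.my-namespace.svc.cluster.local
        port:
          number: 80
```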

Everything seems to work fine, but when I try to access it from a Deployment service (not a StatefulSet) using curl like this:
kubectl -n my-namespace exec my-service-fdd8cb667-mgd5h -c app -- curl -v "my-app-0.my-app.my-namespace.svc.cluster.local:80/version"

The client gets this error: upstream connect error or disconnect/reset before headers. Reset

And the proxy gives this server-side error:
[2022-05-01T14:32:18.703Z] "GET /version HTTP/1.1" 503 UF upstream_reset_before_response_started{connection_failure,delayed_connect_error:111} - "-" 0 145 0 - "-" "curl/7.64.0" "b7f96317-d31d-42c7-8090-3dda0e773d89" "my-app-0.my-app.my-namespace.svc.cluster.local" "172.17.0.14:80" InboundPassthroughClusterIpv4 - 172.17.0.14:80 172.17.0.15:51438 outbound.80_._.my-app.my-namespace.svc.cluster.local default

So the connection is established (between the Deployment and the StatefulSet), but something is broken at the proxy?!
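One plausible reading of the log (an assumption, not something verified on my cluster): delayed_connect_error:111 is ECONNREFUSED, and InboundPassthroughClusterIpv4 means the server-side sidecar passed the connection through to the pod on the original port 80. Because the Service is headless, my-app-0.my-app.my-namespace.svc.cluster.local resolves straight to the pod IP with no port translation, so port 80 hits a socket nothing is listening on (the container listens on 8080). A sketch of one possible fix, aligning the Service port with the container port:

```yaml
# Headless Service ports with port == targetPort, so that direct
# pod-DNS access (my-app-0.my-app...) on the advertised port reaches
# a listening socket in the container.
  ports:
  - name: http-web
    port: 8080
    targetPort: 8080
```

Alternatively, the curl above could simply target port 8080 on the pod hostname instead of port 80.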

Note: I didn’t change the Envoy configuration or add any rules; I just did what I described after “My configuration:” a little higher in this post.

apiVersion: v1
kind: ServiceAccount
metadata:
  name:  my-app
  namespace:  my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name:  my-app
  namespace:  my-namespace
  labels:
    app:  my-app
spec:
  clusterIP: None	# Headless.
  ports:
  - name: http-web
    port: 80
    targetPort: 8080
  selector:
    app:  my-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name:  my-app
  namespace:  my-namespace
  labels:
    app:  my-app
spec:
  selector:
    matchLabels:
      app:  my-app
  serviceName:  my-app
  replicas: 1
  template:
    metadata:
      labels:
        app:  my-app
        version: 0.0.1-beta2
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "false"
    spec:
      serviceAccountName:  my-app
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: my-app:latest
        imagePullPolicy: Never
        name:  my-app
        ports:
        - containerPort: 8080
        securityContext:
          privileged: false
#        readinessProbe:
#          httpGet:
#            path: /ready
#            port: 8080
#          periodSeconds: 5
#          failureThreshold: 10
#        livenessProbe:
#          httpGet:
#            path: /live
#            port: 8080
#          initialDelaySeconds: 10
#          periodSeconds: 30
#          failureThreshold: 2
---
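A side note on the commented-out probes above: with sidecar.istio.io/rewriteAppHTTPProbers: "false" and STRICT mTLS, the kubelet’s plain-HTTP health probes would be rejected by the sidecar if those probes were re-enabled. The usual setting in that case is a sketch like this (not what my deployment currently uses):

```yaml
# Let Istio rewrite HTTP probes so the kubelet reaches the app through
# the sidecar's non-mTLS probe path under STRICT mode.
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
```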

Thanks for your help,

Arnaud