mTLS policy not working as expected

Hello folks,

I have two services, ServiceA and ServiceB. ServiceA receives traffic from outside via an ingress gateway and a VirtualService, and then talks to ServiceB. This connection works as expected.
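
For context, the ingress routing looks roughly like this (not my exact manifest; the gateway name, hosts, and path are placeholders):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: welcomeservice
spec:
  hosts:
  - "*" # placeholder, any host
  gateways:
  - welcome-gateway # placeholder gateway name
  http:
  - route:
    - destination:
        host: welcomeservice
        port:
          number: 8081 # the service port defined below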

Then I added a Policy for ServiceB to require mTLS. My understanding is that ServiceA should still be reachable, but the communication between ServiceA and ServiceB should fail, because there is no matching DestinationRule set yet.
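
(For reference, the DestinationRule that I have deliberately not applied yet would look something like this:)

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: calcservice-mtls
spec:
  host: calcservice.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL # client sidecars then originate mTLS towards calcservice
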
But when curling ServiceA several times (via a small script, sketched after the list below), I get three different responses:

1: The requested URL returned error: 503 Service Unavailable

2: Service A reached succesfull : 1 (getting a number means ServiceB was reached successfully)

3: Service A reached successfull:
Error getting the luckynumber: org.springframework.web.client.HttpServerErrorException$ServiceUnavailable: 503 Service Unavailable: [upstream connect error or disconnect/reset before headers. reset reason: connection termination]
  1. seems to indicate that ServiceA is not reachable at all; I'd assume it should still be reachable.
  2. indicates that ServiceA can talk to ServiceB, which I would not expect (bug?).
  3. is the state I would expect: ServiceA is reachable and tries to get a number from ServiceB, which fails.
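
The script mentioned above is nothing fancy, roughly this (the ingress address and the path are placeholders):

#!/bin/sh
# call ServiceA through the ingress gateway a number of times;
# with -f curl reports HTTP errors as "The requested URL returned error: ..."
INGRESS_HOST=192.0.2.10 # placeholder: ingress gateway external IP
for i in $(seq 1 20); do
  curl -fsS "http://$INGRESS_HOST/welcome" # /welcome is a placeholder path
  echo
done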

The Policy:

kind: "Policy"
apiVersion: "authentication.istio.io/v1alpha1"
metadata:
  name: "calcservices-mtls-enable"
  namespace: "default"
spec:
  targets:
  - name: calcservice
  peers:
  - mtls: {}
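
In case it helps with reproducing: as far as I know, the effective TLS settings for a client/server pair can be inspected with istioctl's tls-check (still available in 1.4), e.g.:

istioctl authn tls-check \
  $(kubectl get pod -l app=welcome -o jsonpath='{.items[0].metadata.name}') \
  calcservice.default.svc.cluster.local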

FYI: ServiceA = welcomeservice, ServiceB = calcservice, Istio 1.4.4 / K8s 1.17.2

kind: Service
apiVersion: v1
metadata:
    name: welcomeservice
    labels:
        app: welcome
        service: welcomeservice
spec:
    selector:
        app: welcome
    ports:
        - protocol: "TCP"     
          targetPort: 8080 # the port to forward traffic to inside the pod     
          port: 8081 # port to access pod from inside the cluster
        
---
apiVersion: apps/v1
kind: Deployment
metadata:
    name: welcome-deployment #name of the deployment
spec:
    minReadySeconds: 5
    replicas: 3 # three replicas in the desired state
    selector:
        matchLabels:
            app: welcome
    template:
        metadata:
            labels:
                app: welcome
        spec:
            imagePullSecrets:
            #kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
                - name: myregkey # create key via command in shell
            containers:
                - name: welcome-container
                  image: jenskaras/welcome:v3
                  imagePullPolicy: Always
                  ports:
                  - containerPort: 8081 # port the container exposes
                    protocol: TCP # redundant, as TCP is the default if none is specified

---
kind: Service
apiVersion: v1
metadata:
    name: calcservice #use this to talk to the calc application
spec:
    selector: # target of this service
         app: calc
    ports:
        - protocol: "TCP" 
          targetPort: 8080 # the port to forward traffic to inside the pod
          port: 8080 # port to access from inside the cluster
    #type: LoadBalancer

---
apiVersion: apps/v1
kind: Deployment
metadata:
    name: calc-deployment #name of the deployment
spec:
    minReadySeconds: 5
    replicas: 3 # three replicas in the desired state
    selector:
        matchLabels:
            app: calc
    template:
        metadata:
            labels:
                app: calc
        spec:
            imagePullSecrets:
            #kubectl create secret docker-registry <name> --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL
              - name: myregkey # create key via command in shell
            containers:
                - name: calc-container
                  image: jenskaras/welcome:calculatorV1
                  imagePullPolicy: Always
                  ports:
                      - containerPort: 8080
                        protocol: TCP
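
To take the ingress path out of the picture, calling calcservice directly from one of the welcome pods should show the same behaviour (assuming curl is available in the image; the /luckynumber path is a guess based on the error message above):

kubectl exec "$(kubectl get pod -l app=welcome -o jsonpath='{.items[0].metadata.name}')" \
  -c welcome-container -- curl -sS http://calcservice:8080/luckynumber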

Can someone explain to me why this happens?

This seems like it could be a bug? On further investigation:

I ran istioctl manifest apply --set values.global.mtls.auto=false, and after that the connection was rejected every time, which is what I expected.

With the value set to true I'd have expected the connection to work every time, instead of the mixed results above.
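
For completeness, I believe the --set flag above corresponds to this fragment of the installation values (Istio 1.4):

global:
  mtls:
    auto: false # when true, client sidecars automatically use mTLS towards servers whose sidecars support it, even without a DestinationRule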