Connection termination when using internal service entry

Hello all,

I would like to create a ServiceEntry for a specific DNS name, so that this name can be used from inside the pods. For that reason I created the following ServiceEntry:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: my-service-entry
spec:
  hosts:
  - my-service-test
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: my-service.default.svc.cluster.local

If I run nslookup inside a pod, I get the following response:

nslookup my-service-test

my-service-test.default.svc.cluster.local	canonical name = my-service-test
Name:	my-service-test
Address: 240.240.0.2

According to the Istio documentation this is expected, since the IP address comes from the 240.240.0.0/16 range that Istio uses for automatically allocated addresses.
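
For context, those 240.240.0.0/16 addresses come from Istio's DNS proxying with automatic address allocation. A minimal sketch of the mesh settings that enable this, assuming they are applied through an IstioOperator (an installation may configure them elsewhere), looks like this:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        # Let the sidecar's DNS proxy resolve ServiceEntry hosts
        ISTIO_META_DNS_CAPTURE: "true"
        # Auto-assign virtual IPs (from 240.240.0.0/16) to hosts without an address
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"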

So DNS resolution is working fine, but I am not able to send a request. I get the following error:

curl my-service-test:8080/test
upstream connect error or disconnect/reset before headers. reset reason: connection termination

It works if I use the DNS name of the service instead:

curl my-service.default.svc.cluster.local:8080/test

{"test":"test"}

I can see in the access logs that the IP address of the service that should receive the request is resolved correctly:

{
authority: "my-service-test:8080"
bytes_received: 0
bytes_sent: 95
client_ip: null
downstream_local_address: "240.240.0.2:8080"
downstream_remote_address: "X.X.X.X:36638"
duration: 3
method: "GET"
path: "/test"
protocol: "HTTP/1.1"
requested_server_name: null
response_code: 503
response_code_details: "upstream_reset_before_response_started{connection_termination}"
response_flags: "UC"
route_name: "default"
upstream_cluster: "outbound|8080||my-service-test"
upstream_host: "Y.Y.Y.Y:8080"
upstream_local_address: "X.X.X.X:41590"
upstream_service_time: null
upstream_transport_failure_reason: null
user_agent: "curl/8.4.0"
x_forwarded_for: null
}
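
The UC response flag means the upstream closed the connection before a response was started. To double-check how the sidecar has configured the cluster and endpoints for this host, something like the following should work (the pod name is a placeholder):

# Inspect the generated cluster and its endpoints for the ServiceEntry host
istioctl proxy-config cluster <my-pod> --fqdn my-service-test -o json
istioctl proxy-config endpoints <my-pod> --cluster "outbound|8080||my-service-test"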

I increased the log level on the proxy, but unfortunately I can’t see a more detailed error:

{
level: "debug"
msg: "[C201][***] upstream reset: reset reason: connection termination, transport failure reason: "
scope: "envoy router"
}

{
level: "debug"
msg: "[C201][***] Sending local reply with details upstream_reset_before_response_started{connection_termination}"
scope: "envoy http"
}

{
level: "debug"
msg: "[C201][***] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '95'
'content-type', 'text/plain'
'date', 'Wed, 15 Nov 2023 15:19:50 GMT'
'server', 'envoy'
"
scope: "envoy http"
}
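
(For anyone who wants to reproduce this, one way to raise the proxy log level for these scopes is the istioctl command below; the pod name is a placeholder.)

# Raise the Envoy log level for the http and router loggers on one sidecar
istioctl proxy-config log <my-pod> --level http:debug,router:debug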

Do you have any idea what the problem could be?

I found the issue: it is mTLS. The certificate validation fails, which makes sense since the hostname differs.

It works if I include the correct host name in the Host header:

curl -H "Host: my-service.default.svc.cluster.local" my-service-test:8080/test

{"test":"test"}

Is it possible to tell Istio to also include this hostname in the certificate?
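For example, is there something along the lines of the subjectAltNames field on the ServiceEntry that would help here? A hypothetical variant of the ServiceEntry above (the service account name is a placeholder):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: my-service-entry
spec:
  hosts:
  - my-service-test
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: my-service.default.svc.cluster.local
  # Hypothetical: SPIFFE identity of the workloads behind this host
  # (spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>)
  subjectAltNames:
  - "spiffe://cluster.local/ns/default/sa/my-service"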

Here is the answer from Istio Advisor Plus GPT. I hope it will be helpful for you.

TLDR

The issue you’re facing with mTLS and DNS resolution in Istio is due to a mismatch between the host name used in the ServiceEntry and the actual service DNS. To resolve this, you need to align the host names or adjust the ServiceEntry and VirtualService to handle the mTLS requirements properly.

Explanation

When mTLS is enabled in Istio, the service mesh enforces strict validation of certificates, which includes matching the host name in the certificate with the host name used in the request. If there’s a discrepancy between these host names, the certificate validation will fail, leading to the connection errors you’re experiencing.
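
For context, the strict enforcement described above typically comes from a PeerAuthentication policy like the following (shown only to illustrate the setting being discussed):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  # Mesh-wide: only accept mutual-TLS traffic
  mtls:
    mode: STRICT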

Detailed Answer

  1. Adjust ServiceEntry and VirtualService:

    • The ServiceEntry creates an entry in the service registry for external services. In your case, you’re trying to use it for a service within the cluster but with a custom DNS name.
    • To handle this with mTLS, you might need to adjust the ServiceEntry and potentially a corresponding VirtualService to ensure that the host name matches what’s expected in the service’s certificate.
  2. Align Host Names:

    • One approach is to align the host names in the ServiceEntry with the actual DNS of the service. This might require changing the hosts field in your ServiceEntry to match the service’s DNS name.
  3. Configure VirtualService:

    • You can use a VirtualService to rewrite the host header so that it matches the service’s actual DNS name. This way, the request will pass mTLS validation.

Example Configuration

Here’s an example of how you might configure the VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-virtual-service
spec:
  hosts:
    - "my-service-test"
  http:
  - match:
    - uri:
        prefix: "/test"
    rewrite:
      authority: "my-service.default.svc.cluster.local"
    route:
    - destination:
        host: "my-service.default.svc.cluster.local"
        port:
          number: 8080

Suggested Next Steps

  1. Update ServiceEntry: Modify the ServiceEntry to align with the actual service’s DNS name if necessary.
  2. Implement VirtualService: Use a VirtualService to rewrite the host header as shown in the example.
  3. Test and Validate: After applying these changes, test the setup to ensure that requests are routed correctly and mTLS validation passes.

Summary

By adjusting the configuration to ensure the host names align, you can resolve the mTLS certificate validation issue while still using the custom DNS name for your service.

Hi jimmysong,

Your suggested solution does not work. I’m getting the same error message:

upstream connect error or disconnect/reset before headers. reset reason: connection termination

What does work is the following solution:

  1. Instead of creating a ServiceEntry, we create a regular Kubernetes Service of type ExternalName that points to the target service.

  2. In addition to this, we create a VirtualService that sets the correct Host header. A sketch of both resources is shown below.
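
A minimal sketch of the working setup as I understand it (names and ports taken from earlier in the thread; treat it as an illustration rather than the exact manifests):

apiVersion: v1
kind: Service
metadata:
  name: my-service-test
spec:
  # ExternalName makes my-service-test resolve as a CNAME to the real service
  type: ExternalName
  externalName: my-service.default.svc.cluster.local
  ports:
  - port: 8080
    name: http
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service-test
spec:
  hosts:
  - my-service-test
  http:
  - rewrite:
      # Rewrite the Host header so it matches the target service
      authority: my-service.default.svc.cluster.local
    route:
    - destination:
        host: my-service.default.svc.cluster.local
        port:
          number: 8080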

Thank you for your reply. I’m glad you have solved this problem.