Mutual TLS over HTTPS on calls via the ingress gateway - is this possible?

Hi All

Is there a possible configuration for mtls between the ingress gateway and an application in the mesh IF the application endpoint being called is HTTPS?

This is what I’m trying to achieve:

[diagram: internet --HTTPS--> ingress gateway (TLS termination) --mTLS--> oauth sidecar --> HTTPS oauth app]

HTTPS calls coming in from the internet are terminated at the gateway (this is what my current setup looks like), then forwarded to the application as an HTTPS request, with mutual TLS on the layer 4 TCP traffic between the gateway and the sidecar of the application being called.

I believe I’m doing this correctly for HTTPS calls WITHIN the mesh from one application to another as per the docs at https://istio.io/docs/tasks/security/https-overlay/ “The reason is that for the workflow “sleep -> sleep-proxy -> nginx-proxy -> nginx”, the whole flow is L7 traffic, and there is a L4 mutual TLS encryption between sleep-proxy and nginx-proxy . In this case, everything works fine.”

However, I’m failing to achieve this with calls from the gateway to a backend HTTPS app (error = “http request sent to a https server”). The only way I can get my HTTPS app to work is by adding a destination rule with TLS mode SIMPLE and a policy that allows non-mTLS traffic to that specific application (the rest of the mesh uses a destination rule with mTLS and an mTLS policy for all services).

When I refer to the documentation, I notice that the server-side proxy in this instance may be downgrading HTTPS to HTTP:

“kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://my-nginx -k”

“The reason is that for the workflow “sleep-proxy -> nginx-proxy -> nginx”, nginx-proxy is expected mutual TLS traffic from sleep-proxy. In the command above, sleep-proxy does not provide client cert. As a result, it won’t work. Moreover, even sleep-proxy provides client cert in above command, it won’t work either since the traffic will be downgraded to http from nginx-proxy to nginx.”

Based on the above, is it the case that because the call in my environment is made from the ingress gateway itself to the HTTPS application, the server-side proxy is downgrading my HTTPS call, ultimately leading to the error “http request sent to a https server”? If so, is there anything that can be done to achieve what I’ve described in the diagram above? And why does the server-side Envoy proxy automatically downgrade HTTPS calls?

Many Thanks

I’d like to understand this use case a little better. In your desired scenario, where are the TLS sessions being originated and terminated?

In particular, the TLS session originated by the user’s browser/endpoint (the cloud in your diagram) — where is it terminated? At the Ingress Gateway? Or does it need to be tunneled through the mTLS session and terminate at the HTTPS oauth application?

Hi thanks for the response. The TLS session from the users browser is terminated at the ingress gateway and the cert on the gateway is presented to the users browser, which is fine.

It’s the forwarded call from the ingress gateway to the sidecar of the backend HTTPS oauth app that I’m trying to secure with mutual TLS. I would then like this sidecar to forward the HTTPS traffic locally to the oauth app that resides within the same pod. In short: ingress-gateway > oauth sidecar > oauth app.

I have a Policy applied that dictates mutual TLS across the namespace, and a DestinationRule applied such that all traffic is sent over mutual TLS.

I think what is happening at the moment is because my ingress gateway is doing the termination and forwarding, it is operating at layer 7 and thus establishing a layer 7 mutual tls connection with the oauth sidecar. When this is terminated at the oauth sidecar the traffic is then forwarded to my https oauth app within the pod over plain http… and hence I get the error message “Client sent an HTTP request to an HTTPS server.”

I have another application within the mesh that makes https calls to this oauth app and does so successfully. I believe this is because envoy recognises that it is a https request and should therefore operate in transparent mode, where the mutual tls connection between sidecars is done at layer 4, and the https stream between client app and oauth app is kept intact. (I do not know how envoy makes the decision to do this).

I’m essentially trying to achieve the same result for the call from the ingress gateway, because at the minute my only way of making the call work is to set SIMPLE mode in a destination rule for this service, as well as removing the mTLS policy, which is not my desired solution. Any advice or correction of my understanding is welcome.

One thing to try if you haven’t: on the K8s Service corresponding to your oauth application, ensure you name the port “https”. https://istio.io/docs/setup/kubernetes/prepare/requirements/

What needs to happen is for Istio to configure the sidecar Envoy to speak HTTPS to the oauth application backend. I think it should do that if you name the port “https,” but if not, we (the security WG) might have to discuss how to handle this config.

I’ve ensured my oauth service is named correctly, but I still get the same problem.

My service:

apiVersion: v1
kind: Service
metadata:
  name: oauth
  labels:
    app: oauth
spec:
  ports:
  - port: 4444
    name: https-oauth-public
  selector:
    app: oauth

This is reflected correctly in my cluster config on the oauth sidecar:

http://localhost:15000/clusters

inbound|4444|https-oauth-public|oauth.default.svc.cluster.local

The traffic destined for the local interface 127.0.0.1:4444 in the oauth pod is still coming out as raw http. And the traffic that is destined for eth0 in the oauth pod arrives as encrypted traffic. The same as before.

Destination rule:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
spec:
  host: "*.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL

Policy:

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
spec:
  peers:
  - mtls: {}

The Istio logs for each service under this configuration:

Oauth pod sidecar:

[2019-05-22 07:05:55.835][26][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:200] [C344] new tcp proxy session

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C344] readDisable: enabled=true disable=true

[2019-05-22 07:05:55.835][26][debug][filter] [src/envoy/tcp/mixer/filter.cc:132] [C344] Called tcp filter onNewConnection: remote 10.0.3.219:39824, local 10.0.3.89:4444

[2019-05-22 07:05:55.835][26][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:343] [C344] Creating connection to cluster inbound|4444|https-oauth-public|oauth.default.svc.cluster.local

[2019-05-22 07:05:55.835][26][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:80] creating a new connection

[2019-05-22 07:05:55.835][26][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:372] [C345] connecting

[2019-05-22 07:05:55.835][26][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C345] connecting to 127.0.0.1:4444

[2019-05-22 07:05:55.835][26][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C345] connection in progress

[2019-05-22 07:05:55.835][26][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:106] queueing request due to no available connections

[2019-05-22 07:05:55.835][26][debug][main] [external/envoy/source/server/connection_handler_impl.cc:257] [C344] new connection

[2019-05-22 07:05:55.835][26][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:133] item added to deferred deletion list (size=1)

[2019-05-22 07:05:55.835][26][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:53] clearing deferred deletion list (size=1)

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C344] socket event: 2

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C344] write ready

[2019-05-22 07:05:55.835][26][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C344] handshake error: 2

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C345] socket event: 2

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C345] write ready

[2019-05-22 07:05:55.835][26][debug][connection] [external/envoy/source/common/network/connection_impl.cc:517] [C345] connected

[2019-05-22 07:05:55.835][26][debug][pool] [external/envoy/source/common/tcp/conn_pool.cc:293] [C345] assigning connection

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C344] readDisable: enabled=false disable=false

[2019-05-22 07:05:55.835][26][debug][filter] [external/envoy/source/common/tcp_proxy/tcp_proxy.cc:542] TCP:onUpstreamEvent(), requestedServerName: outbound_.4444_._.oauth.default.svc.cluster.local

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C344] socket event: 2

[2019-05-22 07:05:55.835][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C344] write ready

[2019-05-22 07:05:55.835][26][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C344] handshake error: 2

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C344] socket event: 3

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C344] write ready

[2019-05-22 07:05:55.836][26][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:131] [C344] handshake complete

[2019-05-22 07:05:55.836][26][debug][filter] [src/envoy/tcp/mixer/filter.cc:171] Called tcp filter onEvent: 2 upstream 127.0.0.1:4444

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:478] [C344] read ready

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:89] [C344] ssl read returns: -1

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C344] socket event: 3

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C344] write ready

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:478] [C344] read ready

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:89] [C344] ssl read returns: 2074

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:89] [C344] ssl read returns: -1

[2019-05-22 07:05:55.836][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:282] [C344] readDisable: enabled=true disable=true

[2019-05-22 07:05:55.836][26][debug][filter] [src/envoy/tcp/mixer/filter.cc:140] Called tcp filter completeCheck: OK


The Ingress gateway log:

[2019-05-22 07:05:55.834][23][debug][router] [external/envoy/source/common/router/router.cc:320] [C914][S13424283891151011264] cluster 'outbound|4444||oauth.default.svc.cluster.local' match for URL '/oauth2*'

[2019-05-22 07:05:55.834][23][debug][router] [external/envoy/source/common/router/router.cc:381] [C914][S13424283891151011264] router decoding headers:

A list of http headers here from the application

[2019-05-22 07:05:55.835][23][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:88] creating a new connection

[2019-05-22 07:05:55.835][23][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C915] connecting

[2019-05-22 07:05:55.835][23][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C915] connecting to 10.0.3.89:4444

[2019-05-22 07:05:55.835][23][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C915] connection in progress

[2019-05-22 07:05:55.835][23][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections

[2019-05-22 07:05:55.835][23][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C914][S13424283891151011264] decode headers called: filter=0x37814a0 status=1

[2019-05-22 07:05:55.835][23][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:133] item added to deferred deletion list (size=2)

[2019-05-22 07:05:55.835][23][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:133] item added to deferred deletion list (size=3)

[2019-05-22 07:05:55.835][23][debug][http2] [external/envoy/source/common/http/http2/codec_impl.cc:577] [C12] stream closed: 0

[2019-05-22 07:05:55.835][23][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:133] item added to deferred deletion list (size=4)

[2019-05-22 07:05:55.835][23][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:368] [C12] dispatched 119 bytes

[2019-05-22 07:05:55.835][23][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:53] clearing deferred deletion list (size=4)

[2019-05-22 07:05:55.835][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C915] socket event: 2

[2019-05-22 07:05:55.835][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C915] write ready

[2019-05-22 07:05:55.835][23][debug][connection] [external/envoy/source/common/network/connection_impl.cc:517] [C915] connected

[2019-05-22 07:05:55.836][23][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:131] [C915] handshake complete

[2019-05-22 07:05:55.836][23][debug][client] [external/envoy/source/common/http/codec_client.cc:64] [C915] connected

[2019-05-22 07:05:55.836][23][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:245] [C915] attaching to next request

[2019-05-22 07:05:55.836][23][debug][router] [external/envoy/source/common/router/router.cc:1165] [C914][S13424283891151011264] pool ready

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:376] [C915] writing 2074 bytes, end_stream false

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C915] write ready

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:208] [C915] ssl write returns: 2074

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C915] socket event: 2

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C915] write ready

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C915] socket event: 2

[2019-05-22 07:05:55.836][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C915] write ready

[2019-05-22 07:05:55.839][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C915] socket event: 3

[2019-05-22 07:05:55.839][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C915] write ready

[2019-05-22 07:05:55.839][23][trace][connection] [external/envoy/source/common/network/connection_impl.cc:478] [C915] read ready

[2019-05-22 07:05:55.839][23][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:89] [C915] ssl read returns: 76

[2019-05-22 07:05:55.839][23][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:89] [C915] ssl read returns: 0

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:363] [C915] parsing 76 bytes

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:476] [C915] message begin

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:442] [C915] headers complete

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:331] [C915] completed header: key= value=

[2019-05-22 07:05:55.839][23][debug][router] [external/envoy/source/common/router/router.cc:717] [C914][S13424283891151011264] upstream headers complete: end_stream=false

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C914][S13424283891151011264] encode headers called: filter=0x3781b30 status=0

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C914][S13424283891151011264] encode headers called: filter=0x3249b30 status=0

[2019-05-22 07:05:55.839][23][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2

[2019-05-22 07:05:55.839][23][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1200] [C914][S13424283891151011264] encode headers called: filter=0x2faaaf0 status=0

[2019-05-22 07:05:55.839][23][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C914][S13424283891151011264] encoding headers via codec (end_stream=false):

':status', '400'

'x-envoy-upstream-service-time', '4'

'date', 'Wed, 22 May 2019 07:05:55 GMT'

'server', 'istio-envoy'


Many thanks for your continued help

So, I don’t think what you want is possible in Istio currently. I chatted with some other devs, and we don’t have a way for you to configure the proxy to set up an HTTPS connection to the local backend.

However, I think you might be able to configure things so your application can still function. On the ingress gateway, configure the TLS mode to PASSTHROUGH. This makes it so the gateway doesn’t terminate the TLS session from the browser, and instead tunnels it through mTLS to the sidecar, where it gets forwarded to your application as TLS. You’ll lose the ability to do traffic management or collect HTTP request-level telemetry, since Istio isn’t decrypting the end-user traffic, but it should at least work.
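For what it’s worth, a minimal sketch of that passthrough setup might look like the following. This assumes a hypothetical hostname (oauth.example.com) and reuses the oauth service and port from earlier in the thread; the resource names are made up. The gateway matches on SNI and tunnels the TLS bytes without terminating them:

```yaml
# Hedged sketch: "oauth.example.com" and the resource names are placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: oauth-passthrough-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-passthrough
      protocol: HTTPS
    tls:
      mode: PASSTHROUGH   # do not terminate TLS at the gateway
    hosts:
    - "oauth.example.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: oauth-passthrough
spec:
  hosts:
  - "oauth.example.com"
  gateways:
  - oauth-passthrough-gateway
  tls:                     # TLS (SNI-based) routing, not HTTP routing
  - match:
    - port: 443
      sniHosts:
      - "oauth.example.com"
    route:
    - destination:
        host: oauth.default.svc.cluster.local
        port:
          number: 4444
```

Note that routing here is based on the SNI value rather than on HTTP attributes, so this approach only distinguishes services if clients present distinct hostnames.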

Thanks for digging into this. Based on this outcome, I think I’ll need to stick to a SIMPLE destination rule and remove my policy for the specific oauth port, relying on plain HTTPS rather than a mutual TLS session. Passthrough worked as you say, however I have other services behind the ingress gateway (forwarded from 443 on the ingress) that I don’t want to configure as passthrough. SNI filtering also doesn’t seem to be a viable strategy for me in this instance, as all calls from the client to the ingress gateway present the same hostname.
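For reference, the fallback described above could look roughly like this. This is a sketch only: the resource names are hypothetical, and the port number is taken from the service posted earlier. The port-level SIMPLE setting tells client proxies (including the gateway) to originate plain TLS to port 4444 without a client certificate, and a Policy that targets the port but declares no peers exempts it from the mesh-wide mTLS requirement:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: oauth-simple-tls   # hypothetical name
spec:
  host: oauth.default.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 4444
      tls:
        mode: SIMPLE       # originate plain TLS, no client cert
---
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: oauth-no-mtls      # hypothetical name
spec:
  targets:
  - name: oauth
    ports:
    - number: 4444
  # no "peers" section: mTLS is not required on this port
```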

Hi, is there any other workaround? I’m thinking about setting up another proxy sidecar just to translate HTTP to HTTPS, because my application forces me to use HTTPS.
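If you go the extra-sidecar route, a rough (untested) sketch of the idea: a small proxy in the same pod listens on a plain-HTTP port and re-encrypts to the HTTPS app, so the Istio sidecar can keep treating the inbound traffic as plain HTTP. Ports here are illustrative:

```nginx
# Hypothetical nginx sidecar config: accept plain HTTP from the Istio
# sidecar on 4445 and originate HTTPS to the app on 4444 in the same pod.
server {
    listen 127.0.0.1:4445;
    location / {
        proxy_pass https://127.0.0.1:4444;
        # nginx does not verify upstream certs by default; enable
        # proxy_ssl_verify if the app's certificate can be validated.
    }
}
```

The Kubernetes Service would then target port 4445 instead of 4444, with the port named http-oauth (or similar) so the sidecar speaks plain HTTP to nginx.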

Hi, is there any solution with a newer Istio release?

Hi all, I am also interested in this feature. Currently it forces us to disable the STRICT PeerAuthentication for an HTTPS backend that receives traffic from the ingress gateway with TLS termination.
In my opinion, the ingress gateway should originate TLS traffic (local outbound) according to the protocol selection scheme on services, e.g. a port named https-web.