Istio: 1.1.5
Kubernetes: 1.14.1
Namespace with the istio-injection label: easybake
3 pods:
easybake-service.easybake:8000
easybake-ui.easybake:3800
debug.easybake (an Ubuntu container that I shell into; it also has the Istio sidecar)
Auth policies:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: "easybake"
spec:
  peers:
  - mtls:
      mode: PERMISSIVE
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "easybake"
spec:
  host: "*.easybake.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "easybake-ui"
  namespace: "easybake"
spec:
  host: "easybake-ui.easybake.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
Service definitions:
kind: Service
apiVersion: v1
metadata:
  labels:
    app: easybake-ui
    service: easybake-ui
  name: easybake-ui
  namespace: easybake
spec:
  ports:
  - port: 3800
    targetPort: 3800
    name: http
  selector:
    app: easybake-ui
---
kind: Service
apiVersion: v1
metadata:
  name: easybake-service
  namespace: easybake
  labels:
    app: easybake-service
    service: easybake-service
spec:
  selector:
    app: easybake-service
  ports:
  - port: 8000
    targetPort: 8000
    name: http
Output of tls-check:
$ istioctl authn tls-check debug easybake-ui.easybake.svc.cluster.local -n easybake
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
easybake-ui.easybake.svc.cluster.local:3800 OK HTTP/mTLS mTLS default/easybake easybake-ui/easybake
$ istioctl authn tls-check debug easybake-service.easybake.svc.cluster.local -n easybake
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
easybake-service.easybake.svc.cluster.local:8000 OK HTTP/mTLS mTLS default/easybake default/easybake
From the ubuntu debug pod in the easybake namespace:
root@debug:/# curl http://easybake-ui.easybake:3800 -w %{http_code}
upstream connect error or disconnect/reset before headers. reset reason: connection failure 503
root@debug:/# curl http://easybake-service.easybake:8000/admin/ -s -o /dev/null -w %{http_code}
302
I can’t get easybake-ui to stop returning 503s. This doesn’t happen only when the request comes from the debug pod; the debug pod is just what I thought was easiest for testing.
I forgot to mention that if I change this rule:
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "easybake-ui"
  namespace: "easybake"
spec:
  host: "easybake-ui.easybake.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
to
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "easybake-ui"
  namespace: "easybake"
spec:
  host: "easybake-ui.easybake.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
I can get this output in my debug container:
root@debug:/# curl http://easybake-ui.easybake:3800 -s -o /dev/null -w %{http_code}
200
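Setting DISABLE means the calling sidecar sends plaintext, and the PERMISSIVE policy on the receiving side accepts it, which is why this returns 200. If the goal were to keep mTLS for the rest of the namespace while working around this one service, a narrower variant (just a sketch, using the port-level settings field from the v1alpha3 DestinationRule spec) would be:

apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "easybake-ui"
  namespace: "easybake"
spec:
  host: "easybake-ui.easybake.svc.cluster.local"
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 3800
      tls:
        mode: DISABLE

Either way this is a workaround rather than a fix, since it gives up mTLS for that port.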
From the istio-proxy sidecar running in the debug pod mentioned above, I got this output when trying
curl http://easybake-ui.easybake:3800
[2019-05-14 22:25:14.955][26][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C80] handshake error: 2
[2019-05-14 22:25:14.956][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C80] socket event: 3
[2019-05-14 22:25:14.956][26][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C80] write ready
[2019-05-14 22:25:14.956][26][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C80] handshake error: 1
[2019-05-14 22:25:14.956][26][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:175] [C80] TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[2019-05-14 22:25:14.956][26][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C80] closing socket: 0
[2019-05-14 22:25:14.956][26][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C80] disconnect. resetting 0 pending requests
[2019-05-14 22:25:14.956][26][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:129] [C80] client disconnected, failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[2019-05-14 22:25:14.956][26][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:164] [C80] purge pending, failure reason: TLS error: 268435703:SSL routines:OPENSSL_internal:WRONG_VERSION_NUMBER
[2019-05-14 22:25:14.956][26][debug][router] [external/envoy/source/common/router/router.cc:644] [C79][S9423351739022914287] upstream reset: reset reason connection failure
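The key line is WRONG_VERSION_NUMBER: the calling sidecar starts a TLS handshake (per ISTIO_MUTUAL), but whatever answers on port 3800 replies in plaintext HTTP, which OpenSSL then tries to parse as a TLS record. The same failure mode can be reproduced outside Istio entirely; here is a minimal sketch using plain Python sockets (the local server is a stand-in for the easybake-ui container, nothing Istio-specific):

```python
import socket
import ssl
import threading

# A plaintext server standing in for the easybake-ui container:
# it answers with raw HTTP and never sends a TLS ServerHello.
srv = socket.create_server(("127.0.0.1", 0))
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()
    conn.recv(4096)  # this is actually the client's TLS ClientHello
    conn.sendall(b"HTTP/1.1 200 OK\r\ncontent-length: 0\r\n\r\n")
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

# A TLS client standing in for the calling sidecar under ISTIO_MUTUAL.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

err_text = ""
try:
    with socket.create_connection(("127.0.0.1", port), timeout=5) as raw:
        with ctx.wrap_socket(raw):  # handshake is attempted here
            pass
except ssl.SSLError as exc:
    err_text = str(exc)

print(err_text)  # the SSL error, e.g. WRONG_VERSION_NUMBER
```

The client reads "HTTP/1.1 ..." where it expected a TLS record header, so OpenSSL reports a wrong version number, exactly as in the Envoy log above.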
So I might have missed an obvious clue…
I noticed the following difference in the istio-proxy containers between easybake-service and easybake-ui: when I describe the pods, the --applicationPorts value is empty for easybake-ui.
e.g.:
- args:
- proxy
- sidecar
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- easybake-ui.$(POD_NAMESPACE)
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15010
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --proxyAdminPort
- "15000"
- --concurrency
- "2"
- --controlPlaneAuthPolicy
- NONE
- --statusPort
- "15020"
- --applicationPorts
- ""
Here is easybake-service:
- args:
- proxy
- sidecar
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- easybake-service.$(POD_NAMESPACE)
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15010
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --proxyAdminPort
- "15000"
- --concurrency
- "2"
- --controlPlaneAuthPolicy
- NONE
- --statusPort
- "15020"
- --applicationPorts
- "8000"
I have no idea how this happened! Still trying to work that out.
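For what it's worth, in Istio 1.1 the sidecar injector appears to derive --applicationPorts from the containerPort entries on the pod spec, so a pod with no declared ports ends up with an empty list and no inbound listener for 3800. If declaring the port on the container were not an option, I believe the injector also honors a pod annotation to set the inbound ports explicitly (a sketch; I have not verified this annotation on 1.1.5):

metadata:
  annotations:
    traffic.sidecar.istio.io/includeInboundPorts: "3800"

Declaring containerPort on the container is the cleaner fix, though.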
This entire problem was caused by my Deployment resource missing:
...
ports:
- containerPort: 3800
...
containerPort is technically optional in Kubernetes, but it seems that when using the istio-proxy sidecar it's effectively required.
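In case it helps anyone else, here is a sketch of the relevant part of the fixed container spec (the image name is illustrative, only the ports list matters):

containers:
- name: easybake-ui
  image: example/easybake-ui:latest
  ports:
  - containerPort: 3800   # the line that was missing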
"Pod ports: Pods must include an explicit list of the ports each container listens on. Use a containerPort configuration in the container specification for each port. Any unlisted ports bypass the Istio proxy."
– https://istio.io/docs/setup/kubernetes/prepare/requirements/
I looked right past this.
palic, May 23, 2019, 1:32pm
Hi @jstockhausen,
like you, we noticed that updating Kubernetes to 1.14.1 caused the sidecar to fail with 503s where everything had been fine before.
It is annoying to have to revise all of our YAML definitions so that everything still works when a pod restarts.
Thank you anyway for the explanation and for tracking this down.
Jan