Keep getting 503s in the simplest scenario

Hi,

Using Istio 1.0.4 here.

I’m struggling with random 503s in a stupidly simple application, even when just fetching the JS that displays the page. After setting the proxy log level to debug, I can spot this:

[2019-02-19 09:52:50.754][35][debug][http] external/envoy/source/common/http/conn_manager_impl.cc:190] [C5963] new stream
[2019-02-19 09:52:50.754][35][debug][filter] src/envoy/http/mixer/filter.cc:60] Called Mixer::Filter : Filter
[2019-02-19 09:52:50.754][35][debug][filter] src/envoy/http/mixer/filter.cc:204] Called Mixer::Filter : setDecoderFilterCallbacks
[2019-02-19 09:52:50.754][35][debug][http] external/envoy/source/common/http/conn_manager_impl.cc:889] [C5963][S17073327596826725699] request end stream
[2019-02-19 09:52:50.754][35][debug][http] external/envoy/source/common/http/conn_manager_impl.cc:490] [C5963][S17073327596826725699] request headers complete (end_stream=true):
':authority', 'pp-helpers.test.oami.eu'
':path', '/client.min.js?version=1.7.0'
':method', 'GET'
'user-agent', 'Apache-HttpClient/4.5.6 (Java/1.8.0_151)'
'x-forwarded-for', '10.133.0.44, 10.136.106.52'
'x-forwarded-proto', 'http'
'x-envoy-external-address', '10.136.106.52'
'x-request-id', '817c0d38-f489-9023-9f46-706f504357d5'
'x-envoy-decorator-operation', 'helpers-frontend.preprod-cb.svc.cluster.local:80/*'
'x-b3-traceid', 'b421de1126053c6d'
'x-b3-spanid', 'b421de1126053c6d'
'x-b3-sampled', '1'
'x-istio-attributes', 'Ck8KCnNvdXJjZS51aWQSQRI/a3ViZXJuZXRlczovL2lzdGlvLWluZ3Jlc3NnYXRld2F5LTY5OTZkNTY2ZDQtYmNtamYuaXN0aW8tc3lzdGVtCkYKE2Rlc3RpbmF0aW9uLnNlcnZpY2USLxItaGVscGVycy1mcm9udGVuZC5wcmVwcm9kLWNiLnN2Yy5jbHVzdGVyLmxvY2FsCksKGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBIvEi1oZWxwZXJzLWZyb250ZW5kLnByZXByb2QtY2Iuc3ZjLmNsdXN0ZXIubG9jYWwKSQoXZGVzdGluYXRpb24uc2VydmljZS51aWQSLhIsaXN0aW86Ly9wcmVwcm9kLWNiL3NlcnZpY2VzL2hlbHBlcnMtZnJvbnRlbmQKLQodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USDBIKcHJlcHJvZC1jYgouChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSEhIQaGVscGVycy1mcm9udGVuZA=='
'content-length', '0'
[2019-02-19 09:52:50.754][35][debug][filter] src/envoy/http/mixer/filter.cc:122] Called Mixer::Filter : decodeHeaders
[2019-02-19 09:52:50.754][35][debug][filter] src/envoy/http/mixer/filter.cc:211] Called Mixer::Filter : check complete OK
[2019-02-19 09:52:50.754][35][debug][router] external/envoy/source/common/router/router.cc:252] [C5963][S17073327596826725699] cluster 'inbound|80||helpers-frontend.preprod-cb.svc.cluster.local' match for URL '/client.min.js?version=1.7.0'
[2019-02-19 09:52:50.754][35][debug][router] external/envoy/source/common/router/router.cc:303] [C5963][S17073327596826725699] router decoding headers:
':authority', 'pp-helpers.test.oami.eu'
':path', '/client.min.js?version=1.7.0'
':method', 'GET'
':scheme', 'http'
'user-agent', 'Apache-HttpClient/4.5.6 (Java/1.8.0_151)'
'x-forwarded-for', '10.133.0.44, 10.136.106.52'
'x-forwarded-proto', 'http'
'x-envoy-external-address', '10.136.106.52'
'x-request-id', '817c0d38-f489-9023-9f46-706f504357d5'
'x-b3-traceid', 'b421de1126053c6d'
'x-b3-spanid', 'b421de1126053c6d'
'x-b3-sampled', '1'
'content-length', '0'
[2019-02-19 09:52:50.754][35][debug][pool] external/envoy/source/common/http/http1/conn_pool.cc:89] [C6024] using existing connection
[2019-02-19 09:52:50.754][35][debug][router] external/envoy/source/common/router/router.cc:971] [C5963][S17073327596826725699] pool ready
[2019-02-19 09:52:50.755][35][debug][connection] external/envoy/source/common/network/connection_impl.cc:451] [C6024] remote close
[2019-02-19 09:52:50.755][35][debug][connection] external/envoy/source/common/network/connection_impl.cc:133] [C6024] closing socket: 0
[2019-02-19 09:52:50.755][35][debug][client] external/envoy/source/common/http/codec_client.cc:81] [C6024] disconnect. resetting 1 pending requests
[2019-02-19 09:52:50.755][35][debug][client] external/envoy/source/common/http/codec_client.cc:104] [C6024] request reset
[2019-02-19 09:52:50.755][35][debug][router] external/envoy/source/common/router/router.cc:457] [C5963][S17073327596826725699] upstream reset
[2019-02-19 09:52:50.755][35][debug][filter] src/envoy/http/mixer/filter.cc:191] Called Mixer::Filter : encodeHeaders 2
[2019-02-19 09:52:50.755][35][debug][http] external/envoy/source/common/http/conn_manager_impl.cc:1083] [C5963][S17073327596826725699] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '57'
'content-type', 'text/plain'
'date', 'Tue, 19 Feb 2019 09:52:50 GMT'
'server', 'envoy'
[2019-02-19 09:52:50.755][35][debug][filter] src/envoy/http/mixer/filter.cc:257] Called Mixer::Filter : onDestroy state: 2
[2019-02-19 09:52:50.755][35][debug][pool] external/envoy/source/common/http/http1/conn_pool.cc:122] [C6024] client disconnected
[2019-02-19 09:52:50.755][35][debug][filter] src/envoy/http/mixer/filter.cc:273] Called Mixer::Filter : log
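
For reference, this is roughly how I raised the gateway proxy's log level. The pod name is the one visible in the decoded x-istio-attributes above; the Envoy admin API listens on localhost:15000 inside the pod, and this assumes curl is available in the proxy image:

```bash
# Bump Envoy's log level on the ingress gateway via its admin API
# (pod name as seen in this cluster; adjust for yours):
kubectl -n istio-system exec istio-ingressgateway-6996d566d4-bcmjf \
  -c istio-proxy -- curl -s -X POST "localhost:15000/logging?level=debug"

# Then tail the gateway logs while reproducing the request:
kubectl -n istio-system logs -f istio-ingressgateway-6996d566d4-bcmjf
```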

In the target application, I don’t see any 503 whatsoever. Everything is always 200:

"GET /client.min.js?version=1.7.0 HTTP/1.1" 200 8774206

Can anyone help me understand what's going on here? If I remove Istio and route the same traffic through the NGINX Ingress Controller, everything works perfectly.

Thanks!

You are using the “preprod-cb” namespace.

By default, Istio uses mTLS to send traffic to a namespace, but it doesn't inject the sidecar into new namespaces.

Check the pod to make sure there is a sidecar; you should see something like gcr.io/istio-release/proxyv2 among the images. If it isn't there, label the namespace for injection (existing pods need to be recreated before the sidecar shows up):
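
```bash
# Look for the istio-proxy sidecar image on the pod:
kubectl -n preprod-cb get pod <name> -o yaml | grep "image:"

# If it's missing, enable automatic injection for the namespace:
kubectl label namespace preprod-cb istio-injection=enabled
```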

Sorry, but what does mTLS have to do with this? My example is a simple Istio ingress gateway that routes traffic to Pods in a namespace (whatever its name is) using simple VirtualService rules.

There is no mTLS involved anywhere, and I'm talking about 503s, not mTLS issues.

Also, sidecars do not need to be injected into the destination Pods for the ingress gateway to be able to route requests to them.
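
Concretely, this is a minimal sketch of the kind of config I'm talking about. The resource names are made up for illustration; the host and backend service are the ones visible in the logs above:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: helpers-gateway        # illustrative name
  namespace: preprod-cb
spec:
  selector:
    istio: ingressgateway      # the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - pp-helpers.test.oami.eu
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helpers-frontend       # illustrative name
  namespace: preprod-cb
spec:
  hosts:
  - pp-helpers.test.oami.eu
  gateways:
  - helpers-gateway
  http:
  - route:
    - destination:
        host: helpers-frontend.preprod-cb.svc.cluster.local
        port:
          number: 80
```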

I was thinking of the situation where something causes the gateway to believe mTLS is expected even though the backend speaks plain HTTP. That's when you see consistent 503s, and istioctl authn tls-check reports a CONFLICT.
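
For example (service name taken from this thread; the exact column layout may differ between Istio versions, but a broken setup shows CONFLICT in the STATUS column):

```bash
$ istioctl authn tls-check helpers-frontend.preprod-cb.svc.cluster.local
HOST:PORT                                          STATUS     SERVER   CLIENT   AUTHN POLICY   DESTINATION RULE
helpers-frontend.preprod-cb.svc.cluster.local:80   CONFLICT   mTLS     HTTP     default/       -
```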

You are reporting random 503s (I missed that they were random before). Are they happening during pod creation and deletion, or at steady state?

Ah, ok, now I see your point.

No, this is a simple ingress gateway routing to Pods that don't even have a sidecar injected: basic routing using VirtualService traffic rules.

The logs above are what I could capture from the ingress gateway while it reported the random 503 errors. The exact same scenario works perfectly, without any issue, when the Istio Ingress Gateway is simply replaced with an NGINX Ingress Controller.
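
For what it's worth, one detail stands out when re-reading the capture: the gateway picks a pooled upstream connection ([C6024] "using existing connection") and the very next event on that connection is "remote close", as if the application closed a keep-alive connection just as the gateway reused it; Envoy then surfaces that as an upstream reset and a 503. If that is what's happening, a retry policy on the route should at least paper over the race. A sketch, reusing the illustrative VirtualService from earlier (retries is a standard v1alpha3 HTTPRetry field):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helpers-frontend       # illustrative name, as above
  namespace: preprod-cb
spec:
  hosts:
  - pp-helpers.test.oami.eu
  gateways:
  - helpers-gateway
  http:
  - route:
    - destination:
        host: helpers-frontend.preprod-cb.svc.cluster.local
        port:
          number: 80
    retries:
      attempts: 3        # retry the request up to 3 times
      perTryTimeout: 2s  # timeout per attempt
```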