istio-ingressgateway pod is failing with a 503 error

I have two Gateways configured, each listening on 0.0.0.0:443 and 0.0.0.0:80. They are both attached to the ingressgateway.
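The two Gateways look roughly like this (the names, cert paths, and hosts below are simplified placeholders, not my exact config): both select the same ingressgateway and both declare a 443 server with identical hosts.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-a            # placeholder name
spec:
  selector:
    istio: ingressgateway    # both Gateways bind to the same ingress gateway pods
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"                    # same hosts as gateway-b on the same port
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-b            # placeholder name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"                    # overlaps with gateway-a's 443 server
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"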

When I checked istioctl proxy-status, LDS wasn't synced between Istio (Pilot) and the ingressgateway (Envoy). When I checked the ingressgateway pod, its readiness probe was failing with a 503 error.

$kc describe pod istio-ingressgateway-6f949f8f78-zbtbm
Events:
Warning Unhealthy 44m (x3 over 44m) kubelet, ip-10-0-60-229.us-west-2.compute.internal Readiness probe failed: HTTP probe failed with statuscode: 503

$kc logs istio-ingressgateway-6f949f8f78-zbtbm
[2019-04-09 17:09:34.051][18][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:70] gRPC config for type.googleapis.com/envoy.api.v2.Listener rejected: Error adding/updating listener 0.0.0.0_443: error adding listener '0.0.0.0:443': multiple filter chains with the same matching rules are defined

I deleted the 443 config on one of the Gateways, and it started working; Istio and Envoy (LDS) synced.

The HTTP configuration works on both Gateways, i.e. 0.0.0.0:80. Is this a bug?
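For what it's worth, besides deleting one 443 server, the conflict also seems to go away if the two Gateways don't declare identical matching rules on 443, e.g. by giving each HTTPS server distinct, non-overlapping hosts so the generated filter chains differ. A rough sketch with placeholder hostnames:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-a
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-a
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "a.example.com"        # placeholder; distinct from gateway-b's hosts
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway-b
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https-b
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "b.example.com"        # placeholder; no overlap, so the 443 filter-chain matches differ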


Seeing the same issue. This was after upgrading to 1.1.12 and enabling IPVS.