We use Istio to deploy our services and have configured TLS SNI-based routing rules. Here's a snippet of our VirtualService YAML:
spec:
  gateways:
  - istio-system/istio-ingressgateway
  hosts:
  - service.com
  - deployment0.com
  - deployment1.com
  tls:
  - match:
    - port: 8443
      sniHosts:
      - service.com
    route:
    - destination:
        host: service.local
        port:
          number: 8443
  - match:
    - port: 8443
      sniHosts:
      - deployment0.com
    route:
    - destination:
        host: deployment-0.local
        port:
          number: 8443
  - match:
    - port: 8443
      sniHosts:
      - deployment1.com
    route:
    - destination:
        host: deployment-1.local
        port:
          number: 8443
Here, deployment-0.local and deployment-1.local refer to the K8s StatefulSet pods, and service.local to the K8s Service front-ending those pods.
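For context, each per-pod host is backed by its own Service that selects exactly one StatefulSet pod. The sketch below shows roughly what such a Service looks like; the names and namespace are illustrative (our actual manifests differ), but the statefulset.kubernetes.io/pod-name label is the one Kubernetes sets automatically on StatefulSet pods:

spec-style sketch:

```yaml
# Hypothetical per-pod Service (one per StatefulSet replica).
# Assumes the pod is named "deployment-0" and lives in namespace "local",
# so Istio's short host "deployment-0.local" resolves to it.
apiVersion: v1
kind: Service
metadata:
  name: deployment-0
  namespace: local
spec:
  selector:
    # Standard label Kubernetes adds to every StatefulSet pod,
    # pinning this Service to exactly one replica.
    statefulset.kubernetes.io/pod-name: deployment-0
  ports:
  - name: tls
    port: 8443
    targetPort: 8443
```

Because each such Service matches a single pod, there is no second endpoint for Istio to fall back to when that pod is down.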
So, when a request from a client carries the SNI deployment0.com (in the TLS ClientHello), Istio routes it to the stateful pod behind deployment-0.local. However, if that pod happens to be down at that moment, Istio appears to return an HTTP 503 to the client.
Instead of returning HTTP 503, is there a way to make Istio send the request to any other available pod when the pod it would normally route to is down?
TIA!