How to handle downstream keep-alive connections that are not closed when the upstream closes

Hello,

I am running Elasticsearch with Istio in Kubernetes, and I have a client app that also uses Istio. This client talks to Elasticsearch via the official Elasticsearch client. The sidecars do mutual TLS.

The client makes some keep-alive connections to Elasticsearch, and all is well. But if one of the Elasticsearch pods fails, I can see from /proc/net/tcp that the connection my client made sticks around. When the Elasticsearch pod restarts it gets a new IP, so the old connection now points at a stale IP and every request over it gets repeated 503s.
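For anyone who wants to reproduce the check, here is a minimal sketch of how I inspect the table (assuming IPv4 and a little-endian host, which is how Linux encodes the addresses in that file; everything else is standard /proc/net/tcp layout):

```python
import socket
import struct

def established_remotes(path="/proc/net/tcp"):
    """Yield (ip, port) for every ESTABLISHED IPv4 connection."""
    with open(path) as f:
        lines = f.readlines()[1:]  # skip the header row
    for line in lines:
        fields = line.split()
        remote, state = fields[2], fields[3]
        if state != "01":  # 01 == TCP_ESTABLISHED
            continue
        ip_hex, port_hex = remote.split(":")
        # IPv4 addresses are stored as little-endian hex in /proc/net/tcp
        ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
        yield ip, int(port_hex, 16)

for ip, port in established_remotes():
    print(f"{ip}:{port}")
```

Running this inside the client container before and after the Elasticsearch pod restart shows the connection to the old pod IP is still listed as ESTABLISHED.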

If I kill the Istio sidecar container but keep my client container alive, I can see the connections get closed and everything resolves itself.

Is this all as intended? I'm a little surprised; I would have thought an upstream connection close would be propagated to the downstream. Is the resolution that my client should close its connection when it sees a 503?

Yes, this is a known problem. Please refer to Allow disabling connection pooling and use 1:1 connection with upstream and downstream · Issue #19458 · envoyproxy/envoy · GitHub for the details. Currently you will have to reconnect/re-resolve on request failures caused by a pod going down, along the lines of the sketch below.
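Here is a minimal sketch of that workaround, assuming the official Python client (elasticsearch-py 7.x, where TransportError exposes status_code); the service URL, index, and helper names are hypothetical. Rebuilding the client discards its pooled connections, so the next request opens a fresh connection through the sidecar:

```python
from elasticsearch import Elasticsearch
from elasticsearch.exceptions import TransportError

ES_HOSTS = ["http://elasticsearch:9200"]  # hypothetical in-cluster service URL

def make_client():
    # A fresh client opens new TCP connections instead of reusing
    # the pooled connection that is pinned to the stale pod IP.
    return Elasticsearch(ES_HOSTS)

es = make_client()

def search_with_reconnect(index, body, retries=2):
    """Retry a search, rebuilding the client when the sidecar returns 503."""
    global es
    for attempt in range(retries + 1):
        try:
            return es.search(index=index, body=body)
        except TransportError as exc:
            # A 503 here is Envoy failing the request over the stale
            # upstream connection; drop the pool and try again.
            if exc.status_code == 503 and attempt < retries:
                es = make_client()
            else:
                raise
```

The same idea applies in any language: treat a 503 from the sidecar as a signal to tear down the client's connection pool rather than retrying on the same connection.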


Thank you! That’s what I needed to see.

Do you have any temporary solution to avoid this issue, e.g. traffic.sidecar.istio.io/excludeOutboundPorts or something similar? Thanks
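For context, I mean something like the following on the client pod template (a sketch; the deployment name and image are placeholders, and 9200 is the assumed Elasticsearch HTTP port). As I understand it, excluded outbound ports bypass Envoy interception entirely, so there is no sidecar connection pooling, but that traffic also loses the sidecar's mTLS:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-client              # hypothetical name
spec:
  selector:
    matchLabels:
      app: my-client
  template:
    metadata:
      labels:
        app: my-client
      annotations:
        # Outbound traffic to port 9200 skips the Envoy sidecar,
        # avoiding its connection pool -- but also skipping mTLS.
        traffic.sidecar.istio.io/excludeOutboundPorts: "9200"
    spec:
      containers:
        - name: my-client
          image: my-client:latest   # hypothetical image
```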