Hello,
I am running Elasticsearch with Istio in Kubernetes, and I have a client app that is also in the mesh. The client speaks to Elasticsearch via the official Elasticsearch client, and the sidecars do mutual TLS.
The client makes some keep-alive connections to Elasticsearch, which works fine. But if one of the Elasticsearch pods fails, I can see from /proc/net/tcp that the connection my client made sticks around. When the pod restarts it gets a new IP, so the old connection is now pointing at a stale IP, and requests over it get repeated 503s.
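For context, this is roughly how I'm inspecting the connections from inside the client container (a quick Python sketch of my check; the byte-order decoding assumes a little-endian host, and the addresses it prints are whatever happens to be live):

```python
#!/usr/bin/env python3
"""Rough sketch: decode the ESTABLISHED entries in /proc/net/tcp so I can
spot connections still pointing at the old (stale) Elasticsearch pod IP."""
import socket
import struct

def decode(hex_addr: str) -> str:
    """Convert e.g. '0100007F:23F0' (hex IP:port) to '127.0.0.1:9200'."""
    ip_hex, port_hex = hex_addr.split(":")
    # The kernel prints the IP in host byte order; assume little-endian here.
    ip = socket.inet_ntoa(struct.pack("<I", int(ip_hex, 16)))
    return f"{ip}:{int(port_hex, 16)}"

with open("/proc/net/tcp") as f:
    next(f)  # skip the header line
    for line in f:
        fields = line.split()
        local, remote, state = fields[1], fields[2], fields[3]
        if state == "01":  # 01 == TCP_ESTABLISHED
            print(decode(local), "->", decode(remote))
```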
If I kill the istio-proxy container but keep my client container alive, I can see the connections get closed and everything resolves itself.
Is this all as intended? I'm a little surprised; I would have thought an upstream connection close would get propagated to the downstream connection. Is the resolution that my client should close its connection when it sees a 503?
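If the answer is that the client has to handle it, I imagine it would look something like this (a minimal sketch, assuming the Python client, elasticsearch-py 7.x; the host URL is a placeholder for my service):

```python
"""Possible client-side mitigation: retry requests that come back with a 503
so the client's connection pool can drop the bad connection and try another,
rather than my app reusing the stale one forever."""
from elasticsearch import Elasticsearch

es = Elasticsearch(
    ["http://elasticsearch.default.svc.cluster.local:9200"],  # placeholder URL
    retry_on_status=(502, 503, 504),  # retry requests that hit these codes
    retry_on_timeout=True,            # also retry on connection timeouts
    max_retries=3,                    # give up after a few attempts
)

print(es.cluster.health())
```

But that feels like working around the mesh rather than with it, so I'd like to understand whether this behaviour is expected first.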