Multi-Primary 503 from other cluster Pods

Hello, we are trying to set up a multi-cluster service mesh [Multi-Primary] on EKS following the doc at Istio / Install Multi-Primary.
I have also deployed the helloworld application following the doc at Istio / Verify the installation. However, I can see traffic going only to the local cluster, even though the remote cluster pod also shows as healthy in the following command output.

% istioctl proxy-config endpoint sleep-64d7d56698-rx6tj -n sample | grep helloworld
10.21.37.66:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
10.21.47.116:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local

Here 10.21.47.116 is the local cluster Pod IP and 10.21.37.66 is the remote cluster Pod IP. I have also followed the troubleshooting steps given at Istio / Troubleshooting Multicluster, and everything looks good. I have confirmed that both Pods [worker nodes] are in the same availability zone [AWS EKS]. Traffic goes to the second cluster only if I bring down the helloworld application in the first cluster.
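
I am checking which cluster serves each request via the version string in the /hello response (per the verification doc, cluster1 runs helloworld-v1 and cluster2 runs helloworld-v2), roughly like this:

% for i in $(seq 10); do kubectl -n sample exec sleep-64d7d56698-rx6tj -c sleep -- curl -sS helloworld.sample:5000/hello; done

When cross-cluster load balancing works, the responses should alternate between 'Hello version: v1, instance: ...' and 'Hello version: v2, instance: ...'; in my case only the local version ever comes back.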
However, when the request does go to the second cluster, I get a 503 response code with the error 'upstream connect error or disconnect/reset before headers. reset reason: connection failure'.

% for i in $(seq 10); do kubectl -n sample exec sleep-64d7d56698-rx6tj -c sleep -- curl -s helloworld:5000; done
upstream connect error or disconnect/reset before headers. reset reason: connection failureupstream connect error or disconnect/reset before headers. reset reason: connection failureupstream connect error or disconnect/reset before headers. reset reason: connection failureupstream connect error or disconnect/reset before headers. reset reason: connection failureupstream connect error or disconnect/reset before headers. reset reason: connection failure

% for i in $(seq 10); do kubectl -n sample exec sleep-64d7d56698-rx6tj -c sleep -- curl -I -s helloworld:5000; done
HTTP/1.1 503 Service Unavailable
content-length: 91
content-type: text/plain
date: Wed, 04 Aug 2021 12:28:14 GMT
server: envoy
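
In case it helps with further debugging, I can also look at the sleep pod's sidecar access log (assuming mesh access logging is enabled); as far as I understand, a UF response flag on these requests would point to a raw TCP connection failure towards the remote Pod IP rather than an application-level error:

% kubectl logs sleep-64d7d56698-rx6tj -n sample -c istio-proxy | grep helloworld | tail -5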

I would really appreciate it if someone could help with troubleshooting this issue further.

Thanks

This was an issue with the Security Group rules on the EKS worker nodes. I fixed it by allowing all traffic from the cluster1 worker node security group ID in the cluster2 security group, and vice versa. The catch was that a telnet to the cluster2 Pod IP from the sleep container in cluster1 always shows as connected, probably because the local Envoy sidecar proxy accepts the connection. I had to deploy an Ubuntu pod in a namespace where the Envoy proxy sidecar is not auto-injected to confirm that telnet was actually failing to reach the second cluster's Pod IP.
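
For anyone hitting the same thing, the security group change was essentially the following (the group IDs are placeholders for the cluster1 and cluster2 worker node security groups; in my case I simply allowed all traffic between the two node groups):

% aws ec2 authorize-security-group-ingress --group-id <cluster2-node-sg-id> --protocol all --source-group <cluster1-node-sg-id>
% aws ec2 authorize-security-group-ingress --group-id <cluster1-node-sg-id> --protocol all --source-group <cluster2-node-sg-id>

And to verify raw connectivity to the remote Pod IP without the sidecar answering the TCP handshake locally, a throwaway pod in a namespace without automatic sidecar injection works just as well as a full Ubuntu pod, for example:

% kubectl run nettest -n default --image=busybox --restart=Never -it --rm -- telnet 10.21.37.66 5000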