Istio TCP Connection Failure Connection Refused

I have pods in the mesh that I injected with the Istio sidecar. In most of the logs I'm seeing quite a few errors like:

TCP connection failed: (Connection refused)

I was thinking this was due to AuthorizationPolicies, so I tried applying an AuthorizationPolicy to my namespace, but I'm still seeing the TCP connection failures in my pods' logs:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: policy
  namespace: foo
Is there a way to resolve this?
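One thing I noticed while reading the Istio authorization docs: an ALLOW-action AuthorizationPolicy whose spec contains no rules matches nothing, so it actually denies all traffic to the workloads it selects. To rule authorization out, an explicit allow-all policy for the namespace should look something like this (assuming the security.istio.io/v1beta1 API; the policy name is arbitrary):

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-all
  namespace: foo
spec:
  rules:
  - {}

If the connection failures persist even with an allow-all policy (or with no AuthorizationPolicies at all), the problem is probably not authorization.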

Some more info on this issue: it seems to be a problem with service-to-service communication in the mesh. I have two pods (podA and podB) in the same Kubernetes cluster that should communicate with one another. The Service type for both pods is ClusterIP. Everything works fine before injecting the Envoy sidecar. Once the sidecar is injected, I see the following in podA's logs:

[2020/04/06 12:05:29] [error] [filter_kube] upstream connection error
[2020/04/06 12:05:29] [ warn] [filter_kube] could not get meta for POD podA
[2020/04/06 12:05:29] [ info] [http_server] listen iface= tcp_port=9090
[2020/04/06 12:05:29] [ info] [sp] stream processor started
[2020/04/06 12:05:29] [error] [filter_kube] upstream connection error
[2020/04/06 12:05:29] [error] [filter_kube] upstream connection error
[2020/04/06 12:05:29] [error] [filter_kube] upstream connection error
[2020/04/06 12:05:29] [error] [filter_kube] upstream connection error
[2020/04/06 12:05:29] [error] [filter_kube] upstream connection error
[2020/04/06 12:05:30] [error] [io] TCP connection failed: podB.svc.cluster.local:24224 (Connection refused)
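One pattern in the timestamps: all of the failed connection attempts happen at 12:05:29–12:05:30, which is before Envoy reports ready (12:05:34 in the istio-proxy log further down). With sidecar injection, iptables redirects all outbound TCP through Envoy, so connections attempted before the proxy is live get refused, and if the client doesn't retry it can look permanently broken. One workaround I've seen, assuming the traffic to port 24224 doesn't actually need to go through the mesh, is to bypass the sidecar for that port with a pod-template annotation (only the annotation is the relevant part; the surrounding Deployment fields are illustrative):

spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/excludeOutboundPorts: "24224"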

I also checked the istio-proxy logs and saw the following:

2020-04-06T12:05:30.152962Z info Envoy command: [-c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster podA.logging --service-node sidecar~ --max-obj-name-len 189 --local-address-ip-version v4 --log-format [Envoy (Epoch 0)] [%Y-%m-%d %T.%e][%t][%l][%n] %v -l warning --component-log-level misc:error --concurrency 2]
[Envoy (Epoch 0)] [2020-04-06 12:05:30.193][23][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:91] gRPC config stream closed: 14, no healthy upstream
[Envoy (Epoch 0)] [2020-04-06 12:05:30.193][23][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:54] Unable to establish new stream
2020-04-06T12:05:32.685545Z info Envoy proxy is NOT ready: server is not live, current state is: INITIALIZING
2020-04-06T12:05:34.115072Z info Envoy proxy is ready
[Envoy (Epoch 0)] [2020-04-06 12:06:49.110][23][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:91] gRPC config stream closed: 13,
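For what it's worth, the "gRPC config stream closed" / "no healthy upstream" warnings look like transient startup noise, since the proxy reports ready a few seconds later. Another thing I'm checking is whether podB's Service declares the protocol for port 24224, since Istio uses the port name prefix for protocol selection (a tcp- prefix marks it as plain TCP). The manifest below is only a guess at what podB's Service might look like; the name, selector, and port values are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: podB
  namespace: foo
spec:
  type: ClusterIP
  selector:
    app: podB
  ports:
  - name: tcp-forward
    port: 24224
    targetPort: 24224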

Any ideas on what is causing this and how to resolve it?

Ever figure out your issue? I seem to have the same problem.