Traffic destined for a service endpoint (pod IP:PORT) is treated as egress traffic

I have two services which speak gRPC. When the client service sends traffic directly to a server service endpoint (pod IP:PORT), the client-side Istio proxy treats this traffic as egress traffic (PassthroughCluster). Is this simply because the client-side proxy only contains routes for service names?

[2019-11-08T00:05:20.731Z] "POST /ai.argo.viaduct.proto.v1.ViaductAPI/Query HTTP/2" 200 - "-" "-" 1241 1280 343 342 "-" "grpc-java-netty/1.21.1" "18cbd3cf-5a7e-92ee-bd69-e7639216fc77" "192.168.176.11:8085" "192.168.176.11:8085" PassthroughCluster - 192.168.176.11:8085 192.168.192.4:50630 -
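
For reference, here is a sketch (the Service name and selector are illustrative only, not my actual manifest) of the kind of plain ClusterIP Service whose DNS name, e.g. viaduct-api.overwatch-development.svc.cluster.local:8085, the sidecar does match to a known cluster, while traffic sent straight to 192.168.176.11:8085 shows up as PassthroughCluster:

---
apiVersion: v1
kind: Service
metadata:
  name: viaduct-api            # hypothetical name, for illustration only
  namespace: overwatch-development
spec:
  selector:
    app: viaduct               # assumed pod label
  ports:
  - name: grpc                 # the "grpc" port name tells Istio to treat this port as gRPC/HTTP2
    port: 8085
    targetPort: 8085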

We are facing a similar issue: Istio telemetry is reporting the destination_service_name of internal service traffic as “PassthroughCluster”, even though the other data fields have correct Kubernetes values, for example:

istio_requests_total{connection_security_policy="none",destination_app="geocode",destination_principal="unknown",destination_service="geocode.services.svc.cluster.local",destination_service_name="PassthroughCluster",destination_service_namespace="services",destination_version="origin_feature_istio-1ce1de2998d9ee5aa6855c84e4c0ec0358e3e75c",destination_workload="geocode",destination_workload_namespace="services",instance="10.1.1.149:42422",job="istio-mesh",permissive_response_code="none",permissive_response_policyid="none",reporter="destination",request_protocol="http",response_code="200",response_flags="-",source_app="server",source_principal="unknown",source_version="origin_feature_istio-1ce1de2998d9ee5aa6855c84e4c0ec0358e3e75c",source_workload="server",source_workload_namespace="services"}	

Everything indicates that this is internal traffic (e.g. destination_service_namespace, destination_workload_namespace, and destination_service are all correctly populated), except that destination_service_name is showing as PassthroughCluster. Any ideas why this might be happening?

@Sovietaced, have you figured out the issue with this?

Thanks!

We worked around this by changing our Deployment to a StatefulSet. Additionally, we manually added ServiceEntries which instruct the Istio proxies to treat traffic towards the pod identifiers as mesh-internal.

Using a StatefulSet gives us stable identifiers for the pods, which we can whitelist in a ServiceEntry with more confidence than IP addresses. So now, in our application-level logic, we cache the pod identifier instead of the pod IP address.
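
For reference, here is a minimal sketch of the headless Service backing the StatefulSet (the name and labels are assumed so that they line up with the "*.viaduct" hosts entry below); clusterIP: None is what gives each StatefulSet pod a stable DNS name of the form <pod-name>.viaduct:

---
apiVersion: v1
kind: Service
metadata:
  name: viaduct                # must match the StatefulSet's serviceName
  namespace: overwatch-development
spec:
  clusterIP: None              # headless: DNS resolves to per-pod records
  selector:
    app: viaduct               # assumed pod label
  ports:
  - name: grpc
    port: 8085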

Here is an example of the ServiceEntry we used.

---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: viaduct-service-entry
  namespace: overwatch-development
spec:
  hosts:
  - "*.viaduct"        <-- WILDCARD HERE
  location: MESH_INTERNAL 
  ports:
  - name: grpc
    number: 8085
    protocol: GRPC
  resolution: NONE
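
To make the wildcard concrete, here is a sketch of how the StatefulSet side fits together (replica count, image, and labels are placeholders rather than the actual manifest). serviceName must reference the headless Service, which is what produces the stable per-pod hostnames viaduct-0.viaduct, viaduct-1.viaduct, ... that the "*.viaduct" host above matches:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: viaduct
  namespace: overwatch-development
spec:
  serviceName: viaduct               # the headless Service the pods are named under
  replicas: 2                        # placeholder
  selector:
    matchLabels:
      app: viaduct
  template:
    metadata:
      labels:
        app: viaduct
    spec:
      containers:
      - name: viaduct
        image: example/viaduct:latest   # placeholder image
        ports:
        - name: grpc
          containerPort: 8085

With this in place the clients dial viaduct-<ordinal>.viaduct:8085 instead of the pod IP, and the sidecar matches those hostnames against the ServiceEntry rather than sending the traffic to PassthroughCluster.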

I loosely followed this blog post: https://medium.com/airy-science/making-istio-work-with-kubernetes-statefulset-and-headless-services-d5725c8efcc9