Error calling gRPC from client outside cluster

I need very precise information on this. How do I configure a gRPC service so that it is callable from outside the cluster? I have a gRPC echo service running on port 9009; below are all my configs. No matter what I set, I always get back:

grpcurl -v -plaintext -protoset ./echoservice.protoset -d '{"message":"Hello Istio GRPC!"}' 10.10.xx.xx:31380 com.test.echo.EchoService/echo

Resolved method descriptor:
rpc echo ( .com.test.echo.EchoRequest ) returns ( .com.test.echo.EchoResponse );

Request metadata to send:
(empty)

Response headers received:
(empty)

Response trailers received:
content-type: application/grpc
date: Tue, 25 Jun 2019 12:18:13 GMT
server: istio-envoy

Sent 1 request and received 0 responses

ERROR:
  Code: Unimplemented
  Message:

I have spent the better part of two working days on this, so yes, I would like some help, or at least some suggestions. With five-plus files to configure, here are a few things I was wondering:

a) What is the difference between http2 and GRPC in the Gateway protocol setting?
b) Why do the path settings for gRPC routing go under http: in the VirtualService rather than under a grpc: section? What exactly goes into prefix, and what would be the consequence of setting the prefix to /? If the prefix is wrong, does that cause a connection failure or something else? How do I know what to set as the prefix for a gRPC call in the VirtualService? (See the sketch after this list.)
c) How can I tell what caused the error?
d) Where is the visibility into gRPC traffic, so I can see exactly what is causing these errors?
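
For reference on (b): gRPC runs over HTTP/2, and each call is sent as a request whose path is /<package>.<Service>/<Method>, which is why gRPC routes are expressed under http: and why the prefix should match that path. A wrong prefix does not cause a connection failure; it causes an unmatched route, which Envoy typically answers with HTTP 404, and gRPC clients surface that as Unimplemented. A minimal sketch, using the names from the configs below:

# Sketch only: a call to com.test.echo.EchoService/echo arrives with
# :path = /com.test.echo.EchoService/echo. A prefix of / would also match,
# but would route every request on this gateway to this one service.
http:
- match:
  - uri:
      prefix: /com.test.echo.EchoService/
  route:
  - destination:
      host: echo-service
      port:
        number: 9009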

This is super critical, because I need to be able to diagnose gRPC issues as they occur on my Istio cluster.

Anyway, at the very least I would like to know where to look besides the pod logs of the Istio ingress gateway. Those do not tell me, per request, what is happening. I see logs related to gRPC, but with limited information on the cause. It looks like there might be some connection failure, but following what route? How do I debug this? Why was there a connection failure? There is very little information here. What I want is to trace a client gRPC request from the outside, through the Istio ingress gateway, to the target gRPC service. Only then will I feel OK running gRPC inside Istio.

[2019-06-25 12:10:09.821][14][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,

2019-06-16T09:21:13.785772Z info Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

[2019-06-16 09:21:14.551][14][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 14, upstream connect error or disconnect/reset before headers. reset reason: connection failure

Can someone tell me how to see why the ingress gateway failed to forward a gRPC request? How do I debug this?

WHY was there a connection failure? Was it to the pod, or from the client to the Istio ingress gateway? Was the route wrong, or the port? I don't have enough detail here.
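
For visibility here, a few commands help (a sketch only; the gateway pod name is a placeholder for whatever kubectl get pods -n istio-system shows):

# What has Pilot actually pushed to the ingress gateway?
istioctl proxy-config listeners istio-ingressgateway-xxxx.istio-system
istioctl proxy-config routes istio-ingressgateway-xxxx.istio-system

# Raise Envoy's own log level via its admin API (port 15000 in the pod),
# then tail the gateway logs for per-request detail.
kubectl -n istio-system exec istio-ingressgateway-xxxx -- \
  curl -s -X POST "localhost:15000/logging?level=debug"
kubectl -n istio-system logs -f istio-ingressgateway-xxxx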

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-service
  name: echo-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-service
  template:
    metadata:
      labels:
        app: echo-service
    spec:
      containers:
      - name: echo-service
        image: harbor.abc.com/proj1/echo/0.0.1-snapshot
      imagePullSecrets:
      - name: harborcred2

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-service
  name: echo-service
spec:
  ports:
  - name: grpc
    port: 9009
    targetPort: 9009
  selector:
    app: echo-service

1. DestinationRule (so that connections don't get closed off quickly)

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: echo-destination-rule
spec:
  host: echo-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
        connectTimeout: 30ms
        tcpKeepalive:
          time: 7200s
          interval: 75s

2. A VirtualService

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - my-grpc-gateway
  http:
  - match:
    - uri:
        prefix: /com.test.echo.EchoService  # also tried just /, /com.test.echo.EchoService/, etc.
    route:
    - destination:
        host: echo-service
        port:
          number: 9009  # have tried with and without the port number of the service
3. Gateway

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-grpc-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 9001
      name: grpc
      protocol: http2  # also tried GRPC
    hosts:
    - "*"
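
One thing worth checking with this setup (an aside, not a confirmed diagnosis): a Gateway only configures a listener inside the istio-ingressgateway pods; the port (9001 here) must also be exposed on the istio-ingressgateway Kubernetes Service. In the default install of that era, NodePort 31380 maps to the gateway's port 80 (http2), so a request to 10.10.xx.xx:31380 would never reach a listener defined on 9001.

# Check which ports (and NodePorts) the ingress gateway Service exposes.
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.port}{"\t"}{.nodePort}{"\n"}{end}'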

Hi, reviving this topic as it never received responses - and I’m seeing the exact same behavior with a similar setup.

I can also say that this setup does allow other services in my cluster to reach my (in this example) foo-service with gRPC requests. But, same as above, calls from outside the cluster all return gRPC Unimplemented.

When outside the cluster, I'm hitting a Cloud DNS record mapped to the IP of the istio-ingressgateway load balancer, on port 80. (I was planning to figure out security once the basics were working.)

Here’s what I’ve got:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: foo-service
  name: foo-service
  namespace: foo-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo-service
  template:
    metadata:
      labels:
        app: foo-service
    spec:
      containers:
      - image: {{ .Image }}
        name: foo-service
        envFrom:
          - secretRef:
              name: foo-service-env
        ports:
          - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: foo-service
  namespace: foo-service
spec:
  ports:
  - port: 80
    protocol: TCP
    name: grpc
  selector:
    app: foo-service
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: foo-service-gateway
  namespace: foo-service
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: grpc
      protocol: GRPC
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: foo-service-ingress
  namespace: foo-service
spec:
  hosts:
    - "*"
  gateways:
  - foo-service-gateway
  http:
  - route:
    - destination:
        host: foo-service.foo-service.svc.cluster.local
        port:
          number: 80
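
A quick check that helps isolate where the Unimplemented comes from (hypothetical hostname; "list" requires the server to have gRPC reflection enabled): run grpcurl with -v and look at the response headers and trailers. Empty headers plus a trailer of server: istio-envoy typically means Envoy generated the reply itself and the request never reached the backend.

grpcurl -v -plaintext grpc.example.com:80 list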

Any info or debugging help would be much appreciated!

I would use the Envoy gRPC/HTTP2 bridge for outside communication. I posted the solution for this a while back; the bridge is what Lyft themselves use for this. The other solutions are much more brittle, unless there has been a lot of progress in this area.

Bump on this topic. Istio team, could we possibly get a gRPC example?

I had a similar problem and solved it by adjusting the ServiceEntry on the caller side.

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: caller-side-service-entry
spec:
  hosts:
  - external.your.service.fqdn
  ports:
  - number: 80
    name: grpc
    protocol: grpc
  resolution: NONE
  location: MESH_EXTERNAL

The point is the caller's protocol: if you define it as TCP, calls to the outside end up going over HTTP/1.1, and you get an error.

My guess is that the automatic protocol selection on egress mistakenly downgrades HTTP/2 to HTTP/1.1.
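
For context (an aside): Istio infers the protocol the same way for in-mesh Kubernetes Services, from the port name, so the same pitfall exists there. A sketch:

# Sketch: the Service port name drives Istio's protocol selection.
apiVersion: v1
kind: Service
metadata:
  name: echo-service
spec:
  selector:
    app: echo-service
  ports:
  - name: grpc        # "grpc" (or "grpc-<suffix>") => routed as gRPC over HTTP/2
    port: 9009
    targetPort: 9009
  # - name: tcp       # "tcp" or an unnamed port => treated as opaque TCP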


@Steven_O_brien,

Could you point me to the solution you posted for this issue? I am running into something similar, and it would be helpful.