I’m using Istio 1.1 with an AWS Network Load Balancer to route external traffic into my Kubernetes cluster. This works fine for HTTP, but I can’t get it working with gRPC against a new etcd cluster. etcd itself is not part of the service mesh, because the Istio sidecar was preventing the etcd cluster from coming up.
The etcd HTTP (v2) API works:

```shell
> etcdctl --endpoints=https://etcd.sandbox.my.domain:443 --no-sync member list
8586e10556167ba0: name=etcd-cluster-lk4gh4w4hx peerURLs=http://etcd-cluster-lk4gh4w4hx.etcd-cluster.etcd.svc:2380 clientURLs=http://etcd-cluster-lk4gh4w4hx.etcd-cluster.etcd.svc:2379 isLeader=false
9cc42c8ec1604c23: name=etcd-cluster-7hnlx4c2h6 peerURLs=http://etcd-cluster-7hnlx4c2h6.etcd-cluster.etcd.svc:2380 clientURLs=http://etcd-cluster-7hnlx4c2h6.etcd-cluster.etcd.svc:2379 isLeader=true
b149976499e02161: name=etcd-cluster-qqr6gk7787 peerURLs=http://etcd-cluster-qqr6gk7787.etcd-cluster.etcd.svc:2380 clientURLs=http://etcd-cluster-qqr6gk7787.etcd-cluster.etcd.svc:2379 isLeader=false
```
The etcd gRPC (v3) API fails:

```shell
> ETCDCTL_API=3 etcdctl --endpoints=http://etcd.sandbox.my.domain:8001 member list
Error: grpc: the client connection is closing
```
Service for etcd:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: etcd
    etcd_cluster: etcd-cluster
  name: etcd-cluster-client
  namespace: etcd
spec:
  ports:
  - name: client
    port: 2379
    protocol: TCP
    targetPort: 2379
  selector:
    app: etcd
    etcd_cluster: etcd-cluster
  type: ClusterIP
```
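One thing I noticed while writing this up: Istio 1.1 infers a Service port's protocol from its name prefix (`grpc-`, `http-`, `http2-`, `tcp-`, …). A port named just `client` is treated as plain TCP/HTTP1 rather than HTTP/2, which could break gRPC between the gateway and the backend. A sketch of the rename (only the port name changes, everything else as above):

```yaml
# Sketch: rename the Service port so Istio's name-based protocol
# detection treats it as gRPC (HTTP/2) on the way to the backend.
ports:
- name: grpc-client   # "grpc-" prefix => Istio 1.x treats this port as gRPC
  port: 2379
  protocol: TCP
  targetPort: 2379
```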
VirtualService for etcd:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: etcd-cluster-client
  namespace: istio-system
spec:
  gateways:
  - mesh
  - private-ingressgateway-cert-merge
  hosts:
  - etcd.sandbox.my.domain
  http:
  - route:
    - destination:
        host: etcd-cluster-client.etcd.svc.cluster.local
        port:
          number: 2379
```
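Since this VirtualService is bound to the whole gateway, its single `http` route applies to the 80/443 servers as well as 8001 (Istio routes gRPC through `http` routes). If I wanted to scope the route to the plaintext gRPC server only, matching on the gateway port should work (sketch, untested):

```yaml
http:
- match:
  - port: 8001          # only match traffic arriving on the plaintext GRPC server
  route:
  - destination:
      host: etcd-cluster-client.etcd.svc.cluster.local
      port:
        number: 2379
```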
Istio Gateway:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: private-ingressgateway-cert-merge
  namespace: istio-system
spec:
  selector:
    istio: private-ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: http-wildcard-redirect
      number: 80
      protocol: HTTP
    tls:
      httpsRedirect: true
  - hosts:
    - '*'
    port:
      name: grpc-wildcard
      number: 8001
      protocol: GRPC
  - hosts:
    - my.domain
    - '*.my.domain'
    - sandbox.my.domain
    - '*.sandbox.my.domain'
    port:
      name: https-combined-certificate
      number: 443
      protocol: HTTPS
    tls:
      httpsRedirect: false
      mode: SIMPLE
      privateKey: /etc/istio/private-ingressgateway-certs/combined-certificate.tls.key
      serverCertificate: /etc/istio/private-ingressgateway-certs/combined-certificate.tls.crt
```
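One more thing I should double-check: the Gateway resource only configures Envoy's listeners; port 8001 must also be exposed on the ingress gateway's Kubernetes Service so the NLB has a target to forward to. A sketch of the fragment I'd expect in that Service (the Service name here is assumed from the `istio: private-ingressgateway` selector above):

```yaml
# Fragment of the assumed istio-system/private-ingressgateway Service:
# without an entry like this, the NLB cannot reach the GRPC server on 8001.
ports:
- name: grpc-wildcard
  port: 8001
  targetPort: 8001
  protocol: TCP
```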
I’m seeing some warnings in the Gateway logs:
```
[2019-08-13 22:25:47.737][20][info][upstream] [external/envoy/source/server/lds_api.cc:74] lds: add/update listener '0.0.0.0_8001'
[2019-08-13 22:45:38.723][51][warning][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:110] original_dst_load_balancer: No downstream connection or no original_dst.
[2019-08-13 22:45:40.422][51][warning][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:110] original_dst_load_balancer: No downstream connection or no original_dst.
[2019-08-13 22:45:41.594][51][warning][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:110] original_dst_load_balancer: No downstream connection or no original_dst.
[2019-08-13 22:47:29.061][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
[2019-08-13 23:19:00.125][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 13,
```
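As far as I can tell, the `gRPC config stream closed: 13` lines are about Envoy's xDS stream to Pilot, not my etcd traffic; 13 is the standard gRPC `INTERNAL` status code:

```python
# A few standard gRPC status codes (from the gRPC spec), for reading the log above.
GRPC_STATUS = {
    0: "OK",
    13: "INTERNAL",     # seen in "gRPC config stream closed: 13"
    14: "UNAVAILABLE",
}
print(GRPC_STATUS[13])  # prints "INTERNAL"
```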
Any idea why GRPC connections don’t make it through? Any help or advice would be appreciated.