RDS sync state 'NOT SENT' when using TCP

Hello everyone,

First of all, my test setup is based on Istio 1.8.1 with the Istio operator running in an EKS cluster.
I have the following issue:

I configured a custom ingress gateway in my IstioOperator CR, more precisely in spec.components.ingressGateways. After applying it, a new load balancer (an NLB in this case) was provisioned in AWS with the status port (15021) plus my service's custom port (10333), as expected.
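For context, the gateway component was defined roughly along these lines in the IstioOperator CR (this is just an illustrative sketch, not a copy of my manifest; the annotations and port names here are assumptions):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: custom-ingress
  namespace: istio-system
spec:
  components:
    ingressGateways:
    - name: istio-nlb-hello-world-internal-ingressgateway
      enabled: true
      namespace: hello-world
      label:
        app: istio-nlb-hello-world-internal-ingressgateway
      k8s:
        serviceAnnotations:
          # assumption: internal NLB requested via the usual AWS annotations
          service.beta.kubernetes.io/aws-load-balancer-type: nlb
          service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service:
          type: LoadBalancer
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: tcp-hello-world
            port: 10333
            targetPort: 10333
```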
Then, I created a Gateway and a VirtualService like this:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-nlb-hello-world-internal-ingressgateway
  namespace: hello-world
spec:
  selector:
    app: istio-nlb-hello-world-internal-ingressgateway
  servers:
  - hosts:
    - '*'
    port:
      name: tcp
      number: 10333
      protocol: TCP
```

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: istio-nlb-hello-world-internal
  namespace: hello-world
spec:
  gateways:
  - hello-world/istio-nlb-hello-world-internal-ingressgateway
  hosts:
  - '*'
  tcp:
  - match:
    - port: 10333
    route:
    - destination:
        host: hello-world.hello-world.svc.cluster.local
        port:
          number: 8080
```

In the end, everything seems to be fine and I can call the service through nlbhost:10333 without problems. My issue is that, when executing istioctl proxy-status, NOT SENT is returned for RDS:

```
istio-nlb-hello-world-internal-ingressgateway-64988bc695-5p67t.hello-world SYNCED SYNCED SYNCED NOT SENT istiod-1-8-1-5bdfc9d958-k9fk8 1.8.1
istio-nlb-hello-world-internal-ingressgateway-64988bc695-ncgzx.hello-world SYNCED SYNCED SYNCED NOT SENT istiod-1-8-1-5bdfc9d958-k9fk8 1.8.1
```

Is this the expected behaviour when using an ingress gateway for TCP-only purposes, or is something else missing here? RDS showing NOT SENT typically indicates that no (HTTP?) routing configuration is available for that proxy (which is a gateway in this case).
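For what it's worth, one way to double-check this should be to dump the route configuration of one of the gateway pods and see whether any RDS routes exist at all (the pod name below is just taken from the proxy-status output above):

```
# list the RDS routes Envoy has received for this gateway pod
istioctl proxy-config routes istio-nlb-hello-world-internal-ingressgateway-64988bc695-5p67t.hello-world
```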

Thanks
