Should I expose a custom port on the istio-ingressgateway manually?

In a scenario with a Deployment plus a Service, where both the pod and the service listen on port 8005, what is the correct way to expose that port on the istio-ingressgateway, supposing the ingress port should also be 8005? Should it happen automatically, or must it be done manually?

apiVersion: v1
kind: Service
metadata:
  namespace: custom
  name: hello-python-service-8005
spec:
  selector:
    app: hello-python-8005
  ports:
  - name: "http-8005"
    port: 8005

---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: custom
  name: hello-python-8005
spec:
  selector:
    matchLabels:
      app: hello-python-8005
  replicas: 1
  template:
    metadata:
      namespace: custom
      labels:
        app: hello-python-8005
    spec:
      containers:
      - name: hello-python-8005
        command: ["tail"]
        args: ["-f", "/dev/null"]
        image: python:3.7
        ports:
        - containerPort: 8005
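
Note that the container here just runs "tail -f /dev/null" as a placeholder, so nothing inside the pod actually listens on 8005; for a real end-to-end test the command can be swapped for Python's built-in HTTP server, e.g.:

        command: ["python"]
        args: ["-m", "http.server", "8005"]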

and the Gateway + VirtualService:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: custom-gateway-8005
  namespace: custom
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 8005
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hellopython-8005
  namespace: custom
spec:
  hosts:
  - "*"
  gateways:
  - custom/custom-gateway-8005
  http:
  - match:
    - uri:
        exact: /hellopython
    route:
    - destination:
        host: hello-python-service-8005.custom.svc.cluster.local
        port:
          number: 8005

The ingress pod seems to be configured, but the istio-ingressgateway Service doesn't seem to be automatically configured to listen as well.

kube@00000000-0000-0000:~/poc$ /opt/istio/istio-1.4.3/bin/istioctl proxy-config listener $INGRESS_POD -n istio-system
ADDRESS PORT  TYPE
0.0.0.0 80    HTTP
0.0.0.0 8005  HTTP
0.0.0.0 15090 HTTP

kube@00000000-0000-0000:~/poc$ kubectl get svc -n istio-system | grep "gateway"
istio-ingressgateway LoadBalancer 10.109.1.41 172.16.188.250 15020:32434/TCP,80:30178/TCP,443:32279/TCP,15029:31349/TCP,15030:32069/TCP,15031:32237/TCP,15032:30105/TCP,15443:31660/TCP

By manually running "kubectl edit svc istio-ingressgateway -n istio-system" I can expose the port, but I would like to know if there are alternatives. Installing Istio with this port already open is not an option, since this scenario assumes extending an Istio installation that is already in place.
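
For reference, the entry I add under spec.ports of the istio-ingressgateway Service when editing it looks roughly like this (the port name is my own choice):

  - name: http-8005
    port: 8005
    targetPort: 8005
    protocol: TCP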

What would be the recommended approach to expose port 8005?

Interesting, I'm running Istio 1.5.1 and hit the same issue. Did you manage to make it work "automatically"?

It does not propagate automatically from your Gateways, but you can patch the Service from the CLI. Here's how to open a new port (on kubectl 1.18+ use --dry-run=client; older versions use --dry-run=true):

kubectl -n istio-system patch svc istio-ingressgateway --type=json -p='[{"op": "add","path": "/spec/ports/-","value": {"name":"preview","nodePort":31474,"port":3474,"protocol":"TCP","targetPort":3474}}]' --dry-run=client -o yaml | kubectl apply -f -
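
Alternatively, if you manage the installation through the IstioOperator API (Istio 1.5+), the extra port can be declared in an overlay instead of patched by hand. A sketch along these lines; note that depending on the version the ports list may replace the defaults rather than merge with them, in which case the default ports need to be repeated:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        service:
          ports:
          - name: http-8005
            port: 8005
            targetPort: 8005

Apply it with "istioctl install -f overlay.yaml" ("istioctl manifest apply -f" on 1.5), then confirm the port shows up:

kubectl -n istio-system get svc istio-ingressgateway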

I am currently using version 1.16.2 and facing the same problem. After adding a new microservice, do you need to manually update the istio-ingressgateway Service to expose the custom TCP port so it can receive TCP traffic from outside the Kubernetes cluster? Is there another way, or what method are you using now? Can you show me?
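
One more option on recent Istio releases is the Kubernetes Gateway API: there, Istio provisions the gateway Deployment and Service itself and derives the Service ports from the Gateway's listeners, so no manual patching is needed. A minimal sketch, assuming the Gateway API CRDs are installed in the cluster:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: custom-gateway-8005
  namespace: custom
spec:
  gatewayClassName: istio
  listeners:
  - name: http-8005
    port: 8005
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All

Istio then creates a Service for this Gateway with port 8005 already exposed; routing is defined with an HTTPRoute rather than a VirtualService.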