Ports 80/443 can't be used when the istio-proxy is installed with hostNetwork

  • Istio version: 1.12.1
  • Installed with the istioctl operator
    When the istio-proxy (ingress gateway) is installed with hostNetwork, the containerPort can't be selected - the hostPort patches below have to stay commented out. The overlay I'm using (full context sketched after the snippet):
          overlays:
            - apiVersion: apps/v1
              kind: Deployment
              name: istio-ingressgateway
              patches:
                - path: spec.template.spec.hostNetwork
                  value: true
                - path: spec.template.spec.dnsPolicy
                  value: ClusterFirstWithHostNet
#                - path: spec.template.spec.containers.[name:istio-proxy].ports.[containerPort:8080].hostPort
#                  value: 80
#                - path: spec.template.spec.containers.[name:istio-proxy].ports.[containerPort:8443].hostPort
#                  value: 443
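
For context, the overlay above sits under the ingress gateway component of the IstioOperator spec - roughly like this (a trimmed sketch, with everything else in the spec left at its defaults):

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      components:
        ingressGateways:
          - name: istio-ingressgateway
            enabled: true
            k8s:
              overlays:
                - apiVersion: apps/v1
                  kind: Deployment
                  name: istio-ingressgateway
                  patches:
                    - path: spec.template.spec.hostNetwork
                      value: true
                    - path: spec.template.spec.dnsPolicy
                      value: ClusterFirstWithHostNet
                    # the hostPort patches shown above fail validation once
                    # hostNetwork is true, so they are commented out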

This is a Kubernetes thing, not an Istio thing - I'm running into the same error:

✘ Ingress gateways encountered an error: failed to update resource with server-side apply for obj DaemonSet/istio-system/istio-ingressgateway: DaemonSet.apps “istio-ingressgateway” is invalid: [spec.template.spec.containers[0].ports[1].containerPort: Invalid value: 8080: must match hostPort when hostNetwork is true, spec.template.spec.containers[0].ports[2].containerPort: Invalid value: 8443: must match hostPort when hostNetwork is true]
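
For reference, the constraint the API server is enforcing here is just that, on a hostNetwork pod, a declared port has to use the same number on both sides; a valid entry would have to look something like this:

    # in the pod spec of a hostNetwork: true pod, every declared port must
    # satisfy containerPort == hostPort
    ports:
      - name: https
        containerPort: 443
        hostPort: 443
        protocol: TCP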

It seems to me the solution would be to get the proxyv2 container listening on 80/443 instead of 8080/8443, which would also require giving it the requisite capabilities (binding below port 1024 normally needs NET_BIND_SERVICE or root) - but I can't figure out for the life of me how to ask it to bind to another port.
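
The closest I've got to a plan is the sketch below. It's untested and rests on two assumptions I haven't verified: that the gateway's Envoy listener follows the Service targetPort, and that the net.ipv4.ip_unprivileged_port_start sysctl is allowed by the kubelet on your nodes - if either is wrong, this won't fly.

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      components:
        ingressGateways:
          - name: istio-ingressgateway
            enabled: true
            k8s:
              service:
                ports:
                  - name: http2
                    port: 80
                    targetPort: 80      # was 8080
                  - name: https
                    port: 443
                    targetPort: 443     # was 8443
              overlays:
                - apiVersion: apps/v1
                  kind: Deployment
                  name: istio-ingressgateway
                  patches:
                    - path: spec.template.spec.hostNetwork
                      value: true
                    - path: spec.template.spec.dnsPolicy
                      value: ClusterFirstWithHostNet
                    # let the non-root proxy bind ports below 1024
                    - path: spec.template.spec.securityContext.sysctls
                      value:
                        - name: net.ipv4.ip_unprivileged_port_start
                          value: "0"
                    # keep the declared containerPorts (which hostNetwork also
                    # defaults the hostPorts to) in line with what Envoy would now bind
                    - path: spec.template.spec.containers.[name:istio-proxy].ports.[containerPort:8080].containerPort
                      value: 80
                    - path: spec.template.spec.containers.[name:istio-proxy].ports.[containerPort:8443].containerPort
                      value: 443

Treat that as a direction to poke at rather than a recipe - I haven't been able to confirm the targetPort behaviour on 1.12.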

Did you make any progress on this? Or does anyone else out there have any ideas?

Background if needed -

Istio is running on a cluster where I'd like all the cluster nodes to be ingress points into the ingressgateway ... I know I can set up haproxy or something of the sort and just forward to the service - but then you lose client IPs. The only way around that is an HTTP header (X-Forwarded-For), which would require haproxy to run in HTTPS mode, which would require it to do the TLS termination, yada yada yada - not going to work.

So my attempted solution was to convert the ingressgateway Deployment into a DaemonSet to push it out to each of the cluster nodes and give it hostNetwork ... working great, except for the fact that it is listening on 8080 and 8443 :-/ ... anything I do to rectify that at the host level (haproxy, etc.) will bring me right back to square one with my client IPs being lost!

Any help would be GREATLY appreciated. I’ve been fighting with this cluster for about three weeks and I’m about ready to cry. This is the closest I’ve got to being done with it for sure, but I suddenly seem so far away, again.

Edit: OK, problem solved - for me at least. I turned the ingressgateway back into a single-pod Deployment and port-forwarded (iptables DNAT, not kubectl port-forward) 443 on the relevant external-facing nodes of my cluster to the service IP, which - as I'm running Weave (I think that's the reason, anyway; I'm really not sure) - I can actually reach from the box without any trickery or magic! The services are now externally accessible, and the nginx-test pod logs show the correct client IP address. Everything will break if the service IP ever changes, but I can write a script to babysit the iptables rules if that becomes a problem.
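
For anyone wanting to copy this, the forwarding boils down to something like the following - the service IP (10.96.0.50 here) and the interface name are placeholders, so look up your own values first:

    # find the ingressgateway ClusterIP
    kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.spec.clusterIP}'

    # on each external-facing node, DNAT incoming 443 to that ClusterIP
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.96.0.50:443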