Need to assign a static IP to the ingress load balancer using istioctl manifest apply

Hello,

I am trying to assign a static internal IP to the 'istio-ingressgateway' LoadBalancer service.

Customvalues.yaml

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 200m
            memory: 2Gi
          limits:
            cpu: 400m
            memory: 4Gi
        serviceAnnotations:
          cloud.google.com/load-balancer-type: "internal"
        service:
          type: LoadBalancer
          loadBalancerIP: 10.143.67.8
          ports:
          - port: 80
            name: http2
            nodePort: 31380
            protocol: TCP
            targetPort: 80

Now, after running istioctl --kubeconfig=$CONFIG_PATH manifest apply -f Customvalues.yaml, the IP of the istio-ingressgateway service is not changing.
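For reference, the output below is from this command (assuming the gateway is installed in the istio-system namespace):

kubectl --kubeconfig=$CONFIG_PATH get svc istio-ingressgateway -n istio-system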

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                    AGE
istio-ingressgateway   LoadBalancer   10.38.17.176   75.67.737.25   15020:31904/TCP,80:31380

I need 10.143.67.8 to show up under EXTERNAL-IP instead.

Note: the IPs are dummies. Any leads will be appreciated. Istio 1.5.0 was installed using istioctl manifest.

With a subset of your config:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: empty
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/load-balancer-type: "internal"
        service:
          type: LoadBalancer
          loadBalancerIP: 1.2.3.4

I seem to get the correct manifest:

istioctl manifest generate -f ~/temp/gw_1.5.yaml | grep 1.2.3.4
  loadBalancerIP: 1.2.3.4

Looking at the Service spec, the external IP field is generated correctly. Could you confirm that you at least get the correct config using manifest generate?

Yes, I was able to get the output of the generate command.
istioctl manifest generate -f demo-nw.yaml | grep 1.2.3.4
proto: tag has too few fields: "-"
loadBalancerIP: 1.2.3.4

So, this is the YAML (modified demo profile values) I am using.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi

    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          cloud.google.com/load-balancer-type: "internal"
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
        service:
          type: LoadBalancer
          loadBalancerIP: 1.2.3.4

    policy:
      enabled: false
      k8s:
        resources:
          requests:
            cpu: 10m
            memory: 100Mi

    telemetry:
      k8s:
        resources:
          requests:
            cpu: 50m
            memory: 100Mi

    pilot:
      k8s:
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: GODEBUG
          value: gctrace=1
        - name: PILOT_TRACE_SAMPLING
          value: "100"
        - name: CONFIG_NAMESPACE
          value: istio-config
        resources:
          requests:
            cpu: 10m
            memory: 100Mi

  addonComponents:
    kiali:
      enabled: true
    grafana:
      enabled: true
    tracing:
      enabled: true

  values:
    global:
      disablePolicyChecks: false
      proxy:
        accessLogFile: /dev/stdout
        resources:
          requests:
            cpu: 10m
            memory: 40Mi

    pilot:
      autoscaleEnabled: false

    mixer:
      adapters:
        useAdapterCRDs: false
        kubernetesenv:
          enabled: true
        prometheus:
          enabled: true
          metricsExpiryDuration: 10m
        stackdriver:
          enabled: false
        stdio:
          enabled: true
          outputAsJson: false
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false

    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
        ports:
        ## You can add custom gateway ports in user values overrides, but it must include those ports since helm replaces.
        # Note that AWS ELB will by default perform health checks on the first port
        # on this list. Setting this to the health check port will ensure that health
        # checks always work. https://github.com/istio/istio/issues/12503
        - port: 443
          nodePort: 31390
          name: https
          # protocol: TCP
          targetPort: 443
        - port: 80
          nodePort: 31380
          name: http2
          # protocol: TCP
          targetPort: 80
        - port: 31400
          name: tcp
          nodePort: 31400
          # protocol: TCP
          targetPort: 31400
        - port: 15011
          nodePort: 31167
          name: tcp-pilot-grpc-tls
          # protocol: TCP
          targetPort: 15011
        - port: 8060
          targetPort: 8060
          name: tcp-citadel-grpc-tls
          nodePort: 31795
          # protocol: TCP
        secretVolumes:
        - name: ingressgateway-certs
          secretName: istio-ingressgateway-certs
          mountPath: /etc/istio/ingressgateway-certs
        - name: ingressgateway-ca-certs
          secretName: istio-ingressgateway-ca-certs
          mountPath: /etc/istio/ingressgateway-ca-certs

    kiali:
      createDemoSecret: true

Once these values are in place, I am trying to install Istio itself with the istioctl command below, using these values.
istioctl manifest apply --set installPackagePath=/home/istio-1.5.0/install/kubernetes/operator/charts --set profile=/home/istio-1.5.0/install/kubernetes/operator/profiles/demo-custom.yaml
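For what it's worth, passing the file directly should be equivalent here, using the same -f form as earlier in this thread:

istioctl manifest apply -f /home/istio-1.5.0/install/kubernetes/operator/profiles/demo-custom.yaml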

End result: it should ideally assign the internal IP 1.2.3.4 to the load balancer, but that is not happening. It is still showing pending.
kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   x.x.x.x      <pending>     443:31390/TCP,80:31380/TCP,31400:31400/TCP,15011:31167/TCP,8060:31795/TCP   100m

1) Please let me know if the YAML placement is not correct for the loadBalancerIP field.
2) If 1) is fine, what is the correct procedure to assign an internal IP to the load balancer (istio-ingressgateway)?

The generated output looks correct, so it looks like a platform issue. What provider are you using? What does kubectl describe say?
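For example, assuming the default install namespace:

kubectl describe svc istio-ingressgateway -n istio-system

The Events section at the end typically shows why the cloud provider could not assign the requested address.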

Using Istio 1.5.0, and the platform is Google GKE.

Try adding the internal IP to the global values section (full placement sketched after this snippet):

gateways:
  istio-ingressgateway:
    type: LoadBalancer
    serviceAnnotations:
      cloud.google.com/load-balancer-type: "internal"
    loadBalancerIP: 1.2.3.4
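To be explicit about the placement, a minimal sketch of the full resource (the values are carried over from the snippet above; the block sits under spec.values):

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    gateways:
      istio-ingressgateway:
        type: LoadBalancer
        serviceAnnotations:
          cloud.google.com/load-balancer-type: "internal"
        loadBalancerIP: 1.2.3.4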

Any update on this topic?

This is what worked for me (following the official istioctl installation manual):
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        service:
          type: LoadBalancer
          loadBalancerIP: 1.2.3.4
          ports:
          - name: status-port
            port: 15021
            targetPort: 15021
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          - name: tls
            port: 15443
            targetPort: 15443
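I applied it with istioctl; the filename here is just a placeholder (on Istio 1.6+ the command is istioctl install, and on 1.5 the equivalent is istioctl manifest apply):

istioctl install -f internal-lb.yaml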

From this link, I just added the IP and the type of a private load balancer that was created by hand beforehand. It is not working:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
    - namespace: istio-system
      name: ilb-gateway
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 200m
        serviceAnnotations:
          cloud.google.com/load-balancer-type: "internal"
        service:
          type: LoadBalancer
          loadBalancerIP: 10.***.184.24
          ports:
          - port: 8060
            targetPort: 8060
            name: tcp-citadel-grpc-tls
          - port: 5353
            name: tcp-dns
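For anyone else debugging this on GKE: as far as I understand it, the requested internal address must be a free IP in the cluster's subnet (same region), and the Service events report the exact reason when assignment fails. A generic check, assuming the generated Service keeps the gateway name:

kubectl get events -n istio-system --field-selector involvedObject.name=ilb-gateway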

I also hit the same problem.
Istio version: 1.6.2
Kubernetes version: v1.18.3

I tried the following two ways, but both failed. However, I was able to change the IP with a workaround:

INGRESSGATEWAY=istio-ingressgateway
kubectl patch svc $INGRESSGATEWAY --namespace istio-system --patch '{"spec": { "loadBalancerIP": "x.x.x.x" }}'

These are the two configs that did not work:

1)
ingressGateways:
- name: istio-ingressgateway
  enabled: true
  k8s:
    service:
      type: LoadBalancer
      loadBalancerIP: x.x.x.x

2)
  ingressGateways:
  - enabled: true
    k8s:
      overlays:
        - kind: Service
          name: istio-ingressgateway
          patches:
            - path: spec.loadBalancerIP
              value: "x.x.x.x"