Warnings during Istio upgrade 1.4 --> 1.5

Istio 1.4.6 is installed via istioctl. Upgrading to 1.5.1 gives a warning asking me to use --force.

What is the meaning of the last 2 error lines?
What does the word “IOPS” mean? It's confusing; I couldn't find it in the Istio glossary docs.
Can I use the --force option?

$ istioctl upgrade -f /tmp/c.yaml --dry-run

Control Plane - citadel pod - istio-citadel-6688f56667-6zpk8 - version: 1.4.6
Control Plane - galley pod - istio-galley-58d84bddb6-r6fm8 - version: 1.4.6
Control Plane - ingressgateway pod - istio-ingressgateway-6b99bb54cc-82c9k - version: 1.4.6
Control Plane - pilot pod - istio-pilot-5458c4b8f8-4dstj - version: 1.4.6
Control Plane - policy pod - istio-policy-774f9c85cb-m7t4v - version: 1.4.6
Control Plane - sidecar-injector pod - istio-sidecar-injector-77d8c95c5c-wn6dq - version: 1.4.6
Control Plane - telemetry pod - istio-telemetry-8f4c4b7c5-2rmz9 - version: 1.4.6

Upgrade version check passed: 1.4.6 -> 1.5.1.

2020-04-09T17:09:39.653185Z     info    Error: failed to generate IOPS from file: [/tmp/c.yaml] for the current version: 1.4.6, error: chart minor version 1.4.6 doesn't match istioctl version 1.5.0, use --force to override

Error: failed to generate IOPS from file: [/tmp/c.yaml] for the current version: 1.4.6, error: chart minor version 1.4.6 doesn't match istioctl version 1.5.0, use --force to override
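The failing check compares the minor version of the charts recorded for the current install (1.4.x) against the minor version baked into the istioctl binary (1.5.x); the patch component is ignored. A rough sketch of that comparison in shell (the variable names and version strings are illustrative, not taken from the istioctl source):

```shell
# Illustrative sketch of the minor-version match check behind the error
# message; the real check lives inside istioctl, these variables are made up.
chart_version="1.4.6"      # version recorded for the current install
istioctl_version="1.5.0"   # version baked into the istioctl binary
chart_minor="${chart_version%.*}"       # strips the patch level: 1.4
istioctl_minor="${istioctl_version%.*}" # strips the patch level: 1.5
if [ "$chart_minor" != "$istioctl_minor" ]; then
  echo "chart minor version $chart_version doesn't match istioctl version $istioctl_version, use --force to override"
fi
```

Passing --force simply skips this comparison and proceeds with the generated manifests, which is why it is risky: nothing else validates that the old config still makes sense for the new charts.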

You are right; there might be two issues here:

  1. the “IOPS” word should be replaced because it is an implementation detail (shorthand for the internal IstioOperatorSpec type, which is why it isn't in the glossary);
  2. the upgrade function can skip the version match check for the current version.

I will create a PR to fix them.

Tracked in this issue: https://github.com/istio/istio/issues/22907


@deepak_deore @taohe I have the same issue. I upgraded Istio from 1.4.6 to 1.5.2
and I am still getting this error… can you help with how to resolve this?

2020-04-30T09:08:58.058671Z     info    Error: failed to generate IOPS from file: [demo.yaml] for the current version: 1.4.6, error: chart minor version 1.4.6 doesn't match istioctl version 1.5.0, use --force to override

Error: failed to generate IOPS from file: [demo.yaml] for the current version: 1.4.6, error: chart minor version 1.4.6 doesn't match istioctl version 1.5.0, use --force to override

I ran into other issues by using the --force option: the old pods (galley, citadel, pilot) were still running, whereas with the new version there should only be an istiod pod.

Since it was my non-prod env, I decided to wipe out 1.4 and install 1.5 fresh, so I don't have a working upgrade solution for this :frowning:

Thanks @deepak_deore for the reply.
I also tried the --force option. During the installation it shows:

  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
  Waiting for resources to become ready...
2020-04-30T10:43:04.760871Z     error   installer       Failed to wait for resource: resources not ready after 10m0s: timed out waiting for the condition
Deployment/istio-system/istiod
- Applying manifest for component IngressGateways...
- Applying manifest for component EgressGateways...
- Applying manifest for component AddonComponents...
- Pruning objects for disabled component Policy...
- Pruning objects for disabled component Telemetry...
- Pruning objects for disabled component Galley...
- Pruning objects for disabled component Citadel...
✔ Finished pruning objects for disabled component Policy.
✔ Finished pruning objects for disabled component Galley.
✔ Finished pruning objects for disabled component Telemetry.
✔ Finished pruning objects for disabled component Citadel.
✔ Finished applying manifest for component EgressGateways.
✔ Finished applying manifest for component IngressGateways.
✔ Finished applying manifest for component AddonComponents.


✔ Installation complete

Upgrade submitted. Please use `istioctl version` to check the current versions.
To upgrade the Istio data plane, you will need to re-inject it.
If you’re using automatic sidecar injection, you can upgrade the sidecar by doing a rolling update for all the pods:
    kubectl rollout restart deployment --namespace <namespace with auto injection>
If you’re using manual injection, you can upgrade the sidecar by executing:
    kubectl apply -f <(istioctl kube-inject -f <original application deployment yaml>)

And after that, when I checked the version:

istioctl version
client version: 1.5.2
control plane version: 1.4.6

And when I run `kubectl get pod -n istio-system` it shows me this:

NAME                                    READY   STATUS              RESTARTS   AGE
grafana-5cc7f86765-zckj4                1/1     Running             0          18m
istio-egressgateway-5444f8f8c-g9ts8     0/1     ContainerCreating   0          18m
istio-egressgateway-6b6f694c97-248gt    1/1     Running             0          6d5h
istio-ingressgateway-6969bf64f7-vnpmj   0/1     ContainerCreating   0          18m
istio-ingressgateway-8c9c9c9f5-rb4ds    1/1     Running             0          6d5h
istio-tracing-8584b4d7f9-9n5x9          1/1     Running             0          18m
istiod-7f6497fff8-bqc7c                 0/1     ContainerCreating   0          28m
kiali-696bb665-g6wpg                    1/1     Running             0          18m
prometheus-5fb44dd795-qv967             0/2     ContainerCreating   0          18m
prometheus-66c5887c86-vpsj8             1/1     Running             0          6d5h

I am stuck here.
So is there still no other way or any solution for this?
Any help would be appreciated.
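When pods sit in ContainerCreating like this, `kubectl describe pod <name> -n istio-system` usually shows the reason in its Events section (image pull, volume mount, or scheduling problems). A quick way to list only the stuck pods is to filter the `kubectl get pod` output on the STATUS column; the sketch below is fed a captured sample so it is self-contained (in live use, pipe kubectl straight into awk):

```shell
# Filter non-Running pods from `kubectl get pod` output; a captured sample
# stands in for the live command so the sketch runs anywhere.
pods='istiod-7f6497fff8-bqc7c                 0/1     ContainerCreating   0          28m
grafana-5cc7f86765-zckj4                1/1     Running             0          18m
prometheus-5fb44dd795-qv967             0/2     ContainerCreating   0          18m'
# $3 is the STATUS column; prints the name and status of each stuck pod.
echo "$pods" | awk '$3 != "Running" {print $1, $3}'
```

On the sample above this prints the two ContainerCreating pods; each of those names can then be fed to `kubectl describe pod` to find the actual blocker.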

I tried the upgrade again and it worked this time; all the pods were able to start except the ingress gateway. I had to patch the istio-autogenerated-k8s-ingress Gateway resource in istio-system.

I ran the command that patches the istio-autogenerated-k8s-ingress gateway from https://istio.io/docs/tasks/traffic-management/ingress/ingress-certmgr/#configuring-dns-name-and-gateway

I also tried again and now it works fine, but the egressgateway version is still showing 1.4.6.
I used `istioctl upgrade --force -f <demo-profile.yaml>`

~/istio-1.5.2$ istioctl version
client version: 1.5.2
egressgateway version: 1.4.6
ingressgateway version: 1.5.2
pilot version: 1.5.2
data plane version: 1.4.6 (1 proxies), 1.5.2 (8 proxies)

Any idea why the egress gateway would not be updated to the latest proxy, 1.5.2?

The egress gateway is disabled by default in 1.5; you need to explicitly enable it. That may be the reason why it isn't getting upgraded. This is my override file, if that helps you:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  addonComponents:
    grafana:
      enabled: false
    istiocoredns:
      enabled: false
    kiali:
      enabled: true
      k8s:
        replicaCount: 2
    prometheus:
      enabled: false
    tracing:
      enabled: false

  components:
    base:
      enabled: true
    citadel:
      enabled: false
    cni:
      enabled: false
    galley:
      enabled: false
    nodeAgent:
      enabled: false
    policy:
      enabled: false
    sidecarInjector:
      enabled: false
    telemetry:
      enabled: false
    pilot:
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2

    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
        service:
          type: ClusterIP
          ports:
          - name: http2
            port: 80
            targetPort: 80
          - name: https
            port: 443
    egressGateways:
    - name: istio-egressgateway
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
        service:
          type: ClusterIP
          ports:
          - name: http2
            port: 80

  values:
    # "global.jwtPolicy" is only for minikube
    #global:
      #jwtPolicy: first-party-jwt          
    kiali:
      prometheusAddr: http://prometheus.monitoring:9090
    global:
      proxy:
        accessLogFile: /dev/stdout

Hi,
I used the configuration file which I got by running `istioctl profile dump demo > demo.yaml`. AFAIK, in the demo profile the egressgateway is enabled. I've shared the YAML file I used here:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  addonComponents:
    grafana:
      enabled: true
      k8s:
        replicaCount: 1
    istiocoredns:
      enabled: false
    kiali:
      enabled: true
      k8s:
        replicaCount: 1
    prometheus:
      enabled: true
      k8s:
        replicaCount: 1
    tracing:
      enabled: true
  components:
    base:
      enabled: true
    citadel:
      enabled: false
      k8s:
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
    cni:
      enabled: false
    egressGateways:
    - enabled: true
      k8s:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
      name: istio-egressgateway
    galley:
      enabled: false
      k8s:
        replicaCount: 1
        resources:
          requests:
            cpu: 100m
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
    ingressGateways:
    - enabled: true
      k8s:
        resources:
          requests:
            cpu: 10m
            memory: 40Mi
      name: istio-ingressgateway
    nodeAgent:
      enabled: false
    pilot:
      enabled: true
      k8s:
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: GODEBUG
          value: gctrace=1
        - name: PILOT_TRACE_SAMPLING
          value: "100"
        - name: CONFIG_NAMESPACE
          value: istio-config
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 5
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
    policy:
      enabled: false
      k8s:
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        hpaSpec:
          maxReplicas: 5
          metrics:
          - resource:
              name: cpu
              targetAverageUtilization: 80
            type: Resource
          minReplicas: 1
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: istio-policy
        resources:
          requests:
            cpu: 10m
            memory: 100Mi
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
    sidecarInjector:
      enabled: false
      k8s:
        replicaCount: 1
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
    telemetry:
      enabled: false
      k8s:
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: GOMAXPROCS
          value: "6"
        hpaSpec:
          maxReplicas: 5
          metrics:
          - resource:
              name: cpu
              targetAverageUtilization: 80
            type: Resource
          minReplicas: 1
          scaleTargetRef:
            apiVersion: apps/v1
            kind: Deployment
            name: istio-telemetry
        replicaCount: 1
        resources:
          limits:
            cpu: 4800m
            memory: 4G
          requests:
            cpu: 50m
            memory: 100Mi
        strategy:
          rollingUpdate:
            maxSurge: 100%
            maxUnavailable: 25%
  hub: docker.io/istio
  profile: demo
  tag: 1.5.2
  values:
    clusterResources: true
    galley:
      enableAnalysis: false
      image: galley
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
        name: istio-egressgateway
        secretVolumes:
        - mountPath: /etc/istio/egressgateway-certs
          name: egressgateway-certs
          secretName: istio-egressgateway-certs
        - mountPath: /etc/istio/egressgateway-ca-certs
          name: egressgateway-ca-certs
          secretName: istio-egressgateway-ca-certs
        type: ClusterIP
      istio-ingressgateway:
        applicationPorts: ""
        autoscaleEnabled: false
        debug: info
        domain: ""
        meshExpansionPorts:
        - name: tcp-pilot-grpc-tls
          port: 15011
          targetPort: 15011
        - name: tcp-istiod
          port: 15012
          targetPort: 15012
        - name: tcp-citadel-grpc-tls
          port: 8060
          targetPort: 8060
        - name: tcp-dns-tls
          port: 853
          targetPort: 853
        name: istio-ingressgateway
        ports:
        - name: status-port
          port: 15020
          targetPort: 15020
        - name: http2
          port: 80
          targetPort: 80
        - name: https
          port: 443
        - name: kiali
          port: 15029
          targetPort: 15029
        - name: prometheus
          port: 15030
          targetPort: 15030
        - name: grafana
          port: 15031
          targetPort: 15031
        - name: tracing
          port: 15032
          targetPort: 15032
        - name: tcp
          port: 31400
          targetPort: 31400
        - name: tls
          port: 15443
          targetPort: 15443
        sds:
          enabled: false
          image: node-agent-k8s
          resources:
            limits:
              cpu: 2000m
              memory: 1024Mi
            requests:
              cpu: 100m
              memory: 128Mi
        secretVolumes:
        - mountPath: /etc/istio/ingressgateway-certs
          name: ingressgateway-certs
          secretName: istio-ingressgateway-certs
        - mountPath: /etc/istio/ingressgateway-ca-certs
          name: ingressgateway-ca-certs
          secretName: istio-ingressgateway-ca-certs
        type: LoadBalancer
        zvpn:
          enabled: false
          suffix: global
    global:
      arch:
        amd64: 2
        ppc64le: 2
        s390x: 2
      certificates: []
      configValidation: true
      controlPlaneSecurityEnabled: true
      defaultNodeSelector: {}
      defaultPodDisruptionBudget:
        enabled: true
      defaultResources:
        requests:
          cpu: 10m
      disablePolicyChecks: false
      enableHelmTest: false
      enableTracing: true
      imagePullPolicy: IfNotPresent
      imagePullSecrets: []
      istioNamespace: istio-system
      istiod:
        enabled: true
      jwtPolicy: third-party-jwt
      k8sIngress:
        enableHttps: false
        enabled: false
        gatewayName: ingressgateway
      localityLbSetting:
        enabled: true
      logAsJson: false
      logging:
        level: default:info
      meshExpansion:
        enabled: false
        useILB: false
      meshNetworks: {}
      mountMtlsCerts: false
      mtls:
        auto: true
        enabled: false
      multiCluster:
        clusterName: ""
        enabled: false
      network: ""
      omitSidecarInjectorConfigMap: false
      oneNamespace: false
      operatorManageWebhooks: false
      outboundTrafficPolicy:
        mode: ALLOW_ANY
      pilotCertProvider: istiod
      policyCheckFailOpen: false
      priorityClassName: ""
      proxy:
        accessLogEncoding: TEXT
        accessLogFile: /dev/stdout
        accessLogFormat: ""
        autoInject: enabled
        clusterDomain: cluster.local
        componentLogLevel: misc:error
        concurrency: 2
        dnsRefreshRate: 300s
        enableCoreDump: false
        envoyAccessLogService:
          enabled: false
        envoyMetricsService:
          enabled: false
          tcpKeepalive:
            interval: 10s
            probes: 3
            time: 10s
          tlsSettings:
            mode: DISABLE
            subjectAltNames: []
        envoyStatsd:
          enabled: false
        excludeIPRanges: ""
        excludeInboundPorts: ""
        excludeOutboundPorts: ""
        image: proxyv2
        includeIPRanges: '*'
        includeInboundPorts: '*'
        kubevirtInterfaces: ""
        logLevel: warning
        privileged: false
        protocolDetectionTimeout: 100ms
        readinessFailureThreshold: 30
        readinessInitialDelaySeconds: 1
        readinessPeriodSeconds: 2
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 10m
            memory: 40Mi
        statusPort: 15020
        tracer: zipkin
      proxy_init:
        image: proxyv2
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 10m
            memory: 10Mi
      sds:
        enabled: false
        token:
          aud: istio-ca
        udsPath: ""
      sts:
        servicePort: 0
      tracer:
        datadog:
          address: $(HOST_IP):8126
        lightstep:
          accessToken: ""
          address: ""
          cacertPath: ""
          secure: true
        stackdriver:
          debug: false
          maxNumberOfAnnotations: 200
          maxNumberOfAttributes: 200
          maxNumberOfMessageEvents: 200
        zipkin:
          address: ""
      trustDomain: cluster.local
      useMCP: false
    grafana:
      accessMode: ReadWriteMany
      contextPath: /grafana
      dashboardProviders:
        dashboardproviders.yaml:
          apiVersion: 1
          providers:
          - disableDeletion: false
            folder: istio
            name: istio
            options:
              path: /var/lib/grafana/dashboards/istio
            orgId: 1
            type: file
      datasources:
        datasources.yaml:
          apiVersion: 1
      env: {}
      envSecrets: {}
      image:
        repository: grafana/grafana
        tag: 6.5.2
      ingress:
        enabled: false
        hosts:
        - grafana.local
      nodeSelector: {}
      persist: false
      podAntiAffinityLabelSelector: []
      podAntiAffinityTermLabelSelector: []
      security:
        enabled: false
        passphraseKey: passphrase
        secretName: grafana
        usernameKey: username
      service:
        annotations: {}
        externalPort: 3000
        name: http
        type: ClusterIP
      storageClassName: ""
      tolerations: []
    istiocoredns:
      coreDNSImage: coredns/coredns
      coreDNSPluginImage: istio/coredns-plugin:0.2-istio-1.1
      coreDNSTag: 1.6.2
    kiali:
      contextPath: /kiali
      createDemoSecret: true
      dashboard:
        grafanaInClusterURL: http://grafana:3000
        jaegerInClusterURL: http://tracing/jaeger
        passphraseKey: passphrase
        secretName: kiali
        usernameKey: username
        viewOnlyMode: false
      hub: quay.io/kiali
      ingress:
        enabled: false
        hosts:
        - kiali.local
      nodeSelector: {}
      podAntiAffinityLabelSelector: []
      podAntiAffinityTermLabelSelector: []
      security:
        cert_file: /kiali-cert/cert-chain.pem
        enabled: false
        private_key_file: /kiali-cert/key.pem
      tag: v1.15
    mixer:
      adapters:
        kubernetesenv:
          enabled: true
        prometheus:
          enabled: true
          metricsExpiryDuration: 10m
        stackdriver:
          auth:
            apiKey: ""
            appCredentials: false
            serviceAccountPath: ""
          enabled: false
          tracer:
            enabled: false
            sampleProbability: 1
        stdio:
          enabled: true
          outputAsJson: false
        useAdapterCRDs: false
      policy:
        adapters:
          kubernetesenv:
            enabled: true
          useAdapterCRDs: false
        autoscaleEnabled: false
        image: mixer
        sessionAffinityEnabled: false
      telemetry:
        autoscaleEnabled: false
        env:
          GOMAXPROCS: "6"
        image: mixer
        loadshedding:
          latencyThreshold: 100ms
          mode: enforce
        nodeSelector: {}
        podAntiAffinityLabelSelector: []
        podAntiAffinityTermLabelSelector: []
        replicaCount: 1
        reportBatchMaxEntries: 100
        reportBatchMaxTime: 1s
        sessionAffinityEnabled: false
        tolerations: []
    nodeagent:
      image: node-agent-k8s
    pilot:
      appNamespaces: []
      autoscaleEnabled: false
      autoscaleMax: 5
      autoscaleMin: 1
      configMap: true
      configNamespace: istio-config
      cpu:
        targetAverageUtilization: 80
      enableProtocolSniffingForInbound: false
      enableProtocolSniffingForOutbound: true
      env: {}
      image: pilot
      ingress:
        ingressClass: istio
        ingressControllerMode: STRICT
        ingressService: istio-ingressgateway
      keepaliveMaxServerConnectionAge: 30m
      meshNetworks:
        networks: {}
      nodeSelector: {}
      podAntiAffinityLabelSelector: []
      podAntiAffinityTermLabelSelector: []
      policy:
        enabled: false
      replicaCount: 1
      tolerations: []
      traceSampling: 1
    prometheus:
      contextPath: /prometheus
      hub: docker.io/prom
      ingress:
        enabled: false
        hosts:
        - prometheus.local
      nodeSelector: {}
      podAntiAffinityLabelSelector: []
      podAntiAffinityTermLabelSelector: []
      provisionPrometheusCert: true
      retention: 6h
      scrapeInterval: 15s
      security:
        enabled: true
      tag: v2.15.1
      tolerations: []
    security:
      dnsCerts:
        istio-pilot-service-account.istio-control: istio-pilot.istio-control
      enableNamespacesByDefault: true
      image: citadel
      selfSigned: true
    sidecarInjectorWebhook:
      enableNamespacesByDefault: false
      image: sidecar_injector
      injectLabel: istio-injection
      objectSelector:
        autoInject: true
        enabled: false
      rewriteAppHTTPProbe: true
      selfSigned: false
    telemetry:
      enabled: true
      v1:
        enabled: false
      v2:
        enabled: true
        prometheus:
          enabled: true
        stackdriver:
          configOverride: {}
          enabled: false
          logging: false
          monitoring: false
          topology: false
    tracing:
      ingress:
        enabled: false
      jaeger:
        accessMode: ReadWriteMany
        hub: docker.io/jaegertracing
        memory:
          max_traces: 50000
        persist: false
        spanStorageType: badger
        storageClassName: ""
        tag: "1.16"
      nodeSelector: {}
      opencensus:
        exporters:
          stackdriver:
            enable_tracing: true
        hub: docker.io/omnition
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 200m
            memory: 400Mi
        tag: 0.1.9
      podAntiAffinityLabelSelector: []
      podAntiAffinityTermLabelSelector: []
      provider: jaeger
      service:
        annotations: {}
        externalPort: 9411
        name: http-query
        type: ClusterIP
      zipkin:
        hub: docker.io/openzipkin
        javaOptsHeap: 700
        maxSpans: 500000
        node:
          cpus: 2
        probeStartupDelay: 200
        queryPort: 9411
        resources:
          limits:
            cpu: 300m
            memory: 900Mi
          requests:
            cpu: 150m
            memory: 900Mi
        tag: 2.14.2
    version: ""

Not sure, but I think there may be some resource problem; checking the `kubectl describe pod` events for the stuck pods should show it.