How to downgrade and remove istiod and prometheus from a previous upgrade?

Hello,

I was trying to upgrade my current Istio installation from 1.4.3 to 1.6.5, following the steps described in the v1.6 docs.

I got an error while executing istioctl install --set revision=canary, see below:

▶ ./istioctl install --set revision=canary
✔ Istio core installed
✔ Istiod installed
✘ Addons encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/prometheus
✘ Ingress gateways encountered an error: failed to wait for resource: resources not ready after 5m0s: timed out waiting for the condition
Deployment/istio-system/istio-ingressgateway
- Pruning removed resources
Error: failed to apply manifests: errors occurred during operation

Two new pods were added: istiod-canary and prometheus.

So I tried to revert by re-running istioctl apply -f FILE.yaml and also kubectl rollout restart deployment.

All istio pods were terminated and re-created, including istiod-canary and prometheus pods.

How can I revert to version 1.4.3, and how can I remove the istiod-canary and prometheus resources (pods, svc, rs, deployments)?

Thanks!
Laurentius Purba

As far as I know, istioctl doesn’t support deleting a canary revision. So I resolved it by rolling back the CRs, ConfigMaps, and Deployments that were newly applied when the canary was created.
To find out which objects were applied, run ‘istioctl manifest generate --set revision=canary’; it shows exactly what was applied.
Then you can revert everything manually as you examine the manifests.
Btw, I don’t recommend this approach for a production environment. If you’re working on production, you’re better off leaving it as it is.
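The manual revert above can be sketched roughly like this, assuming istioctl 1.6.x is on your PATH, kubectl points at the affected cluster, and the output filename is just illustrative. Review the generated manifest before deleting anything:

```shell
# Regenerate exactly what the canary install applied (filename is arbitrary).
istioctl manifest generate --set revision=canary > canary-manifest.yaml

# Inspect canary-manifest.yaml first, then delete those objects.
# --ignore-not-found skips anything that was never created or is already gone.
kubectl delete -f canary-manifest.yaml --ignore-not-found
```

As noted above, on a production cluster it is safer to leave the canary in place than to hand-delete shared resources.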

@Renee Thanks for the answer.

1 Like

@laurentiuspurba I think I can provide an answer to this. I had messed up my dev cluster, which was running Istio 1.4.9. In my case the earlier installation was done with Helm. To restore things to order, I first found the resources installed by the Istio upgrade:

kubectl api-resources --verbs=list --namespaced -o name \
    | xargs -n 1 kubectl get --show-kind --ignore-not-found -n istio-system -l operator.istio.io/version=1.5.4

You can filter by another label as well. You can delete these resources one by one or with a script. Once you have removed the new resources created during the upgrade, rerun the old Helm installation of 1.4.x; it should recreate any missing resources.
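The “make a script” step can be sketched as one pipeline, reusing the same operator.istio.io/version label from the listing above (1.5.4 is the version from that example; substitute your own). Double-check the listing before piping it into delete. Note that `xargs -r` (skip the command when the input is empty) is a GNU extension:

```shell
# List every namespaced resource in istio-system carrying the upgrade's
# version label, then delete each one. -o name yields kind/name pairs,
# which kubectl delete accepts directly.
kubectl api-resources --verbs=list --namespaced -o name \
    | xargs -n 1 kubectl get -o name --ignore-not-found \
        -n istio-system -l operator.istio.io/version=1.5.4 \
    | xargs -r kubectl delete -n istio-system
```

Dropping the final `xargs ... delete` stage gives you a dry run: the pipeline then only prints what would be removed.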

@liptan Thanks for your response.

@liptan I did a comparison between the two Istio versions (1.5.8 and 1.4.10) that I had in my system.

Currently I have Istio running on version 1.5.8, but for some reason the istio-ingressgateway pod is in a NOT READY state.

Can I just delete these 1.4.10 resources and hope that brings my istio-ingressgateway pod back to a READY state?

I’ve tried different things, and I need more suggestions on this. Thanks.

=========================================================

@liptan I deleted all the objects carrying the 1.4.10 version label, but still no luck so far. My istio-ingressgateway is still in a NOT READY state.

NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-75db45b458-rm22l   0/1     Running   0          6m38s
istio-ingressgateway-75db45b458-xwwv5   0/1     Running   0          124m
istio-tracing-7cf5f46848-wtvqq          1/1     Running   0          4h26m
istiod-678b7fb6dc-jt6rn                 1/1     Running   0          22m
kiali-b4b5b4fb8-gcv2c                   1/1     Running   0          12h

I was able to upgrade from 1.4.3 -> 1.4.10 -> 1.5.8 -> 1.6.4. I could not have done it without help from the Istio community (Prune and Vito).

Check this link - Upgrading istio 1.4.3 to 1.6.0