Move from Helm to istioctl

Good day to you.

I am working on an upgrade from Istio 1.4.0 to the most recent 1.6.x version (currently 1.6.3).

We deployed Istio with Helm to create the manifests for our Istio configuration (including cert-manager, the telemetry addons, and so on) and applied them with kubectl apply -f.

Istio is now simplified in that a single istiod pod runs instead of separate pods for Citadel, Mixer, Policy, etc.

My question is: how do I migrate the Helm configuration into an istioctl-compatible configuration, so that my service mesh and services run the same way as with Istio 1.4.0 (incl. mTLS, DestinationRules, ingress/egress, Gateways, VirtualServices, ServiceEntries, …)?

Does anyone have a hint on where to start with this challenge?

Best,
Jan

Our migration path is:

  1. Upgrade to the same version in 1.4.x with istioctl
  2. Upgrade to Istio 1.5.x with istioctl
  3. Upgrade to Istio 1.6.x with istioctl

The problem we encountered was upgrading to 1.5.x directly from the 1.4.x Helm-based installation: it did not delete the 1.4.x components.
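The staged path above can be sketched roughly like this. The binary names are placeholders, not real file names; in practice you download each Istio release and run the istioctl shipped with it:

```shell
# Placeholder binary names -- use the istioctl bundled with each release.
istioctl-1.4 manifest apply   # re-install the running 1.4.x version under istioctl management
istioctl-1.5 upgrade          # in-place upgrade of the control plane to 1.5.x
istioctl-1.6 upgrade          # in-place upgrade of the control plane to 1.6.x
```

istioctl upgrade refuses to skip minor versions, which is why the path goes through 1.5.x rather than jumping straight to 1.6.x.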

Hi Jan, there's a migration tool for Helm: istioctl manifest migrate. istioctl also supports the Helm API directly through the spec.values path, so you could use that too, but it's recommended to migrate, since the Helm API is being deprecated over time.
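For reference, a minimal sketch of that migration. The file names here are hypothetical; the input is the values file you previously passed to Helm:

```shell
# Translate legacy Helm values into an IstioOperator resource.
# old-values.yaml is a hypothetical path to your existing Helm values file.
istioctl manifest migrate old-values.yaml > operator.yaml

# Review operator.yaml by hand, then apply it (Istio 1.6 syntax):
istioctl manifest apply -f operator.yaml
```

Reviewing the generated IstioOperator before applying it is worthwhile, since not every legacy value has a one-to-one translation.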

I will share my experience.

I had a cluster that I needed to migrate (it was on Istio 1.5, set up with Helm).
I set up a new cluster and used it as a test run to create a configuration that roughly matched what I expected to have on the old cluster.

Some configuration updates later (change the reserved IP, update the gateway name, … plus secrets if you need them (root CA, …)), I was ready to upgrade my old cluster.

First, back up the old config! (Well, it means 'rollback is possible', so it is like accepting that you might fail… if you don't like that, don't back up :smiley:)
I deleted/purged the istio and init-istio Helm releases,
I deleted the istio-system namespace (just to be sure everything from the old version was cleaned up),
I ran k get crd | grep istio | cut -d" " -f 1 | xargs kubectl delete crd to remove all Istio CRDs,
I applied the new config, waited a little bit…
… and it failed… (istiod set up correctly, gateway pod OK, … but the Gateway object was in error)
A Galley validation webhook error. OK, Galley doesn't exist anymore:
k delete ValidatingWebhookConfiguration istio-galley

==> Success!

It took me 2-3 minutes to realize the issue was the validating webhook, so I had around 5 minutes of downtime. For my use case that didn't matter.
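Putting the steps above together, a rough sketch of the whole procedure. The release names and namespace follow the description above, and new-config.yaml is a hypothetical name for the istioctl-generated configuration:

```shell
# 1. Back up, then remove the old Helm releases (Helm v2 syntax)
helm delete --purge istio init-istio

# 2. Remove the old control-plane namespace
kubectl delete namespace istio-system

# 3. Remove all Istio CRDs
kubectl get crd | grep istio | cut -d" " -f 1 | xargs kubectl delete crd

# 4. Apply the new istioctl-based config (new-config.yaml is hypothetical)
istioctl manifest apply -f new-config.yaml

# 5. If Gateway objects fail validation, delete the leftover Galley webhook
kubectl delete ValidatingWebhookConfiguration istio-galley
```

Note this is a delete-and-reinstall, not an in-place upgrade, so some downtime is expected; step 5 is only needed because the old Galley webhook survives the namespace deletion (it is cluster-scoped).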


Hi,

I totally missed that (and forgot to look into the command reference, too). Sorry, I will try that.
But first one question:
We used helm template to generate an istio-init.yaml and an istio.yaml. Can I migrate them, too?

Thanks in advance!
Jan

Actually, I'm looking to do the same thing here too. What I'm concerned about is being able to do this in a way that won't cause any downtime. Is there a method for doing so? Would you just go through each component step by step, install it with the operator, then disable it in the Helm release?

Or is there a way for the operator to effectively take ownership of the Helm resources and perform a rolling upgrade by itself?

In Mitch's thread "Istio's Helm Support in 2020", it is said: "For users who would like to continue installing and managing Istio via Helm directly, we will support calling the Helm charts which underlie the operator". That statement seems to be directly at odds with your statement: "the Helm API is being deprecated over time". Could you please clarify?

What is being deprecated is the current Helm values.yaml API, in favor of IstioOperator + MeshConfig. We'll continue to support the Helm charts with these APIs.
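To illustrate the distinction, a hedged sketch of an IstioOperator resource: legacy Helm values remain reachable under spec.values, while new installs should prefer the first-class fields. The specific values shown are illustrative, not a complete configuration:

```shell
# Illustrative IstioOperator spec; the values are examples, not a full config.
cat <<'EOF' | istioctl manifest apply -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  # Preferred: first-class IstioOperator API
  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true
  # Deprecated path: legacy Helm values.yaml API
  values:
    global:
      proxy:
        logLevel: warning
EOF
```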
