Istio's Helm Support in 2020

I second your concerns. It is not clear what is happening with so many options… It would be nice to get a clear picture.


Another bummer:

  • Istio 1.7: Upstream installation methods or the new samples deployment are the recommended installation methods. Installation by istioctl is deprecated.

Can anyone explain what this means? We are using the istioctl install method as of now; will this be deprecated as well?

I believe that is for the third-party add-ons. They meant that installing add-ons like Grafana, Jaeger, and Kiali with istioctl will be deprecated (and removed in 1.8).
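For reference, the 1.7 release ships those add-ons as plain manifests under samples/addons in the release tarball, so the replacement flow looks roughly like this (paths relative to the extracted release directory):

```sh
# Run from the root of the extracted Istio 1.7 release
kubectl apply -f samples/addons/prometheus.yaml
kubectl apply -f samples/addons/grafana.yaml
kubectl apply -f samples/addons/jaeger.yaml
kubectl apply -f samples/addons/kiali.yaml
```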

Yes, istioctl and the Istio Helm charts will be used for Istio components only.

In 1.7 we’ll also support Helm 3 directly for installing Istio components with a revision (but not in-place upgrades of older versions). This was added in 1.6, but we’re lacking the automated testing infra.
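A rough sketch of what that Helm 3 flow looks like, using the chart layout from the release tarballs of that era (the release names and revision value here are just examples):

```sh
kubectl create namespace istio-system

# Cluster-wide resources (CRDs etc.)
helm install istio-base manifests/charts/base -n istio-system

# The istiod control plane, installed under a revision so it can
# run alongside an existing control plane
helm install istiod-canary manifests/charts/istio-control/istio-discovery \
  -n istio-system --set revision=canary
```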

It’s good to hear this feedback from users; it helps us make the docs and product better. Sorry for the confusion in the docs, we need to make these clearer. The operator is now suitable for production use. You can think of the operator as an istioctl install command being run from a pod in the cluster - the two share most of the same code under the hood. By design, the IstioOperator CRD has always been intended to be the definitive input to either the operator or istioctl, and you can easily move between the two models of operation, using the same CR as the input.
The IstioOperator CRD replaces values.yaml (strictly speaking, IstioOperator.MeshConfig replaces the dynamic config part and IstioOperator.Component.K8s replaces the Kubernetes settings).
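To make that concrete, here is a minimal sketch of such a CR (the name and values are illustrative); the same file can be fed to istioctl install -f or picked up by the in-cluster operator:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane   # hypothetical name
spec:
  profile: default
  # Replaces the dynamic-config part of the old values.yaml
  meshConfig:
    accessLogFile: /dev/stdout
  components:
    pilot:
      k8s:
        # Replaces the Kubernetes settings from the old values.yaml
        resources:
          requests:
            cpu: 500m
            memory: 2048Mi
```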
istioctl creates a copy of the installation CR in the cluster, which it calls installed-state. This is likely to be used for some sanity checks going forward, and it can be used if you ever lose track of what is installed in the cluster.
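So if you ever need to check, something like this should show the CR that was last applied (assuming the default istio-system install namespace):

```sh
kubectl -n istio-system get istiooperator installed-state -o yaml
```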
manifest generate can be used to install Istio along with kubectl apply, but while the YAML is the same, the sequencing of resource application differs from istioctl install and some checks are not performed. manifest generate is intended more as an auditing tool, for when you want to inspect or track the actual resources being applied to the cluster. If you want to generate and apply the YAML yourself without istioctl or the operator, you’re better off using Helm v3, because that will be tested more going forward.
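For example, the audit-style flow looks roughly like this (the profile and output filename are arbitrary):

```sh
# Render the full manifest so it can be inspected, diffed, or tracked in git
istioctl manifest generate --set profile=default > istio-generated.yaml

# Applying it yourself skips istioctl install's sequencing and checks
kubectl apply -f istio-generated.yaml
```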
Lastly, istioctl install is just the new name for istioctl manifest apply, and the latter is being deprecated. We had some strong feedback that istioctl manifest apply was confusing and wordy, so it was changed, although manifest apply is more representative of what the command actually does.
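In other words, these two invocations are the same operation under the old and new names (the profile value is just an example):

```sh
# New name (1.6+)
istioctl install --set profile=demo

# Deprecated spelling of the same command
istioctl manifest apply --set profile=demo
```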

Here are the relevant PRs updating the docs if you’re interested:



Thanks for the explanation @ostromart. Just to be clear, do you mean the standalone Istio operator is stable for production use from 1.6 onwards?

I have asked the question here: “Is standalone operator install method stable to use production?”

Hi @ostromart, great to hear about the beta status of the operator. I was wondering how I could do what I mentioned here: https://github.com/istio/istio/issues/24450#issuecomment-640413467

Right, it’s beta from 1.6 onwards. https://istio.io/latest/about/feature-stages/

You may want to update the status of the Istio operator here: https://istio.io/latest/about/feature-stages/#core

Thanks. https://github.com/istio/istio.io/pull/7573

Cool, I wasn’t aware that the whole site is in git… Next time I will do the PR myself 🙂

Any update on @amitsehgal’s question? Looking for a Helm repository/registry for the operator chart.

One thing confuses me: if Helm installations are still supported, why did the docs remove the “Customizable Install with Helm” section in 1.6 instead of adapting it to the new supported manifests? Thanks!

Right, this thread has helped give me a bit more context and understanding of the current state of affairs w.r.t. Helm and the operator 🙂

Istio upgrades have regularly been a pain for us; we are currently on 1.3, installed with Helm. To avoid the pain/faff of upgrading to 1.4, then 1.5, then 1.6… would it be reasonable to use the multiple control planes feature mentioned by @ostromart and switch over one namespace at a time from 1.3 to 1.6, or is that just asking for trouble? 😆

Some of the issues we had last time, going from 1.2 to 1.3, were things like Adapters and Rules being missing in some clusters but not all. I am not quite sure how that happened, and these configurations are poorly documented (I’m still not 100% sure what the kubernetesenv Adapter does, but it was missing on one of our clusters and we had to create it manually, among other objects, to make the Prometheus metrics work).

I’ve been able to go from 1.4 to 1.6 in my testing with some effort; 1.3 to 1.6 is probably not that different. The main obstacle is CRD compatibility. Assuming you migrate your config to be 1.6-compatible, you’ll have to manually remove or disable the 1.3 validation webhook before 1.6 will work (the best way is to edit the galley Role so it can’t recreate the webhook, and then delete the webhook).
Also, I’d recommend not installing any 1.6 gateway at first, until 1.6 is running with your new config. Keep the gateways on the old version, migrate parts of the data plane over, and upgrade the gateways as a last step.
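A sketch of how that migration might look with revisions (the webhook name matches a default 1.3 install; the revision and namespace names are hypothetical):

```sh
# 1.3's galley will recreate the webhook unless its RBAC is restricted first,
# so edit the galley (Cluster)Role to drop webhook permissions, then:
kubectl delete validatingwebhookconfiguration istio-galley

# Install 1.6 as a second, revisioned control plane next to 1.3
istioctl install --set revision=1-6

# Move the data plane over one namespace at a time:
# swap the old injection label for the new revision label, then restart
kubectl label namespace my-app istio-injection- istio.io/rev=1-6 --overwrite
kubectl rollout restart deployment -n my-app
```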


The Helm v3 charts do not have automated testing. The project tries not to document features that lack automated testing, especially those in the critical path (such as installation).

Hope to have this resolved in 1.8.

Cheers,
-steve

Martin,

The project should consider offering a skip-level upgrade document, with copious warnings, for those on 1.3/1.4, explaining step by step how to skip levels using control plane revisions. Explaining it on discuss.istio.io is painful, as people are likely to break their production systems.

cheers,
-steve

Aaron,

I would encourage you to watch this YouTube video where I demo control plane revisions. It is really awesome technology. In short, you can have 2 or 10 or 20 control planes running, and then attach your data plane - pod by pod, or namespace by namespace - to different control planes of your choosing.

I think we need a little more work around the ingress gateway (IPs may change…); overall, though, the procedure is awesome.
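If you want to see the moving parts for yourself, something like this shows the side-by-side control planes and which one each sidecar is talking to (assuming revisioned installs in istio-system):

```sh
# Each revisioned control plane runs as its own istiod deployment
kubectl get pods -n istio-system -l app=istiod -L istio.io/rev

# proxy-status shows which istiod instance each proxy is synced to
istioctl proxy-status
```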


We already have that, Steve: https://istio.io/latest/docs/setup/upgrade/#upgrading-from-1-4

Nice video.
I’m wondering:

  • how configuration is consumed by the second deployment, and what happens if there are changes in the supported configuration syntax?
  • what is going to happen with the ingress gateway?

I’m currently running 1.5.4 (deployed with Helm, so I have no istiod pod) and I’m looking at how to migrate to 1.7 with no downtime.

Cheers