Dear All,
In our infrastructure, we use git repos as the single source of truth and tools like Terraform, Puppet, and Ansible to apply our config to the infrastructure (both in-house and cloud).
We are planning to deploy some Kubernetes clusters and are looking for a similar solution to install and manage them and their components. Apart from the tools above, we have looked into ArgoCD/Flux to deploy resources on top of Kubernetes.
Among the many install options (and probably due to our lack of experience in this ecosystem), we are a bit lost about how this could work and what the best/supported way of doing this with Istio would be.
Is there a purely declarative way of installing and managing Istio (including updates) that could be used by tools like ArgoCD?
I saw that we could generate the Kubernetes manifest with

    istioctl manifest generate -f customizations.yaml > /tmp/generated.yaml

and this could probably be included in a git repo that is applied by ArgoCD, but I am afraid that config updates and Istio upgrades would not work this way.
Thanks in advance!
Hi,
Both the istioctl and operator models are declarative - all state is captured in the IstioOperator CR. The difference is that istioctl is invoked by the user (usually with the CR coming from a file, which can be in git) and doesn’t return until the desired state is reached, while the operator runs a controller in the cluster that does the same against the in-cluster CR.
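For illustration, a minimal IstioOperator CR kept in git could look roughly like this (name and values are just examples):

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: istio-system
      name: istio-controlplane   # example name
    spec:
      profile: default           # built-in profile to start from
      tag: 1.6.1                 # pins the Istio version declaratively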
If you don’t want to use istioctl or the operator to actually apply the resources to your cluster, you have a couple of options. One is to use manifest generate to generate the output manifest and use some tool to apply that. Note that there are some drawbacks to doing it that way, since istioctl and the operator have some custom logic for how the resources are applied, which would be missing with manifest generate.
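As a sketch, that flow would be something like (file names are just examples):

    istioctl manifest generate -f customizations.yaml > istio-generated.yaml
    kubectl apply -f istio-generated.yaml   # or commit it and let ArgoCD/Flux apply it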
Another way is to use the underlying Helm charts directly with helm template and apply the output. This is also supported, but hasn’t had the same level of testing so far as the istioctl path.
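Roughly, assuming Helm 3 and the chart layout of the Istio release tarball, that path would look something like:

    helm template istio-base manifests/charts/base -n istio-system | kubectl apply -f -
    helm template istiod manifests/charts/istio-control/istio-discovery -n istio-system | kubectl apply -f -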
Note that using istioctl/operator to apply doesn’t preclude you from manifest generate-ing and storing the manifest in git for audit history. That’s what manifest generate is primarily for: auditing and inspecting the output manifest.
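For example, the rendered output per environment could be committed next to the CRs (file names and repo layout here are hypothetical):

    istioctl manifest generate -f iop-prod.yaml > audit/istio-prod-generated.yaml
    git add audit/istio-prod-generated.yaml
    git commit -m "audit: rendered Istio manifest for prod"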
Hi @ostromart,
Thanks a lot for the answer. It is getting clearer for me now.
This “operator installation” was confusing to me. In the end, istioctl also deploys an IstioOperator CR (called installed-state). Now I understand that if someone says “operator install”, they mean the one with a controller inside Kubernetes (which is basically the istioctl command line tool running inside the cluster) and not the CR coming from istioctl install.
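For example, that CR shows up with:

    kubectl -n istio-system get istiooperators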
I am totally fine with the operator method (IstioOperator CR managed by the controller) and with putting the YAML of the short IstioOperator CR into git, applied by a GitOps tool.
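For us that would look roughly like the following Argo CD Application (repo URL and paths are made up):

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: istio-controlplane
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://git.example.com/infra/istio.git   # hypothetical repo
        targetRevision: HEAD
        path: istio        # directory containing the IstioOperator CR
      destination:
        server: https://kubernetes.default.svc
        namespace: istio-system
      syncPolicy:
        automated: {}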
What I do not see with this solution (and I cannot find it in the docs) is how an update of Istio works with this controller-based approach. I imagine that first I would need to update the controller, and then it would update the Istio control plane. Do we have docs for this procedure?
The standalone operator installation method seems not to be recommended for production use yet. Is production support something that is planned to come in the upcoming release for sure? (I would like to rely on a solution which will still be supported in a couple of months/in a year.)
Thanks in advance!
B
So, I am probably getting there.
I think the best that can be done to integrate Istio installation/updates into GitOps tools (like Flux and ArgoCD) is to generate the Istio operator manifest with

    helm template manifests/charts/istio-operator/ --set hub=docker.io/istio --set tag=1.6.1 --set operatorNamespace=istio-operator --set istioNamespace=istio-system > /tmp/helm_operator_install.yaml

put it into git, and then let the GitOps tool apply it.
Updates would be done the same way: when a new release comes out, you generate the manifest with the same command (with the new version, obviously) and commit it into your git repo.
This seems to be something which will be supported (and tested) for the long term. Am I correct?
One question is still open: when will the so-called standalone operator-based installation be recommended for production use?
Yes, that’s exactly right. Version X of the operator installs version X of Istio, because internally it has that version of the charts built in. So installing a new version of the operator will upgrade Istio in place.
In practice the operator manifest is not likely to change much except for the tag, because it’s so simple, but best practice is to use the release manifests for the operator for each release.
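For example, between two patch releases the rendered operator manifest would typically differ only in the controller image, something like:

    -        image: docker.io/istio/operator:1.6.1
    +        image: docker.io/istio/operator:1.6.2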
This method is beta now, has been used in production for some time, and will be supported for the long term.
Thanks for the explanation @ostromart.
Given that “Version X of the operator installs version X of Istio” and “installing a new version of the operator will upgrade Istio in place” - is it possible to use the new canary upgrade method while using the operator?
Would you need to also “canary” the new operator version? If so, what is the method for doing so?
We are using istioctl + Jenkins to install Istio and are now planning to move this to the standalone operator + Flux.
It would be useful to understand how the canary method works with the operator.
It doesn’t fully, yet. It’s possible to use the operator to upgrade between patch versions by installing two IOPs with two revisions, but this is not going to work between minor versions, since in that case two controllers are needed with two different versions of the charts. We don’t have a way to scope IOPs to controller instances yet, but we are working on this for 1.7.
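To sketch the patch-version case, you would install a second IOP alongside the existing one, differing in revision and tag, roughly (names and values are examples):

    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      name: istio-1-6-2        # second IOP next to the existing one
      namespace: istio-system
    spec:
      revision: 1-6-2          # revision label used for canarying
      tag: 1.6.2
      profile: default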
Is it possible to add custom profiles to istioctl, like dev.yaml and prod.yaml? I tried putting them in /manifests/profiles/dev.yaml, but istioctl can’t find them. It would be nice, in a GitOps approach, to be able to trigger different jobs in the same repo with the Istio manifests.