While I strongly recommend installing via istioctl (or, if you are willing to use alpha software, the Istio operator: https://istio.io/about/feature-stages/#core), there are ways to install Helm charts via Terraform. However, doing so comes with a good deal of operational risk, especially if your chosen deployment method uses the Helm Tiller. If you really do need to install via Helm in Terraform, please try using `helm template` instead to dump the chart as rendered YAML, and then `kubectl apply` in a script to install the generated templates. It is much safer than using normal Helm 2 embedded in Terraform. (Note: this may not apply to Helm 3, but it was released only five days ago and I have not evaluated it at all, nor do I know if it is even compatible with Helm 2 charts.)
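As a rough sketch of that pattern (the chart path, release name, and namespace below are placeholders, and the flags assume Helm 2's `helm template` syntax):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Render a chart to plain YAML with `helm template`, then hand the
# result to `kubectl apply`, so no Tiller is involved at install time.
# Helm 2 syntax: helm template --name RELEASE --namespace NS CHART_DIR
render_and_apply() {
  local chart_dir="$1" release="$2" namespace="$3" out_file="$4"
  helm template --name "$release" --namespace "$namespace" "$chart_dir" > "$out_file"
  kubectl apply -n "$namespace" -f "$out_file"
}

# Example invocation (paths and names are illustrative):
# render_and_apply ./install/kubernetes/helm/istio istio istio-system rendered.yaml
```

You can then run a script like this from Terraform via a `null_resource` with a `local-exec` provisioner, or keep it outside Terraform entirely.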
To give some more background, if you haven't delved very deep into Helm: the Helm Tiller is a pod that runs inside your cluster with very elevated permissions and keeps track of the state of the charts you have deployed using ConfigMaps or Secrets it creates in its namespace. This internal state does not always accurately reflect the actual state of the resources in the cluster, and it also tends not to play nicely with the external state expected by Terraform. If you use one of the more deeply integrated tools (e.g. the Terraform Helm provider), you can run into situations where Terraform or Helm decides that the best way to get your chart into the target state is to delete it and recreate it, which, for something that controls inter-pod communication in the cluster, will cause serious issues. Rendering out to YAML via `helm template` and running `kubectl apply` means that all the logic for reconciling cluster state with the target state is handled by the Kubernetes cluster itself, which is really where it needs to happen.
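If you want to look at that internal state yourself: with the default storage backend, Tiller stores each release revision as a ConfigMap labeled `OWNER=TILLER` in its own namespace (`kube-system` by default). A small helper to list them might look like this (the function name is mine, and the namespace argument is just a convenience):

```shell
#!/usr/bin/env bash
set -euo pipefail

# List the ConfigMaps Tiller uses to track release state.
# Assumes Helm 2's default storage backend (ConfigMaps in kube-system);
# Tiller labels each release revision with OWNER=TILLER.
list_tiller_releases() {
  local namespace="${1:-kube-system}"
  kubectl -n "$namespace" get configmaps -l OWNER=TILLER \
    -o custom-columns=NAME:.metadata.name
}
```

Comparing what you see there against the live resources is a quick way to spot the kind of state drift described above.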
I'd also be cautious about mixing the Terraform code that deploys your infrastructure with the code used to deploy things onto said infrastructure. This may be fixed in Terraform 0.12, but in Terraform 0.11 it was a bit tricky to write modules such that they would wait for the Kubernetes cluster to be ready to accept kubectl commands before trying (and failing) to run the kubectl apply script. Working around this issue is doable, but it may be better to separate the cluster-creation code from the cluster-bootstrapping code.
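If you do keep the apply step in the same Terraform run, one workaround is to have the bootstrap script poll the API server before applying anything. A minimal sketch (the retry count and delay are arbitrary, and `kubectl get nodes` is just a cheap readiness probe):

```shell
#!/usr/bin/env bash
set -euo pipefail

# wait_for_cluster RETRIES DELAY_SECONDS
# Poll until `kubectl get nodes` succeeds, so the bootstrap script
# doesn't fire `kubectl apply` at a half-created cluster.
wait_for_cluster() {
  local retries="$1" delay="$2" attempt=1
  until kubectl get nodes >/dev/null 2>&1; do
    if (( attempt >= retries )); then
      echo "cluster not ready after $retries attempts" >&2
      return 1
    fi
    echo "waiting for cluster (attempt $attempt/$retries)..."
    attempt=$((attempt + 1))
    sleep "$delay"
  done
  echo "cluster is ready"
}
```

In Terraform 0.11 this would typically be wired up through a `null_resource` with a `local-exec` provisioner that depends on the cluster resource, but splitting the two into separate Terraform runs avoids the problem entirely.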
If you want to continue down this path, I can give you some additional resources. Just let me know a bit more about what you are trying to accomplish so I can make sure they are what you need.