I’ve performed an Istio upgrade, which apparently only takes care of the control plane.
I didn’t know that; another cluster I manage ended up with a really big version skew because of it.
istioctl tells me there’s a version skew, e.g.:
$ istioctl --kubeconfig ~/.kube/config-mycluster version --remote
client version: 1.6.6
control plane version: 1.6.6
data plane version: 1.6.6 (6 proxies), 1.6.5 (12 proxies)
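The closest per-proxy view I’ve found is istioctl proxy-status, whose last column (on 1.6 at least) appears to be each sidecar’s proxy version. A small awk filter over it is a sketch of what I mean; the column layout is an assumption:

```shell
# Sketch, assuming proxy-status prints one row per sidecar with the
# proxy version as the last column (true on 1.6 as far as I can tell).
list_stale_proxies() {
  # reads `istioctl proxy-status` output on stdin; $1 is the target
  # version; prints the pod names whose proxy version differs from it
  awk -v target="$1" 'NR>1 && $NF != target {print $1}'
}
# usage (assumes istioctl and cluster access):
# istioctl --kubeconfig ~/.kube/config-mycluster proxy-status | list_stale_proxies 1.6.6
```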
Now, in order to upgrade the data plane, I apparently have to manually restart the deployments/statefulsets etc. that have a sidecar injected. That sounds invasive and like a lot of manual work. Is that really necessary? I see other posts reaching the same conclusion, e.g. Data plane in place upgrade?
I’d like to know which of my components are still behind in order to get the data plane on par with the control plane.
- Does istioctl have a way to list more details? I couldn’t find one in the ‘version’ subcommand.
- I have tried using kubectl get ... --field-selector=..., but I was unsuccessful, as field selectors cannot match on annotation values in the pod template (spec -> template -> metadata -> annotations -> sidecar.istio.io/inject=true). This seems to be by design of kubectl, which does not allow annotations to be used for this.
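Since field selectors are out, inspecting the sidecar image tag directly with jq seems like a workable alternative. A rough sketch; the istio-proxy container name and the image-tag convention are assumptions about how injection names things in my cluster:

```shell
# Sketch, assuming injected sidecars are containers named "istio-proxy"
# and their image tag matches the proxy version (e.g. proxyv2:1.6.5).
stale_sidecar_pods() {
  # reads `kubectl get pods -o json` on stdin; $1 is the target version;
  # prints namespace/pod and image for sidecars not yet on that tag
  jq -r --arg tag ":$1" '.items[]
    | . as $pod
    | .spec.containers[]
    | select(.name == "istio-proxy" and (.image | endswith($tag) | not))
    | "\($pod.metadata.namespace)/\($pod.metadata.name) \(.image)"'
}
# usage (assumes cluster access):
# kubectl get pods --all-namespaces -o json | stale_sidecar_pods 1.6.6
```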
I was hoping for a way to get to a single command that would allow me to get this over with:
for <each namespace>; do kubectl rollout restart --selector=...; done
FWIW, my cluster has no automatic sidecar injection policy; injection is enabled manually on some workloads via the spec -> template -> metadata -> annotations -> sidecar.istio.io/inject annotation.
(I absolutely need to avoid restarting the services without a sidecar proxy, as that would be quite disruptive: some big, heavy services that don’t involve Istio at all take loads of resources and time to restart.)
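To make that concrete, here’s a sketch of the kind of loop I’m hoping for, filtering on the injection annotation with jq so only sidecar-carrying workloads get restarted (the kubectl invocations are commented out so nothing actually restarts here; jq availability is assumed):

```shell
# Sketch: select only deployments/statefulsets whose pod template carries
# the sidecar.istio.io/inject=true annotation, then rollout-restart them.
annotated_workloads() {
  # reads `kubectl get deployments,statefulsets -o json` on stdin;
  # prints "<kind>/<name> <namespace>" for each injected workload
  jq -r '.items[]
    | select(.spec.template.metadata.annotations["sidecar.istio.io/inject"] == "true")
    | "\(.kind | ascii_downcase)/\(.metadata.name) \(.metadata.namespace)"'
}
# usage (assumes cluster access):
# kubectl get deployments,statefulsets --all-namespaces -o json \
#   | annotated_workloads \
#   | while read -r workload ns; do
#       kubectl rollout restart "$workload" -n "$ns"
#     done
```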
How do people do this at scale with hundreds of services? I can imagine growing to hundreds of these clusters, and I don’t want to keep monitoring for version skews everywhere.