How to select all auto-injected-sidecar deployments for data plane upgrade?

I’ve performed an upgrade of Istio which, apparently, only takes care of the control plane.
I didn’t know that; another cluster I manage had built up quite a big skew. :confused:

Now, istioctl tells me there’s a version skew, e.g.:

$ istioctl --kubeconfig ~/.kube/config-mycluster version --remote
client version: 1.6.6
control plane version: 1.6.6
data plane version: 1.6.6 (6 proxies), 1.6.5 (12 proxies)

Now, in order to upgrade the data plane, I have to manually restart the deployments/statefulsets etc. that have a sidecar injected. That sounds invasive and like a lot of manual work. Is that really necessary? I see other posts with the same conclusion, e.g. Data plane in place upgrade?

I’d like to know which of my components are still behind in order to get the data plane on par with the control plane.

  • Does istioctl have a way to list more details? Couldn’t find one in the ‘version’ subcommand.
  • I have tried using kubectl get ... --field-selector=..., but I was unsuccessful, as I am not able to match on metadata annotation values (spec -> template -> metadata -> annotations). This seems to be by design of kubectl: field selectors cannot match on annotations. :confused:
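For what it's worth, since field selectors can't match annotations, one client-side workaround is to filter with jq. A minimal sketch, assuming the standard sidecar.istio.io/inject annotation on the pod template (adjust the key if yours differs):

```shell
# List deployments whose pod template carries the sidecar annotation.
# Filtering happens client-side with jq, since kubectl's --field-selector
# cannot match on annotations.
kubectl get deployments --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.template.metadata.annotations["sidecar.istio.io/inject"] == "true")
      | "\(.metadata.namespace)/\(.metadata.name)"'
```

The same filter works for statefulsets or daemonsets by swapping the resource name.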

I was hoping to get this down to a single command per namespace, something like:

for <each namespace>; do kubectl rollout restart --selector=...; done

FWIW, my cluster does not use an automatic sidecar injection policy; injection is enabled manually on some workloads via an annotation under spec -> template -> metadata -> annotations.
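One way to make that selector-driven one-liner possible (a workaround of my own, not built-in istioctl functionality) would be to put a label on every workload where injection is enabled, then restart by that label. The istio-injected=true label below is just an example name:

```shell
# One-time setup: label each workload that has the sidecar annotation
# (hypothetical label key/value; pick whatever fits your conventions).
kubectl -n my-namespace label deployment my-app istio-injected=true

# Afterwards the restart loop becomes selector-driven:
for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
    kubectl -n "$ns" rollout restart deployment --selector=istio-injected=true
done
```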

(I absolutely need to avoid restarting the services that don’t have a sidecar proxy: it’s quite disruptive, and for some big, heavy services that don’t involve Istio at all it takes a lot of resources and time.)

How do people do this at scale with hundreds of services? I can imagine scaling up to hundreds of these, and I don’t want to keep monitoring for version skew everywhere.

I think your requirement is very modest and reasonable.
For me, I don’t even want a rolling update. The sidecar is an infrastructure piece; it should not interfere with my business workloads. Many monitoring and config tools are tied to pod ID/IP, and restarting pods is part of the CD process. Doing rolling upgrades for the sidecar means these tools have to treat a sidecar upgrade as a workload upgrade. That is invasive.

@gertvdijk you can probably use the following script to do so:

# Triggers restart of pods with an outdated Istio proxy (data plane)

# Take the ingress gateway's proxy version as the target version
newVersion=$(istioctl proxy-status | grep -i istio-ingressgateway | awk '{print $7}')

# Collect all proxies not on that version; NAME is "<pod>.<namespace>"
podList=($(istioctl proxy-status | grep -iv "${newVersion}" | awk 'NR>1 {print $1}'))

for entry in "${podList[@]}"; do
    pod=${entry%.*}         # pod name: everything before the last dot
    namespace=${entry##*.}  # namespace: everything after the last dot
    echo "restarting pod ${pod} in namespace ${namespace}..."
    kubectl delete pod "${pod}" -n "${namespace}" --wait=false
done
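If deleting pods directly is too blunt, a variant (a sketch under assumptions, not tested at scale) is to restart the owning workload instead, so kubectl rollout manages availability. This assumes the usual Pod -> ReplicaSet -> Deployment ownership chain; StatefulSets and DaemonSets can be handled analogously from the first ownerReference:

```shell
# Resolve a pod's owning Deployment and restart it via rollout.
entry="my-app-5c9f7d-abcde.default"   # hypothetical "<pod>.<namespace>" entry
pod=${entry%.*}; namespace=${entry##*.}
rs=$(kubectl -n "$namespace" get pod "$pod" \
       -o jsonpath='{.metadata.ownerReferences[0].name}')
deploy=$(kubectl -n "$namespace" get rs "$rs" \
       -o jsonpath='{.metadata.ownerReferences[0].name}')
kubectl -n "$namespace" rollout restart "deployment/${deploy}"
```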