Kubectl drain with istiod 1.5.10

We have a single istiod pod (v1.5.10) deployed in our cluster. We need to run kubectl drain on our nodes. However, we are unable to drain the node that istiod is running on because it has a PodDisruptionBudget that says minAvailable 1. What should we do? Should we manually scale istiod to 2 pods before draining? Not sure of the implications of having two istiod pods running side-by-side.
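For reference, a hedged sketch of the drain workflow being asked about (assumes the default `istio-system` namespace and the default `istiod` deployment/PDB names from the 1.5 install; adjust to your cluster):

```shell
# Inspect the PodDisruptionBudget that is blocking the drain
kubectl -n istio-system get pdb istiod -o yaml

# Scale istiod to 2 replicas and wait for the new pod to become ready
kubectl -n istio-system scale deployment istiod --replicas=2
kubectl -n istio-system rollout status deployment istiod

# The drain can now evict one istiod pod while the PDB keeps one available
kubectl drain <node-name> --ignore-daemonsets --delete-local-data
```

This is a cluster-dependent sketch, not something to paste blindly; `--delete-local-data` was the flag name in kubectl of that era (later renamed `--delete-emptydir-data`).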

istiod is the control plane and isn't in the data path, so it can be down for some time without affecting existing traffic.
However, new sidecar-injected pods will fail to start while istiod is down (restarts of existing sidecar-injected pods will likely be affected too, until istiod is back up), since sidecar injection and xDS configuration both depend on it.

So a single istiod pod isn't a big problem for a short drain.

But as a best practice you should keep multiple replicas running, so increasing istiod to at least 2 makes sense. Running two istiod pods side by side is fine; istiod is stateless, and multiple replicas are the standard HA setup. In our environment we have minReplicas set to 2, and there is also an HPA that scales istiod up under CPU load.
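If the install is managed with istioctl, the replica floor can be made persistent through the IstioOperator API instead of a one-off `kubectl scale` (which an upgrade would reset). A minimal sketch, assuming an istioctl-based 1.5 install; the exact field defaults are merged from the profile:

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        # Keep at least 2 istiod replicas so a drain can always evict one
        hpaSpec:
          minReplicas: 2
          maxReplicas: 5
```

Applied with `istioctl manifest apply -f <file>` on 1.5; verify the resulting HPA with `kubectl -n istio-system get hpa`.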