I am looking to deploy Istio using the CNI plugin on a local MicroK8s cluster. Has anyone managed this?
The difficulties I am having stem from the read-only nature of the snap-mounted MicroK8s file system: the install-cni container in the cni-node pod cannot copy its binaries over to the configured cniBinDir.
On the other hand, if I manually configure the cniBinDir and copy over the native CNI binaries, the Istio install-cni container is able to complete, but the native binaries fail to execute, so the pods are left without IPs and unable to start up.
I cannot find any documentation about using the CNI with MicroK8s on the Istio site. Has anyone got this up and running successfully? I’m thinking there is perhaps some nuance in the configuration I am missing!
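For reference, what I am installing with looks something like the sketch below (just my setup, with the CNI component enabled and everything else left at defaults):

```yaml
# Minimal sketch of what I'm applying with `istioctl install -f <file>`.
# With no cniBinDir override, install-cni targets the default /opt/cni/bin,
# which on MicroK8s ends up on the read-only snap mount.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    cni:
      enabled: true
```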
Hey @jesper, nice! I also tried this, but hit some issues with the MicroK8s CNI when I altered the kubelet config to use the new location. Did you not hit this, or did you just not alter the kubelet config?
There’s no need to alter the kubelet config. It works out of the box when you simply set cniBinDir to /var/snap/microk8s/current/opt/cni/bin in the IstioOperator.
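The relevant part of my IstioOperator looks roughly like this (the cniConfDir below is the MicroK8s default on my cluster; check your own kubelet args if yours differs):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    cni:
      enabled: true
  values:
    cni:
      # Writable snap data directory, instead of the default /opt/cni/bin
      cniBinDir: /var/snap/microk8s/current/opt/cni/bin
      # Assumed MicroK8s default; must match the kubelet's CNI conf dir
      cniConfDir: /var/snap/microk8s/current/args/cni-network
```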
I guess it works “under the hood” as a way to modify /snap/microk8s/current/opt/cni/bin, which is otherwise write-protected because of the snap installation method.