Istio multi-cluster multi-primary install not adding routes to either cluster

Hi all! I am trying to set up a multi-cluster Bookinfo application with Calico as the CNI and Istio installed multi-primary on the same network.

Versions:
kubectl/kubeadm/kubelet - 1.23.4
istio - 1.13.4
cni-calico - 0.3.1

Layout for cluster 1:

NAME   STATUS   ROLES                  AGE   VERSION
vm01   Ready    control-plane,master   36d   v1.23.4
vm02   Ready    <none>                 36d   v1.23.4
vm03   Ready    <none>                 36d   v1.23.4

Layout for cluster 2:

NAME   STATUS   ROLES                  AGE   VERSION
vm04   Ready    control-plane,master   36d   v1.23.4
vm05   Ready    <none>                 36d   v1.23.4

The internal IPs are identical except for the last octet, which runs from 101 to 105 for vm01 through vm05 respectively.

For reference, I followed this tutorial for setting up Istio and the necessary gateways:
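For context, the multi-primary, same-network install from the Istio docs looks roughly like this for cluster 1 (the mesh1/cluster1/network1 values and the CTX_CLUSTER1 context variable are the docs' placeholders, not necessarily my exact names; cluster 2 is the same with clusterName: cluster2):

# install the control plane with mesh/cluster/network identity set
cat <<EOF | istioctl install --context="${CTX_CLUSTER1}" -y -f -
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster1
      network: network1
EOF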

For debugging the multi-cluster setup:

I followed this tutorial to ensure that the CA certs were correct:
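Concretely, both clusters got a cacerts secret generated from the same root CA, roughly as in that plug-in-CA guide (the paths and names below are the guide's defaults, run from the Istio release directory):

# generate a shared root CA plus an intermediate CA for cluster 1
mkdir -p certs && pushd certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
make -f ../tools/certs/Makefile.selfsigned.mk cluster1-cacerts

# create the cacerts secret that istiod reads on startup
# (repeated per cluster with that cluster's intermediate files)
kubectl create secret generic cacerts -n istio-system \
  --from-file=cluster1/ca-cert.pem \
  --from-file=cluster1/ca-key.pem \
  --from-file=cluster1/root-cert.pem \
  --from-file=cluster1/cert-chain.pem
popd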

I was able to complete all of the steps in the cluster setup and enabled endpoint discovery with istioctl x create-remote-secret. I verified that each cluster had the same token key and that both secrets were stored in the istio-system namespace.
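The remote secrets were created roughly like this, in both directions (CTX_CLUSTER1 and CTX_CLUSTER2 are placeholders for my two kubeconfig contexts):

# give cluster 2's istiod access to cluster 1's API server
istioctl x create-remote-secret \
  --context="${CTX_CLUSTER1}" \
  --name=cluster1 | \
  kubectl apply -f - --context="${CTX_CLUSTER2}"

# and the reverse direction
istioctl x create-remote-secret \
  --context="${CTX_CLUSTER2}" \
  --name=cluster2 | \
  kubectl apply -f - --context="${CTX_CLUSTER1}"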

Running kubectl get po -A -o wide in cluster 1 gives me the following output:

NAMESPACE      NAME                                      READY   STATUS    RESTARTS       AGE     IP                NODE   NOMINATED NODE   READINESS GATES
istio-system   istio-ingressgateway-fb894bf95-dvd8j      1/1     Running   0              91m     192.168.99.2      vm02   <none>           <none>
istio-system   istiod-764b78c79-pgxfd                    1/1     Running   0              178m    192.168.70.23     vm03   <none>           <none>
kube-system    calico-kube-controllers-6b77fff45-2lf7b   1/1     Running   0              133m    192.168.99.1      vm02   <none>           <none>
kube-system    calico-node-j98sz                         1/1     Running   0              36d     <vm03-ip>.103     vm03   <none>           <none>
kube-system    calico-node-mdmfp                         1/1     Running   1 (36d ago)    36d     <vm01-ip>.101     vm01   <none>           <none>
kube-system    calico-node-vw7hk                         1/1     Running   0              36d     <vm02-ip>.102     vm02   <none>           <none>
kube-system    coredns-64897985d-f4bjr                   1/1     Running   10 (36d ago)   36d     192.168.197.130   vm01   <none>           <none>
kube-system    coredns-64897985d-z9j89                   1/1     Running   1 (36d ago)    36d     192.168.197.129   vm01   <none>           <none>
kube-system    etcd-vm01                                 1/1     Running   3 (36d ago)    36d     <vm01-ip>.101     vm01   <none>           <none>
kube-system    kube-apiserver-vm01                       1/1     Running   2 (36d ago)    36d     <vm01-ip>.101     vm01   <none>           <none>
kube-system    kube-controller-manager-vm01              1/1     Running   3 (36d ago)    36d     <vm01-ip>.101     vm01   <none>           <none>
kube-system    kube-proxy-9jkjh                          1/1     Running   0              36d     <vm03-ip>.103     vm03   <none>           <none>
kube-system    kube-proxy-v9qjh                          1/1     Running   0              36d     <vm02-ip>.102     vm02   <none>           <none>
kube-system    kube-proxy-x7c2v                          1/1     Running   1 (36d ago)    36d     <vm01-ip>.101     vm01   <none>           <none>
kube-system    kube-scheduler-vm01                       1/1     Running   2 (36d ago)    36d     <vm01-ip>.101     vm01   <none>           <none>

Running the same command in cluster 2 gives me:

NAMESPACE      NAME                                      READY   STATUS    RESTARTS   AGE    IP                NODE   NOMINATED NODE   READINESS GATES
istio-system   istio-ingressgateway-76dfc8b8d4-qxxl5     1/1     Running   0          133m   192.168.236.39    vm05   <none>           <none>
istio-system   istiod-6c9949bdb6-j5m8w                   1/1     Running   0          24d    192.168.236.24    vm05   <none>           <none>
kube-system    calico-kube-controllers-6b77fff45-k9cmr   1/1     Running   0          174m   192.168.211.129   vm04   <none>           <none>
kube-system    calico-node-smvxw                         1/1     Running   0          36d    <vm04-ip>.104     vm04   <none>           <none>
kube-system    calico-node-xq6rc                         1/1     Running   0          36d    <vm05-ip>.105     vm05   <none>           <none>
kube-system    coredns-64897985d-hdp9x                   1/1     Running   0          36d    192.168.236.3     vm05   <none>           <none>
kube-system    coredns-64897985d-kvglc                   1/1     Running   0          36d    192.168.236.2     vm05   <none>           <none>
kube-system    etcd-vm04                                 1/1     Running   0          36d    <vm04-ip>.104     vm04   <none>           <none>
kube-system    kube-apiserver-vm04                       1/1     Running   0          36d    <vm04-ip>.104     vm04   <none>           <none>
kube-system    kube-controller-manager-vm04              1/1     Running   0          36d    <vm04-ip>.104     vm04   <none>           <none>
kube-system    kube-proxy-49jkv                          1/1     Running   0          36d    <vm05-ip>.105     vm05   <none>           <none>
kube-system    kube-proxy-mllb5                          1/1     Running   0          36d    <vm04-ip>.104     vm04   <none>           <none>
kube-system    kube-scheduler-vm04                       1/1     Running   0          36d    <vm04-ip>.104     vm04   <none>           <none>


Within each cluster, I am able to ping all of the IPs that the pods are using and get a response, which is expected. However, after enabling the endpoints between the clusters, I would also expect the pods in the other cluster to be reachable, or at least pingable. When I try to add a static route with ip route add, it is automatically removed from the table. I'm not certain whether it is Calico, Istio, or the machine I am running on that removes it. When I tried to verify that cross-cluster traffic worked as expected using this tutorial here, every response came from the same local instance, never the remote cluster's version:

Hello version: v1, instance: helloworld-v1-fdb8c8c58-hjwwc
Hello version: v1, instance: helloworld-v1-fdb8c8c58-hjwwc
Hello version: v1, instance: helloworld-v1-fdb8c8c58-hjwwc
...
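For reference, the verification call was along the lines of the one in that guide, run repeatedly from the sleep pod in cluster 1 (CTX_CLUSTER1 is a placeholder for my cluster 1 kubeconfig context):

# call the helloworld service from cluster 1's sleep pod
kubectl exec --context="${CTX_CLUSTER1}" -n sample -c sleep \
  "$(kubectl get pod --context="${CTX_CLUSTER1}" -n sample -l app=sleep \
     -o jsonpath='{.items[0].metadata.name}')" \
  -- curl -sS helloworld.sample:5000/hello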

Here is the output of ip route for cluster 1 (from vm01):

default via <ip-addr>.254 dev ens192 proto static metric 100
<vm01-ip-addr>/24 dev ens192 proto kernel scope link src <vm01-ip-addr>.101 metric 100
<pub-ip-addr>/16 dev docker0 proto kernel scope link src <pub-ip-addr>
<pub-ip-addr>/16 dev br-710b985cba4a proto kernel scope link src <pub-ip-addr>
<pub-ip-addr>/16 dev br-b859b5e776e1 proto kernel scope link src <pub-ip-addr>
192.168.70.0/26 via <vm03-ip-addr>.103 dev tunl0 proto bird onlink
192.168.99.0/26 via <vm02-ip-addr>.102 dev tunl0 proto bird onlink
blackhole 192.168.197.128/26 proto bird
192.168.197.129 dev cali75bab066177 scope link
192.168.197.130 dev cali9163325cd04 scope link

Here is the output of ip route for cluster 2 (from vm04):

default via <ip-addr>.254 dev ens192 proto static metric 100
<vm04-ip-addr>/24 dev ens192 proto kernel scope link src <vm04-ip-addr>.104 metric 100
<pub-ip-addr>/16 dev docker0 proto kernel scope link src <pub-ip-addr>
blackhole 192.168.211.128/26 proto bird
192.168.211.129 dev calib0d7e98dac5 scope link
192.168.236.0/26 via <vm05-ip-addr>.105 dev tunl0 proto bird onlink

Calico created a tunl0 route for each of the worker nodes under its master, but the clusters are still unable to find each other. The routes that would let the clusters communicate are not being added automatically when the endpoints are exposed to each other, and any time I try to add a static route it gets deleted; I'm not sure which process is causing that.
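In case it helps anyone answering, this is roughly how I have been trying to narrow it down; the pod CIDR, node IP, and sleep pod name below are placeholders:

# watch routing table changes in one terminal so the deletion
# of the static route shows up as it happens
ip monitor route

# in another terminal, re-add the route that keeps disappearing
sudo ip route add <remote-pod-cidr> via <remote-node-ip> dev ens192

# check whether the sleep pod's sidecar ever learns endpoints from
# the remote cluster (run against each cluster's context)
istioctl proxy-config endpoint <sleep-pod>.sample | grep helloworld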

Any help or suggestions for getting a multi-cluster Bookinfo application up would be great and much appreciated. Thank you!