Istio Multicluster

I’m trying to set up a multicluster environment here. I’ve got two clusters, and the pods can ping each other. To achieve that, I added a route on the cluster-a nodes to the cluster-b pod CIDR, using the cluster-b master node as the router, and did the same thing on cluster-b: a route on the cluster-b nodes to the cluster-a pod CIDR via the cluster-a master node. That way, all the pods can ping each other.
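For reference, the routes look roughly like this (the CIDRs and master-node IPs below are made-up examples, not my real addresses):

```shell
# On each cluster-a node: route cluster-b's pod CIDR via cluster-b's master.
sudo ip route add 10.20.0.0/16 via 192.168.1.20

# On each cluster-b node: the mirror-image route back to cluster-a's pod CIDR.
sudo ip route add 10.10.0.0/16 via 192.168.1.10
```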
I’ve followed the Istio MultiCluster Setup here:
The only difference is that I didn’t use the envoyStatsd options.
The mesh seems to work fine: I deployed bookinfo on cluster-a and deleted the reviews-v3 deployment, then deployed reviews-v3 on cluster-b together with the reviews and ratings services (reviews-v3 needs to call the ratings service).
The application works fine and the Jaeger UI shows traces correctly; however, ServiceGraph/Kiali shows the remote workload as unknown. While troubleshooting, the only clue I’ve found is in the Mixer debug logs:

could not find pod for (uid: kubernetes://reviews-v3-5b994cb49d-7nblf.default, key: default/reviews-v3-5b994cb49d-7nblf)

Any help would be appreciated.


I believe you need to set up the kubernetesenv adapter to point at both API servers (I don’t think that this was added to the docs you reference).
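If I remember the 1.1 mechanism correctly, the remote cluster’s credentials are registered as a kubeconfig secret in istio-system carrying a label that the control plane watches for. A sketch (secret name and kubeconfig path are examples):

```shell
# Register cluster-b's credentials so adapters like kubernetesenv can
# resolve pods from both API servers. Names/paths here are illustrative.
kubectl create secret generic cluster-b-kubeconfig \
  --namespace istio-system \
  --from-file=cluster-b=/path/to/cluster-b.kubeconfig

# Label the secret so it is picked up as a remote-cluster credential.
kubectl label secret cluster-b-kubeconfig \
  --namespace istio-system istio/multiCluster=true
```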

The PR that added the Mixer functionality was:

IIRC, this functionality hasn’t been backported to 1.0, so it is only available in 1.1 snapshots.

I have not personally verified that it works, but I believe that the authors of the PR were/have been testing with the changes.

Hope that helps,


Thanks for the feedback. I’m trying to figure out how to run 1.1.0-snapshot4 here; the helm command is failing. Do you think taking the YAML from my 1.0.5 install and just changing the Mixer container image tag to 1.1.0-snapshot4 would do the trick? Any advice on how to generate the YAML with helm on 1.1.0-snapshot4?

Error: found in requirements.yaml, but missing in charts/ directory: sidecarInjectorWebhook, security, ingress, gateways, mixer, nodeagent, pilot, grafana, prometheus, servicegraph, tracing, galley, kiali, istiocoredns, certmanager, telemetry-gateway
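For reference, the way I’d expect to render the manifests locally looks something like this (chart path per the release tarball layout; release name and namespace are the usual defaults):

```shell
# Render the istio chart to plain YAML without Tiller, then apply it.
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  > istio-generated.yaml
kubectl apply -f istio-generated.yaml
```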

Thanks again,

Solved this one by copying all the contents of subcharts to istio/charts/. Will keep working on it.
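In case anyone hits the same error, the workaround was along these lines (directory names as they appear in the release layout):

```shell
# Copy the subcharts into the main chart's charts/ directory so helm
# finds the dependencies listed in requirements.yaml.
cd install/kubernetes/helm
cp -a subcharts/. istio/charts/
```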

I’ve got 1.1.0-snapshot4 running and am still getting the error:

2019-01-12T19:28:06.007672Z     debug   api     Dispatching Preprocess
2019-01-12T19:28:06.007748Z     debug   api     Dispatching Preprocess
2019-01-12T19:28:06.007901Z     debug   begin dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)'
2019-01-12T19:28:06.007996Z     debug   begin dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)'
2019-01-12T19:28:06.007987Z     debug   adapters        could not find pod for (uid: kubernetes://reviews-v3-5b994cb49d-6xhxz.default, key: default/reviews-v3-5b994cb49d-6xhxz)        {"adapter": "kubernetesenv.istio-system"}
2019-01-12T19:28:06.008305Z     debug   complete dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)' {err:<nil>}
2019-01-12T19:28:06.008391Z     debug   complete dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)' {err:<nil>}

Got a trace for another call here:

2019-01-12T19:38:20.858882Z     debug   adapters        could not find pod for (uid: kubernetes://reviews-v3-5b994cb49d-6xhxz.default, key: default/reviews-v3-5b994cb49d-6xhxz)        {"adapter": "kubernetesenv.istio-system"}

Any advice? Should I open something on github?

Can you post the config you are using for the kubernetesenv handler?

I’ve opened an issue on the Istio GitHub as well and posted some details there. I’m not sure how to get that detail for you; can you explain? Should I run some kind of Kubernetes dump?
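In case this is what you mean, I can try dumping the handler resource with something like this (I’m guessing at the resource kind and namespace from the default install):

```shell
# One of these should print the kubernetesenv handler configuration.
kubectl -n istio-system get kubernetesenv -o yaml
kubectl -n istio-system get handler -o yaml
```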
The only detail I see is in this resource description:



apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  annotations:
    helm.sh/hook: crd-install
  creationTimestamp: "2019-01-12T17:47:50Z"
  generation: 1
  labels:
    app: mixer
    chart: istio
    heritage: Tiller
    istio: mixer-adapter
    package: kubernetesenv
    release: istio
  resourceVersion: "21074"
  selfLink: /apis/
  uid: 2e3f0cf4-1692-11e9-a6d6-0021f613393c
spec:
  conversion:
    strategy: None
  names:
    categories:
    - istio-io
    - policy-istio-io
    kind: kubernetesenv
    listKind: kubernetesenvList
    plural: kubernetesenvs
    singular: kubernetesenv
  scope: Namespaced
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
status:
  acceptedNames:
    categories:
    - istio-io
    - policy-istio-io
    kind: kubernetesenv
    listKind: kubernetesenvList
    plural: kubernetesenvs
    singular: kubernetesenv
  conditions:
  - lastTransitionTime: "2019-01-12T17:47:50Z"
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: null
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha2

is this what you’re looking for?

According to , it defaults to istio-system when the clusterRegistriesNamespace parameter is not set. That is exactly where the cluster-b secret is.
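A quick way to double-check the remote secret is where the adapter expects it (label per the 1.1 multicluster docs):

```shell
# List the remote-cluster kubeconfig secrets Mixer/Pilot should pick up.
kubectl -n istio-system get secrets -l istio/multiCluster=true
```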

Today the pod was showing some RBAC problems, which is strange, as I haven’t changed anything. To work around the RBAC issues, I created a clusterrolebinding on the remote cluster giving istio-multi cluster-admin.
Now it seems to work; however, I see everything twice, as per the screenshot.
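The workaround I applied on the remote cluster was roughly this (cluster-admin is far too broad, so this is for debugging only):

```shell
# Grant the istio-multi service account cluster-admin on the remote cluster.
kubectl create clusterrolebinding istio-multi-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=istio-system:istio-multi
```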


Happy to see stuff is working-ish now. Just curious: do you have multiple deployments of bookinfo?


Only one deployment. Unfortunately I’ve got a problem with my private cloud storage, so I’m stuck right now; I’ll need to replace one of the hard drives. As soon as I fix that, I’ll get back to testing.


Doug, I finally managed to get it working. I hit some issues along the way and would like to contribute; can you help me out? There are two things that I think need to be corrected in Istio.


Sure. Would love to help! Please let me know how.

From what I’ve seen here, there are two charts that need work. May I submit pull requests on GitHub?
First, when I create the Istio control plane using install/kubernetes/helm/istio, it doesn’t include the kiali secret, which makes the kiali service crash (1.1.0-snapshot5).
Second, when I create the istio-remote using install/kubernetes/helm/istio-remote, the clusterrole for istio-reader is not correct: apiGroups needs to be changed to *, and replicasets and replicationcontrollers need to be added to the resources.
Those are the two I’ve got at the top of my head.


Please! PRs are most definitely welcome.

Re the first problem: this is by design, for security reasons. The website at has instructions for getting things moving, although they are 1.1 solutions. The Kiali community is working on making Kiali at least not crash when the secret is not present.
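If memory serves, the gist of those instructions is creating the login secret by hand before installing, something like this (the credentials here are placeholder examples):

```shell
# Create the kiali secret the chart expects; username/passphrase are the
# keys Kiali reads for its login page.
kubectl create secret generic kiali -n istio-system \
  --from-literal=username=admin \
  --from-literal=passphrase=admin
```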

The second problem we are fixing by consolidating istio-remote into the istio chart. I am unclear whether this work will be merged into the 1.1 branch. By consolidating config details such as RBAC into one chart, we will be maintaining only one set of manifests, which lets us fix this bug automatically in 1.1.

Are you using 1.0? If so, we could use a PR for the second problem, although 1.0 was extensively tested. Perhaps something changed in a later version of Kubernetes.


re: kiali crashing when its secret is not present

That has been fixed. It is this:

The latest Kiali release, v0.14, has this fix. I’ll be submitting a PR to Istio’s helm charts soon so it pulls this latest version.


It was 1.1.0-snapshot5. I only had to make those two little changes to the istio-reader ClusterRole for it to stop erroring out.
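A sketch of the widened ClusterRole I ended up with, per the two changes described above (the exact resource list beyond the additions is an assumption on my part):

```shell
# Apply an istio-reader ClusterRole with apiGroups "*" and with
# replicasets and replicationcontrollers added to the resources.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-reader
rules:
- apiGroups: ["*"]
  resources:
  - nodes
  - pods
  - services
  - endpoints
  - replicasets
  - replicationcontrollers
  verbs: ["get", "watch", "list"]
EOF
```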