Istio Multicluster

I’m trying to set up a multicluster environment here. I have two clusters, and the pods can ping each other. To achieve that, I added a route on the cluster-a nodes to the cluster-b pod CIDR, using the cluster-b master node as the gateway, and did the same thing on cluster-b: a route on the cluster-b nodes to the cluster-a pod CIDR, using the cluster-a master node as the gateway. That way, all the pods can ping each other.
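For reference, the routes look roughly like this on each node (the pod CIDRs are inferred from the IPs in the logs below; the master-node IPs are illustrative placeholders):

# On every cluster-a node: reach the cluster-b pod CIDR via the cluster-b master node.
ip route add 10.61.0.0/16 via 192.168.1.20
# On every cluster-b node: the mirror-image route.
ip route add 10.51.0.0/16 via 192.168.1.10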
I’ve followed the Istio multicluster setup guide here: https://istio.io/docs/setup/kubernetes/multicluster-install/
The only difference is I didn’t use the envoyStatsd options.
The cluster seems to work fine. I deployed Bookinfo on cluster-a and deleted the reviews-v3 deployment, then deployed reviews-v3 on cluster-b, together with the reviews and ratings services (reviews-v3 needs to call the ratings service).
The application works fine and the Jaeger UI shows traces correctly; however, ServiceGraph/Kiali shows the remote workload as unknown. While troubleshooting, the only clue I’ve found is in the Mixer debug logs:

could not find pod for (uid: kubernetes://reviews-v3-5b994cb49d-7nblf.default, key: default/reviews-v3-5b994cb49d-7nblf)

{"level":"info","time":"2019-01-11T15:10:21.821678Z","instance":"accesslog.logentry.istio-system","apiClaims":"","apiKey":"","clientTraceId":"","connection_security_policy":"none","destinationApp":"","destinationIp":"10.61.0.2","destinationName":"unknown","destinationNamespace":"default","destinationOwner":"unknown","destinationPrincipal":"","destinationServiceHost":"reviews.default.svc.cluster.local","destinationWorkload":"unknown","httpAuthority":"reviews:9080","latency":"58.440379ms","method":"GET","protocol":"http","receivedBytes":798,"referer":"","reporter":"destination","requestId":"ec775803-97a6-459f-8b70-b7da5c290753","requestSize":0,"requestedServerName":"","responseCode":200,"responseSize":375,"responseTimestamp":"2019-01-11T15:10:21.879868Z","sentBytes":549,"sourceApp":"productpage","sourceIp":"10.51.0.10","sourceName":"productpage-v1-54d799c966-w42qf","sourceNamespace":"default","sourceOwner":"kubernetes://apis/apps/v1/namespaces/default/deployments/productpage-v1","sourcePrincipal":"","sourceWorkload":"productpage-v1","url":"/reviews/0","userAgent":"python-requests/2.18.4","xForwardedFor":"0.0.0.0"}
Any help would be appreciated.

Thanks

I believe you need to set up the kubernetesenv adapter to point at both API servers (I don’t think this was added to the docs you referenced).

The PR that added the Mixer functionality was: https://github.com/istio/istio/pull/8536. IIRC, this functionality hasn’t been backported to 1.0, so it is only available in 1.1 snapshots.

I have not personally verified that it works, but I believe the authors of the PR have been testing with these changes.
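If it helps as a starting point, here is a minimal sketch of a kubernetesenv handler that tells the adapter where to look for remote-cluster kubeconfig secrets. The clusterRegistriesNamespace param name comes from the adapter reference; the resource name and namespace are assumptions, and I haven’t verified this end to end:

apiVersion: config.istio.io/v1alpha2
kind: kubernetesenv
metadata:
  name: handler
  namespace: istio-system
spec:
  # Namespace the adapter watches for remote-cluster kubeconfig secrets.
  clusterRegistriesNamespace: istio-system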

Hope that helps,
Doug.

Doug,

Thanks for the feedback. I’m trying to figure out how to run 1.1.0-snapshot4 here, but the helm command is failing. Do you think taking the YAML from my 1.0.5 install and changing the Mixer container image tag to 1.1.0-snapshot4 would do the trick? Any advice on how to generate the YAML with Helm for 1.1.0-snapshot4?

Error: found in requirements.yaml, but missing in charts/ directory: sidecarInjectorWebhook, security, ingress, gateways, mixer, nodeagent, pilot, grafana, prometheus, servicegraph, tracing, galley, kiali, istiocoredns, certmanager, telemetry-gateway

Thanks again,
Marcelo

Solved this one by copying all the contents of the subcharts directory into istio/charts/. I’ll keep working on it.
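For anyone hitting the same requirements.yaml error, this is roughly what I did (paths assume the Istio release layout, so double-check them in your checkout):

# Copy the bundled subcharts into the main chart's charts/ directory
# so Helm can resolve the dependencies listed in requirements.yaml.
cp -R install/kubernetes/helm/subcharts/* install/kubernetes/helm/istio/charts/

# Then render and apply the manifests (Helm 2 syntax).
helm template install/kubernetes/helm/istio --name istio --namespace istio-system > istio.yaml
kubectl apply -f istio.yaml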

I’ve got 1.1.0-snapshot4 running and am still getting the error:

2019-01-12T19:28:06.007672Z     debug   api     Dispatching Preprocess
2019-01-12T19:28:06.007748Z     debug   api     Dispatching Preprocess
2019-01-12T19:28:06.007901Z     debug   begin dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)'
2019-01-12T19:28:06.007996Z     debug   begin dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)'
2019-01-12T19:28:06.007987Z     debug   adapters        could not find pod for (uid: kubernetes://reviews-v3-5b994cb49d-6xhxz.default, key: default/reviews-v3-5b994cb49d-6xhxz)        {"adapter": "kubernetesenv.istio-system"}
2019-01-12T19:28:06.008305Z     debug   complete dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)' {err:<nil>}
2019-01-12T19:28:06.008391Z     debug   complete dispatch: destination='kubernetes:kubernetesenv.istio-system(kubernetesenv)' {err:<nil>}

Here is a stack trace from another call:

2019-01-12T19:38:20.858882Z     debug   adapters        could not find pod for (uid: kubernetes://reviews-v3-5b994cb49d-6xhxz.default, key: default/reviews-v3-5b994cb49d-6xhxz)        {"adapter": "kubernetesenv.istio-system"}
istio.io/istio/pkg/log.(*Scope).emit
        /workspace/go/src/istio.io/istio/pkg/log/scope.go:281
istio.io/istio/pkg/log.(*Scope).Debug
        /workspace/go/src/istio.io/istio/pkg/log/scope.go:229
istio.io/istio/mixer/pkg/runtime/handler.logger.Debugf
        /workspace/go/src/istio.io/istio/mixer/pkg/runtime/handler/logger.go:66
istio.io/istio/mixer/adapter/kubernetesenv.(*handler).findPod
        /workspace/go/src/istio.io/istio/mixer/adapter/kubernetesenv/kubernetesenv.go:242
istio.io/istio/mixer/adapter/kubernetesenv.(*handler).GenerateKubernetesAttributes
        /workspace/go/src/istio.io/istio/mixer/adapter/kubernetesenv/kubernetesenv.go:211
istio.io/istio/mixer/template.glob..func4
        /workspace/go/src/istio.io/istio/mixer/template/template.gen.go:333
istio.io/istio/mixer/pkg/runtime/dispatcher.(*dispatchState).invokeHandler
        /workspace/go/src/istio.io/istio/mixer/pkg/runtime/dispatcher/dispatchstate.go:143
istio.io/istio/mixer/pkg/runtime/dispatcher.(*dispatchState).(istio.io/istio/mixer/pkg/runtime/dispatcher.invokeHandler)-fm
        /workspace/go/src/istio.io/istio/mixer/pkg/runtime/dispatcher/session.go:271
istio.io/istio/mixer/pkg/pool.(*GoroutinePool).AddWorkers.func1
        /workspace/go/src/istio.io/istio/mixer/pkg/pool/goroutine.go:82

Any advice? Should I open something on GitHub?

Can you post the config you are using for the kubernetesenv handler?
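Something like this should dump it, assuming the default names from the charts:

# List the kubernetesenv handler resources Mixer is using.
kubectl -n istio-system get kubernetesenvs.config.istio.io -o yaml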

I’ve opened an issue on the Istio GitHub as well and posted some details there. I’m not sure how to get that detail for you. Can you explain? Should I run a Kubernetes dump?
The only detail I see is in the pod description:

  --monitoringPort=9093
  --address
  unix:///sock/mixer.socket
  --configStoreURL=mcp://istio-galley.istio-system.svc:9901
  --configDefaultNamespace=istio-system
  --trace_zipkin_url=http://zipkin:9411/api/v1/spans

Thanks,
Marcelo

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  annotations:
    helm.sh/hook: crd-install
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{"helm.sh/hook":"crd-install"},"labels":{"app":"mixer","chart":"istio","heritage":"Tiller","istio":"mixer-adapter","package":"kubernetesenv","release":"istio"},"name":"kubernetesenvs.config.istio.io"},"spec":{"group":"config.istio.io","names":{"categories":["istio-io","policy-istio-io"],"kind":"kubernetesenv","plural":"kubernetesenvs","singular":"kubernetesenv"},"scope":"Namespaced","version":"v1alpha2"}}
  creationTimestamp: "2019-01-12T17:47:50Z"
  generation: 1
  labels:
    app: mixer
    chart: istio
    heritage: Tiller
    istio: mixer-adapter
    package: kubernetesenv
    release: istio
  name: kubernetesenvs.config.istio.io
  resourceVersion: "21074"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/kubernetesenvs.config.istio.io
  uid: 2e3f0cf4-1692-11e9-a6d6-0021f613393c
spec:
  conversion:
    strategy: None
  group: config.istio.io
  names:
    categories:
    - istio-io
    - policy-istio-io
    kind: kubernetesenv
    listKind: kubernetesenvList
    plural: kubernetesenvs
    singular: kubernetesenv
  scope: Namespaced
  version: v1alpha2
  versions:
  - name: v1alpha2
    served: true
    storage: true
status:
  acceptedNames:
    categories:
    - istio-io
    - policy-istio-io
    kind: kubernetesenv
    listKind: kubernetesenvList
    plural: kubernetesenvs
    singular: kubernetesenv
  conditions:
  - lastTransitionTime: "2019-01-12T17:47:50Z"
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: null
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  storedVersions:
  - v1alpha2

Is this what you’re looking for?

According to https://preliminary.istio.io/docs/reference/config/policy-and-telemetry/adapters/kubernetesenv/, clusterRegistriesNamespace defaults to istio-system when the parameter is not set, and that is exactly where the cluster-b secret is.
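For reference, the cluster-b secret was created the way the multicluster guide describes, roughly like this (the secret name and kubeconfig path here are illustrative):

# Store the remote cluster's kubeconfig in istio-system and label it
# so it is discovered as a cluster registry entry.
kubectl -n istio-system create secret generic clusterb --from-file=clusterb=/path/to/clusterb-kubeconfig
kubectl -n istio-system label secret clusterb istio/multiCluster=true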

Today the pod was showing some RBAC problems, which is strange, as I hadn’t changed anything. To work around the RBAC issues, I created a ClusterRoleBinding on the remote cluster giving istio-multi cluster-admin.
Now it seems to work; however, I see everything twice, as per the screenshot.
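The workaround itself was a one-liner on the remote cluster (deliberately over-broad, just to unblock testing; the binding name is arbitrary):

# WARNING: grants full cluster-admin to the istio-multi service account.
kubectl create clusterrolebinding istio-multi-admin --clusterrole=cluster-admin --serviceaccount=istio-system:istio-multi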

Thanks,
Marcelo

Happy to see stuff is working-ish now. Just curious: do you have multiple deployments of bookinfo?

Doug,

Only one deployment. Unfortunately, I’ve got a problem with my private cloud storage, so I’m stuck right now; I’ll need to replace one of the hard drives. As soon as I fix that, I’ll get back to testing.

Thanks,
Marcelo

Doug, I finally managed to get it working… I ran into some issues along the way and would like to contribute. Can you help me out? There are two things that I think need to be corrected in Istio.

Thanks,
Marcelo

Sure. Would love to help! Please let me know how.

From what I’ve seen here, there are two charts that need work. May I submit pull requests on GitHub?
First, when I create the Istio control plane using install/kubernetes/helm/istio, it doesn’t include the Kiali secret, which makes the Kiali service crash (1.1.0-snapshot5).
Second, when I create the istio-remote using install/kubernetes/helm/istio-remote, the ClusterRole for istio-reader is not correct: apiGroups needs to be changed to *, and replicasets and replicationcontrollers need to be added to the resources.
Those are the two that I’ve got at the top of my head.


Please! PRs are most definitely welcome.

Re the first problem: this is by design, for security reasons. The website at preliminary.istio.io has instructions for getting things moving, although they are 1.1 solutions. The Kiali community is working on making Kiali at least not crash when the secret is not present.
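In the meantime, creating the secret by hand should unblock Kiali for testing. Something like this, assuming the default secret name and key names the deployment expects (the credentials are placeholders):

# Create the secret Kiali reads its login credentials from.
kubectl -n istio-system create secret generic kiali --from-literal=username=admin --from-literal=passphrase=admin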

We are fixing the second problem by consolidating istio-remote into the istio chart. I am unclear on whether this work will be merged into the 1.1 branch. By consolidating the charts, config details such as RBAC will live in only one set of manifests, which lets us fix this kind of bug automatically in 1.1.

Are you using 1.0? If so, we could use a PR for the second problem, although 1.0 was extensively tested. Perhaps something changed in a later version of Kubernetes.

Cheers,
Steve

re: kiali crashing when its secret is not present

That has been fixed. It is this:

The latest Kiali release, v0.14, has this fix. I’ll be submitting a PR to Istio’s Helm charts soon so that they pull in this latest release.


It was 1.1.0-snapshot5. I only had to make those two little changes to the istio-reader ClusterRole for it to stop erroring out.
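For reference, the edited istio-reader rule ended up looking roughly like this (reconstructed from memory, so double-check the full resource list against your version of the chart):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-reader
rules:
- apiGroups: ['*']   # the chart originally restricted this
  resources: ['nodes', 'pods', 'services', 'endpoints', 'replicasets', 'replicationcontrollers']   # added the last two
  verbs: ['get', 'watch', 'list']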