How to set up multicluster (single control plane, same VPC) in EKS

Hi all,

I’ve been banging my head against this for weeks and I still can’t get a multicluster setup working with Istio 1.6.1 using a shared control plane on the same VPC. I’ve tried literally everything I can think of, but the number of issues is unbearable. Below are the two configurations I am using:

Controlplane cluster configuration

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istiocontrolplane
spec:
  profile: default
  addonComponents:
    tracing:
      enabled: true
    kiali:
      enabled: true
    prometheus:
      enabled: true

  values:
    gateways:
      istio-ingressgateway:
        env:
          ISTIO_META_NETWORK: "vpc-name"
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-xxxx"
          service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:xxxx:certificate/xxx"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # enable multicluster
    global:
      logging:
        level: "default:debug"
      multiCluster:
        clusterName: controlplane
      network: vpc-name

      meshNetworks:
        vpc-name:
          endpoints:
            - fromRegistry: Kubernetes
          gateways:
            - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
              port: 443
  components:
    pilot:
      k8s:
        serviceAnnotations:
          service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: "sg-XXXXX"
          service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:XXX:certificate/XXXXX"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
        service:
          type: LoadBalancer
          ports:
            - name: dns
              port: 53
              protocol: TCP
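
For completeness, this is roughly how I’m installing the control plane and registering the remote cluster with it. The file names and kube contexts are placeholders, and the create-remote-secret step is just my reading of the 1.6 shared control plane docs, so correct me if that part is off:

# install the control plane IstioOperator above (file name and contexts are placeholders)
istioctl install -f controlplane.yaml --context="${CTX_CONTROLPLANE}"

# give istiod access to the remote cluster's API server for endpoint discovery
istioctl x create-remote-secret --name remote --context="${CTX_REMOTE}" \
  | kubectl apply -f - --context="${CTX_CONTROLPLANE}"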

Remote cluster configuration

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: remote
  namespace: istio-system
spec:
  addonComponents:
    tracing:
      enabled: true
  values:
    global:
      jwtPolicy: first-party-jwt
      multiCluster:
        clusterName: remote
      network: vpc-name
      remotePilotAddress: internal-a4148c920dd834xxxxxxxxxxeu-west-1.elb.amazonaws.com
      remotePilotCreateSvcEndpoint: false

  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: false
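
I apply the remote configuration the same way and then look at the pods in istio-system; <istiod-remote-pod> below is a placeholder for the actual pod name:

# apply the remote IstioOperator above on the remote cluster
istioctl install -f remote.yaml --context="${CTX_REMOTE}"

# istiod-remote is the pod that never comes up; describe/logs show the failure
kubectl -n istio-system get pods --context="${CTX_REMOTE}"
kubectl -n istio-system describe pod <istiod-remote-pod> --context="${CTX_REMOTE}"
kubectl -n istio-system logs <istiod-remote-pod> --context="${CTX_REMOTE}"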

I’d like to use an internal load balancer for the communication, but I’m open to anything as long as it solves the issue. At the moment the istiod-remote pod is unable to start because I’m using a hostname for remotePilotAddress instead of an IP. The Istio 1.6.1 changelog claims this is supported, yet it doesn’t work for me. I also tried using an IP, but the pod still doesn’t come up.
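
In case it’s a name resolution or connectivity problem, this is the kind of check I’ve been running from inside the remote cluster. The hostname is the internal load balancer from the config above, and 15012 is only my understanding of the default istiod port, so take that part with a grain of salt:

# one-off busybox pod: check that the ELB hostname resolves and that the istiod port is reachable
kubectl run -it --rm netcheck --image=busybox --restart=Never --context="${CTX_REMOTE}" -- \
  sh -c 'nslookup internal-a4148c920dd834xxxxxxxxxxeu-west-1.elb.amazonaws.com && nc -zv internal-a4148c920dd834xxxxxxxxxxeu-west-1.elb.amazonaws.com 15012'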

What am I missing?

Thank you in advance