Background: 2 x AWS EKS clusters, Kubernetes version 1.14, platform version eks.9, Istio 1.5.0
I'm following this guide to set up the Shared control plane (multi-network) deployment, and got these errors while doing "Setup cluster 2".
Any tips? Thanks!
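For reference, LOCAL_GW_ADDR below is cluster1's ingress gateway address. The guide derives it with a jsonpath lookup on the istio-ingressgateway service; since EKS load balancers publish a hostname rather than an IP, I read .hostname instead of the guide's .ip (a sketch of that step adapted for EKS, not verbatim from the guide):

```
# Address of cluster1's istio-ingressgateway load balancer.
# EKS ELBs expose .hostname; the guide's example reads .ip.
$ export LOCAL_GW_ADDR=$(kubectl get svc --context=$CTX_CLUSTER1 \
    --selector=app=istio-ingressgateway -n istio-system \
    -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
```

Then the apply against cluster2: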
```
$ istioctl manifest apply --context=$CTX_CLUSTER2 \
  --set profile=remote \
  --set values.gateways.enabled=true \
  --set values.security.selfSigned=false \
  --set values.global.createRemoteSvcEndpoints=true \
  --set values.global.remotePilotCreateSvcEndpoint=true \
  --set values.global.remotePilotAddress=${LOCAL_GW_ADDR} \
  --set values.global.remotePolicyAddress=${LOCAL_GW_ADDR} \
  --set values.global.remoteTelemetryAddress=${LOCAL_GW_ADDR} \
  --set values.gateways.istio-ingressgateway.env.ISTIO_META_NETWORK="network2" \
  --set values.global.network="network2" \
  --set values.global.multiCluster.clusterName=${CLUSTER_NAME}

- Applying manifest for component Base...
2020-03-13T14:11:19.644688Z     error   installer   error running kubectl: exit status 1
✘ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
Finished applying manifest for component Pilot.
2020-03-13T14:11:29.235035Z     error   installer   Failed to wait for resource: resources not ready after 10m0s: services "istio-pilot" not found
- Applying manifest for component IngressGateways...
Finished applying manifest for component IngressGateways.
Component Base - manifest apply returned the following errors:
Error: error running kubectl: exit status 1
✘ Errors were logged during apply operation. Please check component installation logs above.
Error: failed to apply manifests: errors were logged during apply operation
```
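The "error running kubectl: exit status 1" lines don't show the underlying kubectl failure. My plan for digging further is to render the manifest locally and apply it by hand so kubectl reports the actual error (a rough sketch; istioctl manifest generate accepts the same --set flags as manifest apply):

```
# Render the remote profile with the same --set flags as above, then
# apply it directly so kubectl prints the real error, not just "exit status 1"
$ istioctl manifest generate --set profile=remote \
    --set values.global.multiCluster.clusterName=${CLUSTER_NAME} > remote.yaml  # ...plus the remaining --set flags
$ kubectl apply --context=$CTX_CLUSTER2 -f remote.yaml
```

Is that a sensible way to debug this, or is there a better option?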
P.S. Here is how I created the two EKS clusters, cluster1 and cluster2 (cluster1 shown; cluster2 is identical apart from --name):
```
$ eksctl create cluster \
  --name cluster1 \
  --region us-east-1 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --ssh-access \
  --ssh-public-key eks \
  --managed
```
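The CTX_CLUSTER1 / CTX_CLUSTER2 values in the commands above are the kubeconfig contexts eksctl registers, e.g.:

```
# List the context names eksctl wrote, then export them for the istioctl commands
$ kubectl config get-contexts -o name
$ export CTX_CLUSTER1=<cluster1 context name>
$ export CTX_CLUSTER2=<cluster2 context name>
```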