Issue installing 1.5 with istioctl

Hi!

I have an issue while deploying Istio 1.5.

With this configuration file:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  addonComponents:
    grafana:
      enabled: true
    kiali:
      enabled: true
    prometheus:
      enabled: true
    tracing:
      enabled: true
  components:
    galley:
      enabled: true
    telemetry:
      enabled: true
  values:
    global:
      mtls:
        enabled: true
      controlPlaneSecurityEnabled: true
      k8sIngress:
        enabled: true
        enableHttps: true
        gatewayName: ingressgateway
    grafana:
      storageClassName: "grafana-ssd-storage"
      persist: true
      accessMode: ReadWriteOnce
    gateways:
      istio-ingressgateway:
        sds:
          enabled: true
        env:
          ISTIO_META_HTTP10: '"1"'

Then

$ istioctl manifest apply -f manifest.yaml --verbose
proto: tag has too few fields: "-"
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
✔ Finished applying manifest for component Pilot.
  Waiting for resources to become ready...
  Waiting for resources to become ready...
- Applying manifest for component IngressGateways...
- Applying manifest for component Galley...
- Applying manifest for component Telemetry...
- Applying manifest for component AddonComponents...
✔ Finished applying manifest for component Galley.
2020-03-20T18:45:54.566657Z	error	installer	error running kubectl: exit status 1
✘ Finished applying manifest for component AddonComponents.
2020-03-20T18:46:48.597062Z	error	installer	error running kubectl: exit status 1
✘ Finished applying manifest for component IngressGateways.

It gets stuck there.

If I check the running pods:

NAME                                   READY   STATUS              RESTARTS   AGE
grafana-5cc7f86765-j8jth               1/1     Running             0          8m31s
istio-galley-697658d949-qvxt6          0/2     ContainerCreating   0          9m5s
istio-ingressgateway-68f869776-m5r7c   1/1     Running             0          7m36s
istio-tracing-8584b4d7f9-jr6dp         1/1     Running             0          8m31s
istiod-6b7d5b54b7-kkpwq                1/1     Running             0          9m17s
kiali-76f556db6d-nqtg7                 1/1     Running             0          8m30s
prometheus-7fd55497dd-tw77h            2/2     Running             0          8m30s

The events:

Events:
  Type     Reason       Age                  From                                                      Message
  ----     ------       ----                 ----                                                      -------
  Normal   Scheduled    10m                  default-scheduler                                         Successfully assigned istio-system/istio-galley-697658d949-qvxt6 to gke-sandbox-0320-n1-standard-4-hd-9d67a45c-q4q4
  Warning  FailedMount  8m2s                 kubelet, gke-sandbox-0320-n1-standard-4-hd-9d67a45c-q4q4  Unable to attach or mount volumes: unmounted volumes=[istio-certs], unattached volumes=[istio-certs envoy-config config mesh-config istio-galley-service-account-token-kzmz5]: timed out waiting for the condition
  Warning  FailedMount  5m43s                kubelet, gke-sandbox-0320-n1-standard-4-hd-9d67a45c-q4q4  Unable to attach or mount volumes: unmounted volumes=[istio-certs], unattached volumes=[config mesh-config istio-galley-service-account-token-kzmz5 istio-certs envoy-config]: timed out waiting for the condition
  Warning  FailedMount  3m28s                kubelet, gke-sandbox-0320-n1-standard-4-hd-9d67a45c-q4q4  Unable to attach or mount volumes: unmounted volumes=[istio-certs], unattached volumes=[istio-galley-service-account-token-kzmz5 istio-certs envoy-config config mesh-config]: timed out waiting for the condition
  Warning  FailedMount  111s (x12 over 10m)  kubelet, gke-sandbox-0320-n1-standard-4-hd-9d67a45c-q4q4  MountVolume.SetUp failed for volume "istio-certs" : secret "istio.istio-galley-service-account" not found
  Warning  FailedMount  72s                  kubelet, gke-sandbox-0320-n1-standard-4-hd-9d67a45c-q4q4  Unable to attach or mount volumes: unmounted volumes=[istio-certs], unattached volumes=[mesh-config istio-galley-service-account-token-kzmz5 istio-certs envoy-config config]: timed out waiting for the condition

Service accounts:

kubectl get serviceAccount                          
NAME                                   SECRETS   AGE
default                                1         14m
istio-galley-service-account           1         13m
istio-ingressgateway-service-account   1         13m
istio-mixer-service-account            1         13m
istio-reader-service-account           1         13m
istiod-service-account                 1         13m
kiali-service-account                  1         13m
prometheus                             1         13m

I'm trying to find the issue. Do I have something wrong in the manifest?

Thanks!

Take a look at these:

The API server must be running with the ServiceAccountTokenVolumeProjection parameters…
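Concretely, token volume projection means the kube-apiserver was started with flags along these lines (the key paths are assumptions and vary by distribution):

```
--service-account-issuer=kubernetes.default.svc
--service-account-signing-key-file=/etc/kubernetes/pki/sa.key
--service-account-api-audiences=kubernetes.default.svc
```

If these flags are missing, projected service account token volumes never mount and pods that depend on them stay in ContainerCreating.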

I had a similar issue. It was resolved when I added cert-manager to my mesh.

I have the same issue. I followed the steps shown here, which fixed the istiod deployment in general, but now I'm having the same issue with Galley.

Update: it seems that if you want to install Galley you also need to install Citadel. After installing Citadel, Galley started working without any other changes.
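For reference, enabling Citadel alongside Galley in the IstioOperator spec looks like this (a sketch assuming the Istio 1.5 component names; Citadel creates the `istio.<service-account>` cert secrets that the Galley pod was waiting on):

```
spec:
  components:
    citadel:
      enabled: true
    galley:
      enabled: true
```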

@murbano Galley and Citadel are part of istiod now. I'm curious to know why you are enabling them explicitly and want the separate pods?

In my case, the Galley metrics aren't exposed in Prometheus unless I manually enable the extra pod in the IstioOperator CRD.

Yeah, I'll just keep 1.4 because I didn't have time to continue. I'll try again in more depth and let you know what I did wrong.

Thanks all

Here is what I did to make Istio 1.5 work with Kubernetes 1.16.

Export the existing config:

kubeadm config view > kube-api.yaml

Add these flags under the apiServer section of the exported config:

  apiServer:
    extraArgs:
      service-account-issuer: kubernetes.default.svc
      service-account-signing-key-file: /etc/kubernetes/pki/sa.key

Patch the k8s control plane (master1):
kubeadm upgrade diff v1.16.7 --config kube-api.yaml
kubeadm upgrade apply v1.16.7 --config kube-api.yaml

Roll out to the other control plane nodes:
kubeadm upgrade node

Then this command should give you some output rather than being blank:
kubectl get --raw /api/v1 | jq '.resources[] | select(.name | index("serviceaccounts/token"))'
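On a cluster with token projection enabled, the output should look roughly like this (exact fields vary by Kubernetes version):

```
{
  "name": "serviceaccounts/token",
  "singularName": "",
  "namespaced": true,
  "group": "authentication.k8s.io",
  "version": "v1",
  "kind": "TokenRequest",
  "verbs": ["create"]
}
```

If the command prints nothing, the TokenRequest subresource is not available and istiod's projected token volumes will fail to mount.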