Istio-telemetry pod goes into CrashLoopBackOff

Hello Everyone !

During the installation of Istio 1.4.6, the istio-telemetry pod goes into CrashLoopBackOff.
I used minikube v1.10.1 and Kubernetes v1.18.2 with the demo profile. How can I tackle this? Any help would be appreciated.
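For reference, the setup was roughly this (the exact minikube flags are from memory, so treat this as a sketch):

minikube start --driver=virtualbox
istioctl manifest apply --set profile=demo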

NAME                                      READY   STATUS             RESTARTS   AGE
grafana-b64886fff-bs7lk                   1/1     Running            0          14m
istio-citadel-7869c8c6cb-cp52n            1/1     Running            0          14m
istio-egressgateway-5969658c7f-k9kxm      1/1     Running            0          14m
istio-galley-675bc47b9d-4zqlj             1/1     Running            0          14m
istio-ingressgateway-59f54d89fc-vqdv9     1/1     Running            0          14m
istio-pilot-695d8ddf5-vwsj2               1/1     Running            0          14m
istio-policy-86f5d85565-jhw2q             1/1     Running            1          14m
istio-sidecar-injector-77497cf9d8-n5vkh   1/1     Running            0          14m
istio-telemetry-767bf8bf6c-w4d4v          0/1     CrashLoopBackOff   7          14m
istio-tracing-857496f799-5rl48            1/1     Running            0          14m
kiali-579dd86496-hvw76                    1/1     Running            0          14m
prometheus-946f9f9d8-vjb8d                1/1     Running            0          14m

Hi Shubham!
Look at the “Events” section after executing the command
kubectl describe pod -n istio-system
there should be some useful info on what happened.
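For example, using the pod name from your listing, something like:
kubectl describe pod istio-telemetry-767bf8bf6c-w4d4v -n istio-system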

I did this. The output is:

Events:
  Type     Reason   Age                   From               Message
  ----     ------   ----                  ----               -------
  Warning  BackOff  98s (x494 over 106m)  kubelet, minikube  Back-off restarting failed container

I don’t know what this means. Any help?

OK, let’s try to get logs from the pod with the command
kubectl get logs --previous (help)

Hi Mike,
When I ran kubectl logs -n istio-system, it showed nothing.

Sorry for the incorrect command.
The link I posted has a set of useful commands, including kubectl logs usage. The name of the failing pod should be passed as the next parameter to that command.
Like:
kubectl logs istio-telemetry-767bf8bf6c-w4d4v -n istio-system --previous

Don’t be sorry.
I ran this command but forgot to paste it here (don’t know how):
kubectl logs istio-telemetry-767bf8bf6c-rd4nk -n istio-system and also with --previous

but it showed nothing

Fine. Anyway, we need to get all possible info from this pod. Could you please share the complete output of kubectl describe pod istio-telemetry-xxx-xxx -n istio-system, not just the Events section?

Of course. Here it is.

Name:         istio-telemetry-767bf8bf6c-rd4nk
Namespace:    istio-system
Priority:     0
Node:         minikube/172.17.0.3
Start Time:   Wed, 13 May 2020 10:06:19 +0000
Labels:       app=telemetry
              istio=mixer
              istio-mixer-type=telemetry
              pod-template-hash=767bf8bf6c
Annotations:  sidecar.istio.io/inject: false
Status:       Running
IP:           172.18.0.11
IPs:
  IP:           172.18.0.11
Controlled By:  ReplicaSet/istio-telemetry-767bf8bf6c
Containers:
  mixer:
    Container ID:  docker://a46a0e0ee1946037a55588c8a4fc1a18e4bc6808ca61658379c44cb23b5fca1f
    Image:         docker.io/istio/mixer:1.4.6
    Image ID:      docker-pullable://istio/mixer@sha256:3e28dec5103561c478b2a382c730bcbe424d6fc1e3e7335dcdacbb3b7441a5d7
    Ports:         9091/TCP, 15014/TCP, 42422/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      --monitoringPort=15014
      --address
      tcp://0.0.0.0:9091
      --log_output_level=default:info
      --configStoreURL=mcp://istio-galley.istio-system.svc:9901
      --configDefaultNamespace=istio-system
      --useAdapterCRDs=false
      --useTemplateCRDs=false
      --trace_zipkin_url=http://zipkin.istio-system:9411/api/v1/spans
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:415: setting cgroup config for procHooks process caused \\\"failed to write \\\\\\\"480000\\\\\\\" to \\\\\\\"/sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod5eb9f7a7-089f-454f-a23c-d4cfe730cd8f/a46a0e0ee1946037a55588c8a4fc1a18e4bc6808ca61658379c44cb23b5fca1f/cpu.cfs_quota_us\\\\\\\": write /sys/fs/cgroup/cpu,cpuacct/kubepods/burstable/pod5eb9f7a7-089f-454f-a23c-d4cfe730cd8f/a46a0e0ee1946037a55588c8a4fc1a18e4bc6808ca61658379c44cb23b5fca1f/cpu.cfs_quota_us: invalid argument\\\"\"": unknown
      Exit Code:    128
      Started:      Thu, 14 May 2020 11:48:29 +0000
      Finished:     Thu, 14 May 2020 11:48:29 +0000
    Ready:          False
    Restart Count:  312
    Limits:
      cpu:     4800m
      memory:  4G
    Requests:
      cpu:     50m
      memory:  100Mi
    Liveness:  http-get http://:15014/version delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:
      POD_NAMESPACE:  istio-system (v1:metadata.namespace)
      GOMAXPROCS:     6
    Mounts:
      /etc/certs from istio-certs (ro)
      /sock from uds-socket (rw)
      /var/run/secrets/istio.io/telemetry/adapter from telemetry-adapter-secret (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from istio-mixer-service-account-token-2hx6b (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  istio-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio.istio-mixer-service-account
    Optional:    true
  uds-socket:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  telemetry-adapter-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  telemetry-adapter-secret
    Optional:    true
  telemetry-envoy-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      telemetry-envoy-config
    Optional:  false
  istio-mixer-service-account-token-2hx6b:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  istio-mixer-service-account-token-2hx6b
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                     From               Message
  ----     ------   ----                    ----               -------
  Warning  BackOff  2m39s (x7200 over 25h)  kubelet, minikube  Back-off restarting failed container

If I understand correctly, the error message is telling us:
setting cgroup config for procHooks process caused \\\"failed to write \\\\\\\"480000\\\\\\\"
it seems the error is here
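My guess (not verified): 480000 is the CFS quota derived from the mixer container’s 4800m CPU limit (4.8 cores × the default 100000 µs period = 480000 µs), and the kernel rejects the write, possibly because of how the cgroups are nested inside the minikube node. You could check how much CPU the node actually reports with:
kubectl describe node minikube | grep -i cpu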

Do you use custom configs to install Istio?

No, I don’t use any custom configs. I just followed the docs and installed Istio using the demo profile (istioctl manifest apply --set profile=demo). It is the first time I’ve seen this error; I installed Istio (same version) before and never got it.
So is there some resource problem?
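If it is a resource problem, would overriding the telemetry CPU limit at install time be the way to go? Something like this (just a guess based on the 1.4 Helm values layout, I haven’t tried it):
istioctl manifest apply --set profile=demo --set values.mixer.telemetry.resources.limits.cpu=2000m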

Not sure if this is still a problem, but FWIW I also hit the exact same error when my minikube had been started with the Docker driver. After removing the --driver=docker option from the minikube start command and allowing the default hyperkit driver to be used (I’m on macOS), the problem went away.
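For what it’s worth, the sequence I used was roughly:
minikube delete
minikube start
istioctl manifest apply --set profile=demo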

Hi @George_Harley

AFAIK the hyperkit driver is only for macOS; the default driver varies with the OS. I am on Ubuntu and used the VirtualBox driver.
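One thing I might try next (just an idea, not verified) is giving the VirtualBox VM more CPUs, since the mixer limit is 4.8 cores:
minikube start --driver=virtualbox --cpus=6 --memory=8192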