Understanding the concurrency parameter in ProxyConfig. The official documentation defines concurrency as:
The number of worker threads to run. If unset, this will be automatically determined based on CPU requests/limits. If set to 0, all cores on the machine will be used. Default is 2 worker threads.
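For reference, the same field can also be set mesh-wide through the default ProxyConfig rather than per pod. A minimal sketch of an IstioOperator overlay, with an illustrative resource name not taken from the original post:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: concurrency-example      # illustrative name, assumption
  namespace: istio-system
spec:
  meshConfig:
    defaultConfig:
      concurrency: 2             # mesh-wide default; a per-pod proxy.istio.io/config annotation overrides it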
Question: When we specify the concurrency parameter, do the proxy's worker threads use the CPU allocated to the sidecar container, or the CPU of the node hosting the pod (i.e., CPU beyond what is allocated to the pod)?
Config:
annotations:
  sidecar.istio.io/proxyCPULimit: "4"
  sidecar.istio.io/proxyCPU: "100m"
  sidecar.istio.io/proxyMemoryLimit: "1Gi"
  sidecar.istio.io/proxyMemory: "128Mi"
  proxy.istio.io/config: |
    concurrency: 20
With the above configuration set at the pod/deployment level, does concurrency use the CPU allocated to the sidecar (a limit of 4 CPUs in this case), or does it use the CPU of the Kubernetes node?
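For completeness, this is roughly how the annotations above sit in a Deployment's pod template; the app name and image are illustrative assumptions, not from the original post:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                           # illustrative name, assumption
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
      annotations:
        sidecar.istio.io/proxyCPU: "100m"
        sidecar.istio.io/proxyCPULimit: "4"
        sidecar.istio.io/proxyMemory: "128Mi"
        sidecar.istio.io/proxyMemoryLimit: "1Gi"
        proxy.istio.io/config: |
          concurrency: 20
    spec:
      containers:
        - name: sample-app
          image: nginx:1.25                  # illustrative image, assumption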