Understanding the concurrency parameter in proxy config

From the official documentation, the concurrency parameter of ProxyConfig is defined as:
The number of worker threads to run. If unset, this will be automatically determined based on CPU requests/limits. If set to 0, all cores on the machine will be used. Default is 2 worker threads.
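Read literally, that gives three distinct modes. An illustrative sketch (the values here are examples, not recommendations):

    proxy.istio.io/config: |
        # concurrency unset -> derived from the sidecar's CPU requests/limits
        # concurrency: 0    -> use all cores on the machine
        concurrency: 4      # explicit value -> exactly 4 worker threads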

Question: When we specify the concurrency parameter, does it utilize the CPU allocated to the sidecar, or the CPU of the node hosting the sidecar (i.e., CPU beyond what is allocated to the pod)?

Config:

annotations:
    sidecar.istio.io/proxyCPULimit: "4"
    sidecar.istio.io/proxyCPU: "100m"
    sidecar.istio.io/proxyMemoryLimit: "1Gi"
    sidecar.istio.io/proxyMemory: "128Mi"
    proxy.istio.io/config: |
        concurrency: 20

In the above configuration, set at the pod/deployment level, does concurrency utilize the CPU allocated to the sidecar (4 in this case), or the CPU of the Kubernetes node?

According to istio/inject.go (istio/istio on GitHub, at commit 72e3587fe6212df7765f04e94e61dca2204b9259), as well as the Istio ProxyConfig reference, you have to explicitly set

    proxy.istio.io/config: |
        concurrency: 0

if you want the container proxyCPULimit/proxyCPU to apply (if both the limit and the request are set, the limit takes precedence).
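Applied to the configuration from the question, a sketch reusing its values (assuming the behavior described above) would be:

    annotations:
        sidecar.istio.io/proxyCPULimit: "4"
        sidecar.istio.io/proxyCPU: "100m"
        proxy.istio.io/config: |
            # with concurrency: 0 the worker count is derived from the
            # container's CPU settings; the limit ("4") takes precedence
            # over the request ("100m"), so Envoy gets 4 worker threads
            concurrency: 0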

If you don’t specify concurrency, it defaults to 2 (you can see this behavior implemented in that function).
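For reference, a simplified sketch of the injected istio-proxy container in that default case; the exact args vary across Istio versions, so treat this as an assumption rather than real injector output:

    containers:
    - name: istio-proxy
      args:
        - proxy
        - sidecar
        - --concurrency
        - "2"             # default worker count when concurrency is unset
      resources:
        limits:
          cpu: "4"        # does not influence the thread count in this case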

(no idea why the Istio Global Mesh Options documentation says something different)