"http://127.0.0.1:15000/stats?usedonly" response takes a long time

Hi.

I'm currently evaluating Istio 1.6.3.
It appears to be working fine overall, but certain Pods frequently emit the following log from the istio-proxy container:

istio-proxy 2020-06-25T00:24:28.709648Z    warn    Envoy proxy is NOT ready: failed to get readiness stats: Get "http://127.0.0.1:15000/stats?usedonly&filter=^(server.state|listener_manager.workers_started)": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Accessing that endpoint directly from the istio-proxy container does indeed take well over 1s:

time curl "http://127.0.0.1:15000/stats?usedonly&filter=%5E%28server.state%7Clistener_manager.workers_started%29"

real	0m5.600s
user	0m0.000s
sys	0m0.011s

To investigate, I removed the filter parameter from the query and accessed the endpoint again:

curl "http://127.0.0.1:15000/stats?usedonly"

More than 1,000 records like the following were returned:

reporter=.=source;.;source_workload=.=foo-main-service;.;source_workload_namespace=.=foo;.;source_principal=.=unknown;.;source_app=.=foo-main-service;.;source_version=.=unknown;.;source_canonical_service=.=foo-main-service;.;source_canonical_revision=.=latest;.;destination_workload=.=bar-master-service;.;destination_workload_namespace=.=bar;.;destination_principal=.=unknown;.;destination_app=.=bar-master-service;.;destination_version=.=unknown;.;destination_service=.=bar-master-service.bar.svc.cluster.local;.;destination_service_name=.=bar-master-service;.;destination_service_namespace=.=bar;.;destination_canonical_service=.=bar-master-service;.;destination_canonical_revision=.=latest;.;request_protocol=.=grpc;.;response_code=.=200;.;grpc_response_status=.=0;.;response_flags=.=-;.;connection_security_policy=.=unknown;.;_istio_response_bytes: P0(nan,330.0) P25(nan,545.636) P50(nan,860.983) P75(nan,2904.91) P90(nan,2961.97) P95(nan,2980.98) P99(nan,2996.2) P99.5(nan,2998.1) P99.9(nan,2999.62) P100(nan,3000.0)

I think this is the cause, but what is the solution?
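To quantify this, the returned stat entries can be counted with `wc -l`, since the admin endpoint emits one stat per line. A sketch (the two sample lines stand in for the real output, which on the affected pod would be fetched with `curl -s "http://127.0.0.1:15000/stats?usedonly"`):

```shell
# Stand-in for the admin endpoint's output; on the pod this would be:
#   stats=$(curl -s "http://127.0.0.1:15000/stats?usedonly")
stats='server.state: 0
listener_manager.workers_started: 1'

# Each stat is one line, so the line count approximates the number of entries.
printf '%s\n' "$stats" | wc -l   # here: 2; on the affected pod: >1000
```

Even with the `filter` regex applied, Envoy still iterates over every stat to evaluate the match, so a very large stat set slows the filtered query as well.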

I worked around this problem by setting spec.values.telemetry.v2.prometheus.enabled=false in the IstioOperator configuration:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: istio-operator
spec:
...
  values:
...
    telemetry:
      v2:
        prometheus:
          enabled: false
...
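The overlay above can then be applied with istioctl (a sketch; `istio-operator.yaml` is a hypothetical filename for the manifest above, and this assumes istioctl 1.6+, where the `install` command is available):

```shell
# Apply the IstioOperator overlay that disables the v2 Prometheus telemetry filter.
# "istio-operator.yaml" is an illustrative name for the manifest shown above.
istioctl install -f istio-operator.yaml
```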

@11110 I am also facing the same issue; some of my pods are giving this error. I can try the solution you provided, but will that disable metrics?