Istio-proxy sidecar memory leak

Hello

We have a fairly specific case: we've enabled Istio sidecar injection for our nginx ingress controllers, which serve incoming requests to the cluster. We did this in order to run some tests, look at metrics, etc.

So at the moment we have Istio deployed in the cluster, with injection enabled for the nginx ingress controllers only.
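
For completeness, injection was enabled per namespace, roughly along these lines (a sketch of the standard label approach; the exact commands and manifests we used may have differed slightly):

kubectl label namespace nginx-ingress-private istio-injection=enabled
kubectl label namespace nginx-ingress-public istio-injection=enabled
# restart the ingress pods so they get the sidecar injected
kubectl rollout restart deployment -n nginx-ingress-private
kubectl rollout restart deployment -n nginx-ingress-public

istioctl proxy-status shows all of the proxies as synced: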

NAME                                                                    CLUSTER                         CDS        LDS        EDS        RDS          ECDS         ISTIOD                      VERSION
istio-gateway-east-west-768dfd9ff6-2vrf4.istio-gateway-east-west        *                               SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-79676db54c-plvn5     1.15.3
istio-gateway-east-west-768dfd9ff6-6jmbb.istio-gateway-east-west        *                               SYNCED     SYNCED     SYNCED     NOT SENT     NOT SENT     istiod-79676db54c-plvn5     1.15.3
istio-gateway-north-south-7854f58cd-2dsbj.istio-gateway-north-south     *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
istio-gateway-north-south-7854f58cd-nh5ns.istio-gateway-north-south     *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
nginx-ingress-private-74b8685f89-55jr7.nginx-ingress-private            *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
nginx-ingress-private-74b8685f89-hc2zc.nginx-ingress-private            *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
nginx-ingress-private-74b8685f89-wqlnz.nginx-ingress-private            *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
nginx-ingress-public-765cdd9f78-2ntqt.nginx-ingress-public              *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
nginx-ingress-public-765cdd9f78-k9rtd.nginx-ingress-public              *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3
nginx-ingress-public-765cdd9f78-sqvn7.nginx-ingress-public              *                               SYNCED     SYNCED     SYNCED     SYNCED       NOT SENT     istiod-79676db54c-plvn5     1.15.3

The issue we're facing looks like a memory leak in the istio-proxy sidecar of the private ingress controller pods.

At the same time, the public one works perfectly fine.
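
For reference, this is roughly how the growth can be watched per container (a sketch assuming metrics-server is installed; pod names are the ones from the listing above):

# compare istio-proxy memory between the private and public ingress pods
kubectl top pod -n nginx-ingress-private --containers
kubectl top pod -n nginx-ingress-public --containers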

I've tried to follow the instructions from this document, but the output doesn't contain any information that helps analyse the root cause:

# enable the Envoy heap profiler, let it run for 5 minutes, then disable it
kubectl exec -it -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7 -c istio-proxy -- curl -X POST -s "http://localhost:15000/heapprofiler?enable=y"
sleep 300
kubectl exec -it -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7 -c istio-proxy -- curl -X POST -s "http://localhost:15000/heapprofiler?enable=n"

# copy the heap dumps, the shared libraries and the envoy binary locally for analysis
rm -rf /tmp/envoy
kubectl cp -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7:/var/lib/istio/data /tmp/envoy -c istio-proxy
kubectl cp -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7:/lib/x86_64-linux-gnu /tmp/envoy/lib -c istio-proxy
kubectl cp -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7:/usr/local/bin/envoy /tmp/envoy/lib/envoy -c istio-proxy

pprof /tmp/envoy/lib/envoy /tmp/envoy/envoy.prof.0001.heap
(pprof) top 10
Total: 11.7 MB
    10.0  85.3%  85.3%     10.0  85.3% 0000563f5fa0e328
     0.6   5.5%  90.8%      0.6   5.5% 0000563f5fc78558
     0.4   3.1%  93.9%      0.4   3.1% 0000563f5fb58b3d
     0.1   1.1%  95.0%      0.1   1.1% 0000563f5fa0cca1
     0.1   0.6%  95.6%      0.1   0.6% 0000563f5fa562b7
     0.0   0.3%  96.0%      0.0   0.3% 0000563f5fa383d1
     0.0   0.3%  96.3%      0.0   0.3% 0000563f5fb4e490
     0.0   0.3%  96.6%      0.0   0.3% 0000563f5fb51162
     0.0   0.3%  96.9%      0.0   0.3% 0000563f5fb4cb3f
     0.0   0.3%  97.1%      0.0   0.3% 0000563f5f753485

The output only contains raw addresses, with no symbol names showing what is actually holding the memory, and I can't figure out whether I did the heap collection wrong or something else is off.
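
My understanding is that bare addresses like these usually mean pprof couldn't symbolize the binary (for example because it is stripped, or because the copied libraries aren't being picked up). This is a rough sketch of what I plan to check next; the --lib_prefix flag is from the gperftools pprof, so it may not apply to other pprof builds:

# check whether the copied binary has any symbols at all (nm reports "no symbols" if it was stripped)
file /tmp/envoy/lib/envoy
nm /tmp/envoy/lib/envoy | head

# re-run pprof, pointing it explicitly at the copied shared libraries
pprof --lib_prefix=/tmp/envoy/lib /tmp/envoy/lib/envoy /tmp/envoy/envoy.prof.0001.heap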

I do understand that this is a dumb question, but due to my lack of knowledge I have no clue how to analyse this further, and I would really appreciate it if anyone could give me a hand with a few ideas about what might be wrong.
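
In the meantime, here is a sketch of how I can also pull Envoy's own memory accounting from the admin port of the same pod (the /memory endpoint and the server.memory_* stats), in case those numbers help narrow things down:

kubectl exec -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7 -c istio-proxy -- curl -s "http://localhost:15000/memory"
kubectl exec -n nginx-ingress-private nginx-ingress-private-74b8685f89-55jr7 -c istio-proxy -- curl -s "http://localhost:15000/stats" | grep server.memory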

istioctl version
client version: 1.16.0
control plane version: 1.15.3
data plane version: 1.15.3