Ingress gateway connection leak causes OOM-killed pod

We are currently running Istio v1.12.3 and seeing the number of open connections on the ingress gateway grow steadily over time.
I would expect the connection count to roughly follow the request traffic, but it keeps climbing regardless of load.

Below are the open socket counts between the ingress gateway and each destination:

istio-proxy@istio-ingressgateway-5459f6c59-qsxhl:/$ lsof -i -a -p 19 | grep frontend | wc -l
1791
envoy 19 istio-proxy 4770u IPv4 395129722 0t0 TCP istio-ingressgateway-5459f6c59-qsxhl:38926->10-105-29-12.dictation-frontend.frontend.svc.cluster.local:5080 (ESTABLISHED)
envoy 19 istio-proxy 4772u IPv4 395130210 0t0 TCP istio-ingressgateway-5459f6c59-qsxhl:57144->10-105-22-88.dictation-frontend.frontend.svc.cluster.local:5080 (ESTABLISHED)
envoy 19 istio-proxy 4775u IPv4 395188423 0t0 TCP istio-ingressgateway-5459f6c59-qsxhl:35366->10-105-19-18.dictation-frontend.frontend.svc.cluster.local:5080 (ESTABLISHED)

istio-proxy@istio-ingressgateway-5459f6c59-qsxhl:/$ lsof -i -a -p 19 | grep monitoring | wc -l
2515
envoy 19 istio-proxy 4804u IPv4 395205332 0t0 TCP istio-ingressgateway-5459f6c59-qsxhl:8443->10-105-4-91.promthanos-monitoring-node-exporter-svc.monitoring.svc.cluster.local:28883 (ESTABLISHED)
envoy 19 istio-proxy 4805u IPv4 395244756 0t0 TCP istio-ingressgateway-5459f6c59-qsxhl:8443->10-105-1-247.promthanos-monitoring-node-exporter-svc.monitoring.svc.cluster.local:34150 (ESTABLISHED)
envoy 19 istio-proxy 4806u IPv4 395222202 0t0 TCP istio-ingressgateway-5459f6c59-qsxhl:8443->10-105-10-142.promthanos-monitoring-node-exporter-svc.monitoring.svc.cluster.local:36640 (ESTABLISHED)
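
As a cross-check, the same growth can also be watched from Envoy's own gauges instead of lsof. The proxy admin endpoint listens on localhost:15000 inside the istio-proxy container; downstream_cx_active counts client-to-gateway connections per listener and upstream_cx_active counts gateway-to-backend connections per cluster (the exact listener/cluster prefixes will of course differ per mesh):

istio-proxy@istio-ingressgateway-5459f6c59-qsxhl:/$ curl -s localhost:15000/stats | grep -E 'downstream_cx_active|upstream_cx_active'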

To work around this, I applied an EnvoyFilter that tunes the Envoy HTTP connection manager so that stale connections get closed (idle_timeout, max_connection_duration):

Spec:
  Config Patches:
    Apply To:  NETWORK_FILTER
    Match:
      Listener:
        Filter Chain:
          Filter:
            Name:  envoy.filters.network.http_connection_manager
    Patch:
      Operation:  MERGE
      Value:
        Name:  envoy.filters.network.http_connection_manager
        typed_config:
          @type:  type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          common_http_protocol_options:
            idle_timeout:             5s
            max_connection_duration:  300s
  Workload Selector:
    Labels:
      Istio:  ingressgateway
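
For reference, the raw manifest behind the describe output above looks roughly like this (the resource name and namespace are placeholders; the spec mirrors what was applied):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ingressgateway-connection-timeouts   # placeholder name
  namespace: istio-system                     # namespace the gateway runs in
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          common_http_protocol_options:
            idle_timeout: 5s
            max_connection_duration: 300s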

Even though I have applied the above configuration, the same issue still happens.
It is also worth noting that this issue did not occur when we were on Istio v1.4.6.
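
In case it helps, whether the merge actually landed on the gateway can be checked from the live config, e.g. (assuming istioctl access and that the gateway runs in istio-system):

istioctl proxy-config listener istio-ingressgateway-5459f6c59-qsxhl -n istio-system -o json | grep -A2 max_connection_duration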

Does anybody have any ideas about this issue?