Configuration to avoid growing proxy memory for inaccessible services?

We have been using Istio in a production cluster with over 2,000 pods and 600 services across more than 100 namespaces. We have a multi-tenant setup where we create a new namespace (and resource quota) for each user’s workload, with a single Istio control plane for all namespaces. We are using this configuration for the proxy: https://github.com/astronomer/terraform-google-astronomer-cloud/blob/0c090d70a8e6db37321db57da0411c6caf66cd3b/locals.tf#L326-L339 . With that configuration, some proxy containers are now exceeding 80% memory utilization (100 * container_working_set_bytes / container_spec_memory_limit_bytes).

Sidecar memory has been growing as we gain more users (and therefore more services). Following this issue, I reduced sidecar concurrency: https://github.com/istio/istio/issues/8247 . Reducing concurrency to 1 improved memory usage. We have also raised the memory limit globally a few times, but we do not want to keep increasing it for every pod, because that increases the size of the resource quota (limit) each customer needs.

I would like to know if there is a filter, or something like a network policy, that we can use specifically to limit memory growth in the proxy containers. Our customers’ pods are not allowed to talk to each other (which we enforce with Calico-based network policies), so we are hoping there is a way to prevent the proxy container on an arbitrary pod in one customer’s namespace from growing in memory whenever new services are created in namespaces it cannot reach (i.e. services in a different customer’s namespace). There is only one namespace that every other namespace needs network access to; it is where our platform’s components reside. Am I missing an existing configuration option?

Thank you for your help.

Does this custom resource exist for the reason described above? https://istio.io/docs/reference/config/networking/sidecar/

" By default, Istio will program all sidecar proxies in the mesh with the necessary configuration required to reach every workload instance in the mesh, as well as accept traffic on all the ports associated with the workload. The Sidecar configuration provides a way to fine tune the set of ports, protocols that the proxy will accept when forwarding traffic to and from the workload. In addition, it is possible to restrict the set of services that the proxy can reach when forwarding outbound traffic from workload instances."

This sounds very promising; I just don’t know whether I should expect a memory improvement.
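For context, here is a minimal sketch of what a mesh-wide default Sidecar might look like; this assumes the root configuration namespace is istio-system (adjust for your install), and it is only an illustration of the idea, not our exact configuration:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: istio-system   # no workloadSelector + root namespace = mesh-wide default
spec:
  egress:
  - hosts:
    - "./*"                 # only services in the sidecar's own namespace
    - "istio-system/*"      # plus the Istio control plane namespace
```

A namespace can still override this default by defining its own Sidecar resource.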

I tested it, and I can confirm that egress rules on the Sidecar resource make a huge difference in my case.

EDIT: however, it should be noted that I am not using strict mutual TLS. I have more validation to do to make sure no functionality was lost.

This is the configuration we are going with: https://github.com/astronomer/helm.astronomer.io/pull/305

Note that if you hit the error “short names (non FQDN) are not allowed”, it is because you need to specify service names like this:
“namespace-name/service-name.namespace-name.svc.cluster.local”
not just
“namespace-name/service-name”
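As a hedged sketch, an egress section using that FQDN form might look like the following; all namespace and service names here are placeholders, not our real ones:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: customer-namespace        # placeholder tenant namespace
spec:
  egress:
  - hosts:
    - "./*"                            # the tenant's own services
    - "istio-system/*"                 # the Istio control plane namespace
    # a specific service must be written as namespace/FQDN, not a short name:
    - "platform-namespace/platform-service.platform-namespace.svc.cluster.local"
```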

2,000 pods is a small cluster for me. What I find works best:
First: use Istio namespace isolation (the Sidecar resource). It helps reduce configuration time, and it makes even more sense in a multi-tenant setup.
Second: put resource limits on your sidecars with the appropriate pod annotations (a sketch follows below). My sidecars rarely pass the 37 MB mark.
Third: there are still a few memory leaks in the sidecars yet to be handled, so keep that in mind.
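A rough sketch of the kind of annotations meant in the second point, set on a Deployment’s pod template; the values are only examples, the workload name is hypothetical, and the limit annotation may only be available in newer Istio releases:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-workload                           # hypothetical workload
spec:
  selector:
    matchLabels:
      app: example-workload
  template:
    metadata:
      labels:
        app: example-workload
      annotations:
        sidecar.istio.io/proxyCPU: "100m"          # sidecar CPU request
        sidecar.istio.io/proxyMemory: "64Mi"       # sidecar memory request
        sidecar.istio.io/proxyMemoryLimit: "128Mi" # sidecar memory limit (newer Istio versions)
    spec:
      containers:
      - name: app
        image: nginx                               # placeholder application container
```

The injected istio-proxy container then gets these requests/limits instead of the mesh-wide defaults, so the per-pod budget can be tuned without raising the global limit for everyone.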