How to use Envoy Rate Limiting with Istio-based Rate Limiting Service

Unfortunately I am not very familiar with Envoy filters, but as far as I can see, all examples of rate limiting with Envoy filters are variations of the example at Istio / Enabling Rate Limits using Envoy:

          name: envoy.filters.http.ratelimit
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
            # domain can be anything! Match it to the ratelimiter service config
            domain: productpage-ratelimit
            failure_mode_deny: true
            timeout: 10s
            rate_limit_service:
              grpc_service:
                envoy_grpc:
                  cluster_name: rate_limit_cluster
              transport_api_version: V3

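For context, in the Istio docs example this filter snippet sits inside an EnvoyFilter applied to the ingress gateway, roughly like this (resource name and workload selector taken from the docs example, adjust to your setup):

    apiVersion: networking.istio.io/v1alpha3
    kind: EnvoyFilter
    metadata:
      name: filter-ratelimit
      namespace: istio-system
    spec:
      workloadSelector:
        labels:
          istio: ingressgateway
      configPatches:
        - applyTo: HTTP_FILTER
          match:
            context: GATEWAY
            listener:
              filterChain:
                filter:
                  name: envoy.filters.network.http_connection_manager
                  subFilter:
                    name: envoy.filters.http.router
          patch:
            # insert the rate limit filter before the router filter
            operation: INSERT_BEFORE
            value:
              name: envoy.filters.http.ratelimit
              # (typed_config as in the snippet above)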
Here a cluster named rate_limit_cluster is referenced, and all the examples then use something like

    - applyTo: CLUSTER
      match:
        cluster:
          service: ratelimit.default.svc.cluster.local
      patch:
        operation: ADD
        # Adds the rate limit service cluster for rate limit service defined in step 1.
        value:
          name: rate_limit_cluster
          type: STRICT_DNS
          connect_timeout: 10s
          lb_policy: ROUND_ROBIN
          http2_protocol_options: {}
          load_assignment:
            cluster_name: rate_limit_cluster
            endpoints:
            - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: ratelimit.default.svc.cluster.local
                      port_value: 8081

to create a “Cluster”…

However, when using that rate limiting service with Istio, there is already some configuration for the service, like:

istioctl proxy-config all istio-ingressgateway-5f5f67cdd5-46r2v  -o json
{
 "version_info": "2021-11-16T13:08:53Z/1270",
 "cluster": {
  "@type": "type.googleapis.com/envoy.config.cluster.v3.Cluster",
  "name": "outbound|8081||ratelimit.api.svc.cluster.local",
  "type": "EDS",

Is it possible to reference that cluster in the EnvoyFilter instead of creating a new one with STRICT_DNS?

As far as I can observe, this STRICT_DNS config circumvents Istio's gRPC load balancing: when auto-scaling the rate limiting service, gRPC requests are distributed quite unevenly between the ratelimit pods. Envoy seems to use long-lived gRPC connections that stick to certain pods, which then become overloaded.

(There is a similar comment in "how to configure envoyfilter to support ratelimit in istio 1.5.0?" · Issue #22068 · istio/istio · GitHub.)
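Note that the Istio-managed cluster name follows the outbound|&lt;port&gt;||&lt;fqdn&gt; convention, and it can also be looked up directly instead of grepping the full dump (pod name as in my dump above):

    istioctl proxy-config cluster istio-ingressgateway-5f5f67cdd5-46r2v \
      --fqdn ratelimit.api.svc.cluster.local -o json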

So now I tried to do that:

                envoy_grpc:
                  authority: ratelimit.api.svc.cluster.local:8081
                  cluster_name: outbound|8081||ratelimit.api.svc.cluster.local

and at first sight it seems to be working.
Is this a good approach, or are there any caveats?
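For reference, the complete rate_limit_service section now reads as follows (port and namespace as in my setup above); the separate CLUSTER patch is dropped entirely:

    rate_limit_service:
      grpc_service:
        envoy_grpc:
          # reference the cluster Istio already manages for the ratelimit service
          authority: ratelimit.api.svc.cluster.local:8081
          cluster_name: outbound|8081||ratelimit.api.svc.cluster.local
      transport_api_version: V3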