I’m working with EnvoyFilters to set up rate limiting on a cluster.
I have the following config patch to add the rate limit service as a cluster:
configPatches:
- applyTo: CLUSTER
  match:
    cluster:
      # kubernetes dns of your ratelimit service
      service: envoy-limitsvc.istio-system.svc.cluster.local
  patch:
    operation: ADD
    value:
      name: rate_limit_cluster
      type: STRICT_DNS
      connect_timeout: 10s
      lb_policy: ROUND_ROBIN
      http2_protocol_options: {}
      load_assignment:
        # arbitrary name
        cluster_name: rate_limit_cluster
        endpoints:
        - lb_endpoints:
          - endpoint:
              address:
                socket_address:
                  # kubernetes dns of your ratelimit service
                  address: envoy-limitsvc.istio-system.svc.cluster.local
                  port_value: 42081
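For completeness, that patch lives inside a full EnvoyFilter resource shaped roughly like this (the metadata name, namespace, and workloadSelector labels here are placeholders rather than exactly what I'm running):

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ratelimit-cluster      # placeholder name
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway    # assumption: scoping the patch to the ingress gateways
  configPatches:
  - applyTo: CLUSTER
    # ... the match/patch block shown above ...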
I’m trying to figure out exactly how the match is being used to change the cluster part of Envoy’s configuration. The docs don’t really give much detail beyond essentially saying “match is used to match.”
When I look at the cluster configuration for the ingress gateway pods, which do have the rate_limit_cluster applied in their cluster configs (using istioctl proxy-config cluster $POD.$NAMESPACE -o json), I don’t see anything with a field "service": "envoy-limitsvc.istio-system.svc.cluster.local". I do see that DNS name in fields like "outbound|443||envoy-limitsvc.istio-system.svc.cluster.local". However, I also see those fields in the egress gateway pods, and yet they don’t have the rate_limit_cluster.
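For reference, this is roughly how I’m listing the cluster names on a pod; the jq filter is just my way of flattening the JSON output, and $POD and $NAMESPACE are placeholders:

# list every cluster name Envoy knows about on this pod
istioctl proxy-config cluster $POD.$NAMESPACE -o json | jq -r '.[].name'

# narrow the dump to the rate limit service by its FQDN
istioctl proxy-config cluster $POD.$NAMESPACE \
  --fqdn envoy-limitsvc.istio-system.svc.cluster.local -o json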
This leads me to believe that the match occurs against Istio’s internal state, which I can’t introspect. So I’m wondering how these match rules actually work, so that I can fine-tune them and fix them if they break.
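In particular, the EnvoyFilter API reference also documents a name field on the cluster match, so my working assumption (untested) is that matching the generated cluster name directly would look something like this:

match:
  cluster:
    # match the Istio-generated cluster name exactly
    name: "outbound|443||envoy-limitsvc.istio-system.svc.cluster.local"

Is that the right mental model, or is service matched against something else entirely?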