Can we rate limit external services included in the mesh via ServiceEntry?

#1

Hi, Guys,

I have an external service that I have included in the Istio mesh via a ServiceEntry. Does Istio support defining a rate limit policy against it using the related Mixer rules? If so, is there anything special about the configuration?

Thanks a lot.

Iris Ding

#2

Played with this a little bit and confirmed it works!

#3

Hey Iris,

Can you please share your config using which you got it working?

Thanks,
Nupur

#4

Hi Nupur,

I am using memory handler and below is a snippet for the configuration:

apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: quotahandler
  namespace: default
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.default
      maxAmount: 1000
      validDuration: 2s
      overrides:
      - dimensions:
          destination: external.example.common.HelloService
        maxAmount: 1
        validDuration: 10s

---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: requestcountquota
  namespace: default
spec:
  compiledTemplate: quota
  params:
    dimensions:
      destination: destination.labels["app"] | destination.service.host | "unknown"
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: quota
  namespace: default
spec:
  actions:
  - handler: quotahandler
    instances:
    - requestcountquota
#5

I am able to successfully rate limit internal calls (for example, ingress requests into my web-frontend service), but after several attempts I am unable to configure rate limiting for external services. I have deployed a pod with a Go script that makes many (100+) concurrent requests to the postman-echo.com test API, including random strings as parameters. I have found no way to apply rate limiting to these outgoing requests.

I have tried deploying memquota resources in both istio-system and default namespaces.

I have a ServiceEntry for postman-echo.com, and also a Gateway resource with a DestinationRule and VirtualService routing traffic, round-robin, to 3 replica istio-egressgateway pods deployed on separate nodes. (The routing is working fine: it switches nodes every ~100 requests.)
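For reference, a ServiceEntry for postman-echo.com along these lines would look roughly like the sketch below. The resource name, port, and resolution mode are my assumptions, since the actual resource is not shown in this thread:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: postman-echo        # placeholder name
spec:
  hosts:
  - postman-echo.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
```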

Here are my memquota configs:

apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: quotahandler
  namespace: istio-system
spec:
  compiledAdapter: memquota
  params:
    quotas:
    - name: requestcountquota.instance.istio-system
      maxAmount: 500
      validDuration: 1s
      # The first matching override is applied.
      # A requestcount instance is checked against override dimensions.
      overrides:
# Rate Limiting the web-frontend service is working:
#      - dimensions:
#          destination: web-frontend
#        maxAmount: 2
#        validDuration: 60s
# Rate Limiting external or egress services do not work:
      - dimensions:
          destination: postman-echo.com
        maxAmount: 2
      - dimensions:
          source: postman-echo.com
        maxAmount: 2
        validDuration: 120s
      - dimensions:
          destination: external.postman-echo.com
        maxAmount: 2
      - dimensions:
          source: external.postman-echo.com
        maxAmount: 2
        validDuration: 120s
      - dimensions:
          destination: istio-egressgateway
        maxAmount: 2
        validDuration: 100s
      - dimensions:
          source: istio-egressgateway
        maxAmount: 2
        validDuration: 100s
      - dimensions:
          destination: istio-egressgateway.istio-system.svc.cluster.local
        maxAmount: 2
        validDuration: 100s
      - dimensions:
          source: istio-egressgateway.istio-system.svc.cluster.local
        maxAmount: 2
        validDuration: 100s
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: requestcountquota
  namespace: istio-system
spec:
  compiledTemplate: quota
  params:
    dimensions:
      source: request.headers["host"] | source.workload.name | source.labels["app"] | "unknown"
      destination: destination.labels["app"] | destination.service.name | destination.service.host | destination.workload.name | request.headers["host"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpec
metadata:
  name: request-count
  namespace: istio-system
spec:
  rules:
  - quotas:
    - charge: 1
      quota: requestcountquota
---
apiVersion: config.istio.io/v1alpha2
kind: QuotaSpecBinding
metadata:
  name: request-count
  namespace: istio-system
spec:
  quotaSpecs:
  - name: request-count
    namespace: istio-system
  services:
  - service: '*'  # bind *all* services to request-count
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: quota
  namespace: istio-system
spec:
  actions:
  - handler: quotahandler
    instances:
    - requestcountquota

Does anyone have a recommended logging procedure I could use to see the actual attribute values for the external request? e.g.:

source.workload.name | source.labels["app"]
destination.labels["app"] | destination.service.name | destination.service.host | destination.workload.name | request.headers["host"]

In an older version (I think) I used to be able to print logs from the Mixer container in the telemetry pod and grep for this kind of info.

#6
I have not been able to rate limit external traffic either…

You can use the https://istio.io/docs/tasks/telemetry/logs/collecting-logs/ task to configure access logs that print the values of the external request’s attributes…
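Following that task, a sketch of a logentry instance plus stdio handler that would dump these attribute values into the Mixer (telemetry) logs. The resource names here are placeholders, and the variable list just mirrors the dimensions discussed above:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: egressdebug          # placeholder name
  namespace: istio-system
spec:
  compiledTemplate: logentry
  params:
    severity: '"info"'
    variables:
      source: source.workload.name | "unknown"
      sourceApp: source.labels["app"] | "unknown"
      destinationService: destination.service.host | "unknown"
      destinationName: destination.service.name | "unknown"
      destinationWorkload: destination.workload.name | "unknown"
      requestHost: request.headers["host"] | "unknown"
---
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: stdiodebug           # placeholder name
  namespace: istio-system
spec:
  compiledAdapter: stdio
  params:
    severity_levels:
      info: 1  # map "info" to the INFO level
    outputAsJson: true
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: egressdebuglog       # placeholder name
  namespace: istio-system
spec:
  match: "true"  # log every request
  actions:
  - handler: stdiodebug
    instances:
    - egressdebug
```

The entries should then show up in the Mixer container of the telemetry pod, e.g. `kubectl -n istio-system logs -l istio-mixer-type=telemetry -c mixer | grep egressdebug`.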

#7

A couple of random ideas (hacks) I have been thinking of:

Create an additional proxy service as a sidecar container inside the istio-egressgateway spec that intercepts traffic to a specific host; rate limiting is then applied between that proxy service and the normal istio-egressgateway proxy.

or

Somehow route traffic from the istio-egressgateway proxy back to itself on a different port, and rate limit that?

Ideally, rate limiting could be applied per node / originating-node IP. I have applied podAntiAffinity to self on the istio-egressgateway deployment, so it scales horizontally onto unique nodes.

#8

It occurs to me that my istio-egressgateway pods are not Istio sidecar-injected, which may explain why I can't apply rate limiting using the handler…?

#9

Agreed… I see the same thing for istio-egressgateway.
@mandarjog , @douglas-reid any ideas here?

#10

In my case, I have not used an egress gateway, so ServiceEntry + policy rules alone works. The tricky part is that I have only one dimension defined in the instance:

destination: destination.labels["app"] | destination.service.host | "unknown"

#11

@irisdingbj, did you use the QuotaSpec and QuotaSpecBinding resources as in the documentation? What service did you list in the QuotaSpecBinding? It will not let me list a host address there. Is the "external." prefix required?

I am also unable to rate limit external calls to postman-echo.com using your configuration, with just the ServiceEntry and no gateway. I tested from both a deployed Go script and curl from the ‘sleep’ pod (as in the docs).

#12

I deployed the istio-egressgateway pod resources to the default namespace. I am still not able to rate limit the outbound requests or the egressgateway service/pods in any way. I even tried keying the dimensions on the specific egressgateway pod names and source.name. No result; traffic flows through unlimited.

#13

The best way I can think of to implement this is to code a separate service that only makes the REST calls (as a sort of outbound-edge service abstraction), deploy it in an Istio-injected namespace (or default), and apply rate limiting to that internal service. When other pods need to consume the external API, they make calls to that “edge-REST” service instead. And don't forget to set podAntiAffinity to itself in the deployment so that it only scales to unique nodes.
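The podAntiAffinity-to-self part of that deployment could be sketched like this (the `app: edge-rest` label is illustrative; any label unique to the deployment works):

```yaml
# In the Deployment's pod template spec: never co-schedule two
# edge-rest pods on the same node, so each node gets at most one.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: edge-rest      # placeholder label
      topologyKey: kubernetes.io/hostname
```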

Similar to the productpage service example in the docs, except in a way that is agnostic of client IP and instead rate limited per destination pod:
[screenshot: rate-limiting diagram from the docs]

But I'm not sure whether this can be done without specifically using
dimensions: destination: destination.name | destination.ip
which are ephemeral and won't auto-scale.

This is how I wish the istio-egressgateway proxy resource worked, but it apparently does not.

#14

@irisdingbj: I have debug logs enabled, and I don't even see a Check call happening if I just use a ServiceEntry. How are you making the calls to the external service? And, as @mike-holberger asked, did you use the QuotaSpec and QuotaSpecBinding resources?

#15

In my case, I have deployed a consumer service into the mesh and use that consumer to access the external services included via the ServiceEntry. So this case works with only ServiceEntry + policy.

#16

@mike-holberger: I have a PR out to fix rate limiting using the egress gateway: https://github.com/istio/istio/pull/13976

#17

@gargnupur, I ended up coding a Golang edge service that makes rate-limited requests to the mesh-external API service.

I believe this solution makes more sense (for mesh-external traffic) than the Istio redisquota/memquota-based solution, because each pod can easily keep track of the required request limit in memory using a native Golang channel, throttled to my desired rate. Incoming requests to this service are denied with 429 Too Many Requests after only a few dozen are queued, so that the user is not left waiting in line for the call to complete. This outbound edge service encapsulates access to the external service and provides a common gRPC API to my mesh-internal services.

I can see how keeping track of incoming requests on a per-IP basis might require a more substantial solution, like the memquota/redisquota implementations. However, keeping track of the rate of outgoing requests from each pod instance can easily be implemented in the service's code.

*This edge-service implementation is deployed with podAntiAffinity to itself so that each pod instance is scheduled to a unique node, in order to effectively enact the outgoing rate limit on a per-node basis.