Rate limiting behavior question

#1

Hey,

I am setting up rate limiting for one of our services. We are using redisquota for this, and we do see that when the quota is reached we get response code 429, as expected.

For testing purposes we use maxAmount: 2 and validDuration: 10s with the rate limit algorithm set to FIXED_WINDOW. Basically, I want to allow 2 requests within a span of 10 seconds. The problem is that once I start getting response code 429, I keep getting it for 10+ seconds, with intermittent 200s in between, which I wouldn’t expect. This behavior is reproducible for both GET and POST requests. I am aware that the FIXED_WINDOW approach can allow up to 2x the specified peak rate, but I wouldn’t expect it to return unexpected 429s.
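For reference, the 2x-peak property of a fixed window can be seen in a small simulation (a generic sketch of the algorithm, not Istio’s actual implementation):

```python
import math

def fixed_window_allow(counters, key, now, max_amount, valid_duration):
    """Generic fixed-window check: requests are counted per window,
    and the counter resets when a new window starts."""
    window = math.floor(now / valid_duration)
    count = counters.get((key, window), 0)
    if count < max_amount:
        counters[(key, window)] = count + 1
        return True
    return False

counters = {}
# maxAmount=2, validDuration=10s: two requests just before t=10s and two
# just after all succeed, because they fall into adjacent windows --
# 4 requests within ~1.5 seconds, i.e. up to 2x the configured peak rate.
results = [fixed_window_allow(counters, "svc", t, 2, 10)
           for t in (9.0, 9.5, 10.0, 10.5)]
print(results)  # [True, True, True, True]
```

Note that in this model a 429 never outlasts the window it was issued in, so 429s persisting well past 10 seconds would point at something outside the algorithm itself.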

Is it possible that one of the components (like the mixer or envoy) is caching these 429 responses?
We are using Istio 1.0.3 on GKE.

If anyone has a better understanding of how the rate limiting works, please shed some light on how it is supposed to work. Thank you!

#2

The proxies do cache responses. They also handle some quota prefetching. I’ll see if I can find some more detailed docs.

Can you expand on what you mean by “intermittent 200s” in this case? It sounds like you are seeing mostly 429s, with some 200s every so often, over a period of more than 10s, which sorta sounds like what one might expect.

@gargnupur can you provide any details on the FIXED_WINDOW behavior with redisquota?

#3

The FIXED_WINDOW algorithm is supposed to rate limit based on the maxAmount and validDuration you set.
What is the value of the bestEffort field in QuotaArgs (https://github.com/istio/istio/blob/405f47f54d19e6d998aa3e863ec9e2bc23c157f1/mixer/pkg/adapter/quotas.go#L40:3)? I think this could lead to “intermittent 200s” when it sees that some quota is available.
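For context, a best-effort quota grant returns whatever amount remains instead of failing outright when the full requested amount isn’t available. A toy allocator (hypothetical, not the redisquota code) showing how that could let occasional requests through:

```python
def alloc(available, requested, best_effort):
    """Grant quota from a pool. With best_effort, grant whatever is
    left (possibly less than requested); otherwise all-or-nothing."""
    if available >= requested:
        return requested
    return available if best_effort else 0

# Pool has 1 unit left; a caller asks for 3.
print(alloc(1, 3, best_effort=False))  # 0 -> nothing granted (429)
print(alloc(1, 3, best_effort=True))   # 1 -> one request can still pass (200)
```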

#4

What I meant was that I get 429 responses for a longer duration than the configured quota window. After the quota window is over (which can easily be after 20 seconds, while I have a 10-second window configured), I sometimes get a 200, but again not consistently. Sometimes I get one 200, then a couple of 429s again, and then a 200. It seems different apps/proxies cache the 429 with different timeouts, and in no way does it match the configured 10-second window with a max of 2 requests. (Note that the application I am making requests towards is running with 3 replicas.)

The inconsistency in the responses makes me suspect a caching issue. Are there any options for disabling or changing this behavior? In its current form it doesn’t seem usable for me.

@gargnupur could you please explain where I can see this configuration in my cluster? I’m using GKE with the default Istio add-on.

#5

@Cristina85 are you able to share your quota config here? That might help the debugging effort. Even knowing the dimensions on which you are applying quota would be a big help.

#6

For now I was trying to stay as close to the documentation as possible; please see below:

    apiVersion: "config.istio.io/v1alpha2"
    kind: redisquota
    metadata:
      name: handler
      namespace: custom-namespace
    spec:
      redisServerUrl: rate-limit-redis:6379
      connectionPoolSize: 10
      quotas:
      - name: requestcount.quota.custom-namespace
        maxAmount: 2
        validDuration: 10s
        rateLimitAlgorithm: FIXED_WINDOW
    ---
    apiVersion: "config.istio.io/v1alpha2"
    kind: quota
    metadata:
      name: requestcount
      namespace: custom-namespace
    spec:
      dimensions:
        source: request.headers["x-forwarded-for"] | "unknown"
        destination: destination.labels["app"] | destination.workload.name | "unknown"
        destinationVersion: destination.labels["version"] | "unknown"
    ---
    apiVersion: config.istio.io/v1alpha2
    kind: rule
    metadata:
      name: quota
      namespace: custom-namespace
    spec:
      actions:
      - handler: handler.redisquota
        instances:
        - requestcount.quota
    ---
    apiVersion: config.istio.io/v1alpha2
    kind: QuotaSpec
    metadata:
      name: request-count
      namespace: custom-namespace
    spec:
      rules:
      - quotas:
        - charge: 1
          quota: requestcount
    ---
    apiVersion: config.istio.io/v1alpha2
    kind: QuotaSpecBinding
    metadata:
      name: request-count
      namespace: custom-namespace
    spec:
      quotaSpecs:
      - name: request-count
        namespace: custom-namespace
      services:
      - name: my-service
        namespace: custom-namespace
    ---
#7

Doesn’t look like a problem with the config… You can disable the quota cache using the client config option disable_quota_cache (https://github.com/istio/api/blob/5a79ba0ecbec2285e8efd97fd4e9fdc7b6141b51/mixer/v1/config/client/client_config.proto).

#8

@gargnupur I looked in the documentation and it is not clear to me how to set this option. Could you point me to an example or a description of how to use the client config? It is a bit unclear how to set different settings in the mixer.

Thanks.

#9

Not entirely sure; I see it getting used in a test by setting the HTTP client’s transport config.

@douglas-reid, @kuat: Can you please help on how we can set client configs?

#10

@Cristina85 I did some more digging, but did not find any detailed docs. However in the process, I was reminded of an important item:

In the 1.0.X releases, each worker thread in a proxy prefetches quota. The number of worker threads can be quite large, depending on your deployment. This could ultimately result in the appearance of a much more restrictive quota than the one specified. This might be affecting you, especially given the relatively low limit you are testing with.
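The effect can be sketched with a toy model (hypothetical numbers; not the actual proxy code): each worker thread grabs its own prefetch allocation from the shared pool, so a small maxAmount is exhausted by prefetches alone.

```python
def prefetch_round(pool, workers, prefetch_amount):
    """Each worker thread independently grabs a prefetch allocation
    from the shared quota pool (all-or-nothing per worker)."""
    grants = []
    for _ in range(workers):
        if pool >= prefetch_amount:
            pool -= prefetch_amount
            grants.append(prefetch_amount)
        else:
            grants.append(0)
    return pool, grants

# maxAmount=2 shared across, say, 8 worker threads each prefetching 1:
# only 2 workers hold quota, so requests landing on the other 6 get a
# 429 even though actual traffic never reached the configured limit.
pool, grants = prefetch_round(2, 8, 1)
print(grants)  # [1, 1, 0, 0, 0, 0, 0, 0]
```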

This is discussed in: https://github.com/istio/istio/issues/3028#issuecomment-442550779

The other thing that may be impacting you is if, for some reason, clients aren’t setting the “x-forwarded-for” header and are then all resolving to the same bucket. I don’t think that is what is happening, but it is possible.
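For what it’s worth, the fallback in the dimension expression request.headers["x-forwarded-for"] | "unknown" means requests without the header all collapse into one shared bucket, as this small sketch shows:

```python
def quota_key(headers):
    """Mimics the dimension expression
    request.headers["x-forwarded-for"] | "unknown":
    requests missing the header share a single bucket."""
    return headers.get("x-forwarded-for", "unknown")

reqs = [{"x-forwarded-for": "10.0.0.1"}, {}, {}, {}]
print([quota_key(h) for h in reqs])
# ['10.0.0.1', 'unknown', 'unknown', 'unknown'] -- three clients
# without the header all charge the same "unknown" bucket.
```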

/cc @mandarjog for more information here.

#11

Hi,
I started using rate limiting too and I am having some trouble with it (it doesn’t work). Could you please indicate if I am missing anything:
    apiVersion: "config.istio.io/v1alpha2"
    kind: memquota
    metadata:
      name: handler
      namespace: istio-system
    spec:
      quotas:
      - name: requestcount.quota.istio-system
        maxAmount: 1
        validDuration: 60s
        overrides:
        - dimensions:
            destination: <service_name>
          maxAmount: 1
          validDuration: 60s
    ---
    apiVersion: "config.istio.io/v1alpha2"
    kind: quota
    metadata:
      name: requestcount
      namespace: istio-system
    ---
    apiVersion: config.istio.io/v1alpha2
    kind: QuotaSpec
    metadata:
      name: request-count
      namespace: istio-system
    spec:
      rules:
      - quotas:
      - charge: 1
        quota: request-count
    ---
    apiVersion: config.istio.io/v1alpha2
    kind: QuotaSpecBinding
    metadata:
      name: request-count
      namespace: istio-system
    spec:
      quotaSpecs:
      - name: request-count
        namespace: istio-system
      services:
      - name: <service_name>
        namespace: <namespace_name>
    ---
    apiVersion: config.istio.io/v1alpha2
    kind: rule
    metadata:
      name: quota
      namespace: istio-system
    spec:
      actions:
      - handler: handler.memquota
        instances:
        - requestcount.quota

Thank you.

#12

@juliabenatti: in your QuotaSpec config,

    rules:
    - quotas:
    - charge: 1

doesn’t seem to be aligned well… could that be the problem?
