HTTP/2 cors_response seems to block/delay the following request to the pod when done on the same connection

TL;DR: HTTP/2 requests are not seen/answered in the pod logs after the OPTIONS preflight is answered by istio-proxy.

After a lot of troubleshooting following user complaints, I have found an issue that is intermittent and difficult to diagnose. Maybe I am doing something wrong in the setup, but I need a pointer on where to start looking for answers. I have searched the discussions here and found no similar issues.

The setup is the following:
Cloudflare → istio(ingress-gateway) → istio-proxy(sidecar) → api
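
For reference, CORS is handled by the mesh itself: the cors_response in the gateway log below comes from Envoy's CORS filter, which Istio enables through a corsPolicy on the VirtualService. A trimmed-down sketch of that kind of config (gateway name, allowed origins and headers are placeholders here, not my exact values):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api
  namespace: hh
spec:
  hosts:
  - apphost.censored.com
  gateways:
  - istio-system/ingressgateway          # placeholder gateway reference
  http:
  - corsPolicy:
      allowOrigins:
      - exact: https://app.censored.com  # placeholder SPA origin
      allowMethods: ["GET", "POST", "PUT", "OPTIONS"]
      allowHeaders: ["authorization", "content-type"]
      maxAge: 24h
    route:
    - destination:
        host: api.hh.svc.cluster.local
        port:
          number: 80

With a corsPolicy in place the preflight is answered by the proxy itself and never reaches the api container, which matches the gateway log further down.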

When hitting the API, the browser sends an OPTIONS preflight and then immediately the PUT/POST/GET, as long as both requests are made over HTTP/1.1.

If the browser decides to do the OPTIONS over HTTP/2, there is no following PUT/POST/GET; after one minute, give or take a second, an HTTP/1.1 entry appears with the intended PUT/POST/GET operation.
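
To take the browser out of the equation, something like the following should mimic the same pattern with curl: the OPTIONS preflight followed by the actual request, both forced over HTTP/2 in one invocation (the path is from the logs, the origin and body are just placeholders):

# Preflight followed by the real request; within one invocation curl
# normally keeps the connection alive and reuses it for the request
# that comes after --next.
curl -v --http2 -X OPTIONS "https://apphost.censored.com/v1/b/e47" \
  -H "Origin: https://app.censored.com" \
  -H "Access-Control-Request-Method: POST" \
  --next --http2 -X POST "https://apphost.censored.com/v1/b/e47" \
  -H "Origin: https://app.censored.com" \
  -H "Content-Type: application/json" \
  --data '{"ping":"pong"}'

If the POST stalls here as well, that would at least rule out the browser.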

During this time the browser hangs (Vue SPA) and the end user has to wait for the operation to complete.

When analyzing the logs from one of these incidents, I cannot see the HTTP/2 POST hit /v1/b/e47 at all. The browser seems to time out and retry using HTTP/1.1, but from what I get from the end user there is nothing in the browser's logs either; the browser just hangs and the call takes a long time.

It seems like the browser is hitting the istio-ingressgateway with the OPTIONS preflight for CORS, but the following request times out. Of course, the problem could also be in Cloudflare, but I don't really know where to start with all of this.

The browser in question is Chrome v89.0.4389.90 on Windows.

istio-ingressgateway:
[2021-03-24T21:31:51.813Z] "OPTIONS /v1/b/e47 HTTP/2" 200 - cors_response - "-" 0 0 0 - "CENSORED(clientip),10.244.0.12" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "397c1b21-6035-4b06-9b31-34ba8545676c" "apphost.censored.com" "-" - - 10.244.0.160:8443 clientip:0 apphost.censored.com -
[2021-03-24T21:32:52.081Z] "POST /v1/b/e47 HTTP/1.1" 200 - via_upstream - "-" 84 4 276 276 "CENSORED(clientip),CENSORED(nodeip)" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "ef41c9dd-9575-47ac-ac75-118055149270" "apphost.censored.com" "10.244.0.105:80" outbound|80||api.hh.svc.cluster.local 10.244.0.160:37374 10.244.0.160:8443 clientip:0 apphost.censored.com -

istio-proxy: (on api)
[2021-03-24T21:32:52.081Z] "POST /v1/b/e47 HTTP/1.1" 200 - via_upstream - "-" 84 4 274 272 "CENSORED(clientip),CENSORED(nodeip)" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36" "ef41c9dd-9575-47ac-ac75-118055149270" "apphost.censored.com" "127.0.0.1:80" inbound|80|| 127.0.0.1:33446 10.244.0.105:80 nodeip:0 outbound_.80_._.api.hh.svc.cluster.local default

Any ideas on where to start looking for the issue? If anyone has seen this before, or can point me to other posts with similar issues, that would be welcome. Are there any additional loggers that can be turned on that would be valuable for tracking down the issue?
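
The only extra logging I have found myself is raising the Envoy component log levels with istioctl, something like the below (pod names are just examples), but I am not sure which components are the relevant ones for this:

# Bump the HTTP/2, HTTP and routing loggers on the ingress gateway
istioctl proxy-config log <ingressgateway-pod>.istio-system \
  --level http:debug,http2:debug,router:debug

# Same on the sidecar in front of the api pod
istioctl proxy-config log <api-pod>.hh \
  --level http:debug,http2:debug,connection:debug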

Sincerely,
Erik

An update to the case: after disabling Cloudflare proxying (going from orange to grey cloud, i.e. a direct IP connection) and having the client connect directly to the ingressgateway, the issue has yet to resurface.

It seems like the issue occurs when Cloudflare speaks HTTP/2 to the origin server, something that, according to their own documentation, should never happen.
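
To at least confirm what the origin side will negotiate, the ALPN handshake can be checked directly against the gateway, something like this (the IP is the direct, grey-cloud address):

# See which protocol the ingress gateway selects when a client offers h2
openssl s_client -connect <origin-ip>:443 -servername apphost.censored.com \
  -alpn h2,http/1.1 < /dev/null 2>/dev/null | grep -i "ALPN"

If that prints "ALPN protocol: h2", the gateway will pick HTTP/2 whenever the client offers it.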

I also posted a question on the Cloudflare community page about this, but there have been no answers so far.

Now all traffic between the browser and the istio-ingressgateway is HTTP/2, and the only delays I have seen so far are when the backend services take their time to reply.