We have run some benchmarks against an application stack (Gatling tests), using the following setup:
```
                                                 / --> wiremock 1
Requests -> nginx-ingress-controller -> Service |
                                                 \ --> wiremock 2
```
We run two scenarios, one with Istio and one without.
With Istio, traffic to the wiremock instances is routed via a VirtualService/DestinationRule:
```
Spec:
  Hosts:
    wiremock
  Http:
    Match:
      Headers:
        Test:
          Exact: A
    Route:
      Destination:
        Host:    wiremock
        Subset:  A
    Match:
      Headers:
        Test:
          Exact: B
    Route:
      Destination:
        Host:    wiremock
        Subset:  B
```
Across all requests, the headers `Test: A` and `Test: B` are evenly distributed.
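To illustrate the traffic pattern (not our actual Gatling code, just a minimal Python sketch of the header distribution we generate):

```python
import itertools

# Alternate the Test header evenly between the values A and B,
# mimicking how our load test spreads requests across the two subsets.
header_values = itertools.cycle(["A", "B"])
requests = [{"Test": next(header_values)} for _ in range(10)]

count_a = sum(1 for r in requests if r["Test"] == "A")
count_b = sum(1 for r in requests if r["Test"] == "B")
print(count_a, count_b)  # 5 5
```

With Istio, each request is then matched by exactly one of the two `Match` blocks above and routed to the corresponding subset.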
Without Istio, traffic goes via a regular Kubernetes Service, and requests are distributed independently of the header.
So far, so good. Our analysis now shows that request times are significantly lower in the scenario with Istio than in the scenario without. This seems counterintuitive to us, given the extra latency we expected envoy-proxy to add. So we assume some request optimisation is happening behind the scenes.
Do you have an idea/explanation for why this could be the case?