Multiple filter chains with the same matching rules are defined - is a sanity check possible?

Recently I’ve been fighting with the following error:
Internal:Error adding/updating listener(s) error adding listener '': multiple filter chains with the same matching rules are defined

After digging into it I realized that a virtualservice was misconfigured - there was actually a conflict: 2 different virtualservices referencing the same host and rules, but pointing to 2 different k8s services. That makes perfect sense - Envoy doesn’t know how to route the traffic and complains about it.
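For illustration, a minimal sketch of the kind of conflict I mean (all names here are made up, and this is a reconstruction of my situation, not my actual manifests):

```yaml
# Two VirtualServices claiming the same host on the same gateway,
# but routing to different backend services.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: shop-a
spec:
  hosts:
    - shop.example.com
  gateways:
    - public-gateway
  http:
    - route:
        - destination:
            host: shop-a.default.svc.cluster.local
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: shop-b
spec:
  hosts:
    - shop.example.com       # same host, same gateway as shop-a
  gateways:
    - public-gateway
  http:
    - route:
        - destination:
            host: shop-b.default.svc.cluster.local
```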
What I don’t understand is why Istio LDS goes stale. This is a big issue, because basically all the gateways and virtualservices created AFTER the problematic one won’t take effect - the listener table simply doesn’t get updated.
Given that we are managing Kubernetes for the entire company, we have many gateways and virtualservices, and that issue leads to:

  • Very difficult troubleshooting: finding the wrong virtualservice definition is very complicated (unless I am looking in the wrong place, the pilot logs don’t tell you which virtualservice is causing the problem)
  • If someone creates a gw/vs to expose their service after the LDS goes stale, it won’t work (leading to people asking: why isn’t it working? I copied that definition from the docs, etc…)

I think the way out is a sanity check (or preflight) before deploying - something that checks for conflicts in gw/vs definitions before applying them to K8s. I am wondering if Istio has a mechanism to do that or if I need to find an alternative way.
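As far as I can tell, newer Istio releases ship an `istioctl analyze` command (experimental in the 1.4 line) that catches some of these misconfigurations, so that’s worth trying. As a stopgap, a preflight check can be done by hand: collect every (gateway, host) pair across the VirtualServices and flag pairs claimed more than once. A minimal sketch in Python - the dict layout follows the VirtualService spec (`spec.gateways`, `spec.hosts`), everything else is an assumption:

```python
from collections import defaultdict

def find_host_conflicts(virtualservices):
    """Return {(gateway, host): [vs names]} for pairs claimed by more than one VS."""
    claims = defaultdict(list)
    for vs in virtualservices:
        name = vs["metadata"]["name"]
        spec = vs.get("spec", {})
        # A VirtualService with no explicit gateways applies to "mesh".
        for gw in spec.get("gateways", ["mesh"]):
            for host in spec.get("hosts", []):
                claims[(gw, host)].append(name)
    return {key: names for key, names in claims.items() if len(names) > 1}

if __name__ == "__main__":
    # Feed this with the parsed output of
    # `kubectl get virtualservices --all-namespaces -o json` (items list).
    vs_a = {"metadata": {"name": "shop-a"},
            "spec": {"gateways": ["public-gateway"], "hosts": ["shop.example.com"]}}
    vs_b = {"metadata": {"name": "shop-b"},
            "spec": {"gateways": ["public-gateway"], "hosts": ["shop.example.com"]}}
    print(find_host_conflicts([vs_a, vs_b]))
    # {('public-gateway', 'shop.example.com'): ['shop-a', 'shop-b']}
```

Run against the manifests in CI (before `kubectl apply`) this at least tells you *which* virtualservices collide, which is exactly what the pilot logs don’t.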

Ah, this has been tested on Istio 1.2.7 and 1.4.6 with the same result.

Any help here please?


FYI - I am using an OPA (Open Policy Agent) webhook to catch those errors before a wrong manifest gets applied to the cluster.
Seems to work!
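In case it helps others, a very rough sketch of the kind of Rego rule I mean. It assumes existing VirtualServices are synced into OPA under `data.kubernetes.virtualservices` (e.g. via kube-mgmt) - that path, the package name, and the message format are all assumptions, not a drop-in policy:

```rego
package kubernetes.admission

# Deny a VirtualService whose (gateway, host) pair is already claimed
# by a different VirtualService in the synced cluster inventory.
deny[msg] {
    input.request.kind.kind == "VirtualService"
    new := input.request.object
    gw := new.spec.gateways[_]
    host := new.spec.hosts[_]

    other := data.kubernetes.virtualservices[ns][name]
    name != new.metadata.name
    other.spec.gateways[_] == gw
    other.spec.hosts[_] == host

    msg := sprintf("host %q on gateway %q is already claimed by %v/%v", [host, gw, ns, name])
}
```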