We have an EnvoyFilter that routes HTTP requests to an upstream application port, as shown below:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: my-envoy-filter
spec:
  workloadSelector:
    labels:
      app: my-app
  configPatches:
    - applyTo: ROUTE_CONFIGURATION
      match:
        context: SIDECAR_INBOUND
        routeConfiguration:
          portNumber: 80
          vhost:
            name: "inbound|http|80"
      patch:
        operation: MERGE
        value:
          name: local_route
          virtual_hosts:
            - name: local_service
              domains:
                - "*.myservice.com"
              routes:
                - match: { prefix: "/" }
                  route:
                    cluster: mycluster
                    priority: HIGH
    - applyTo: CLUSTER
      match:
        context: SIDECAR_INBOUND
      patch:
        operation: ADD
        value:
          name: mycluster
          type: LOGICAL_DNS
          connect_timeout: 0.25s
          lb_policy: ROUND_ROBIN
          load_assignment:
            cluster_name: mycluster
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          address: 127.0.0.1
                          port_value: 9999
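One way to check whether these patches actually reached the sidecar (independently of the readiness gate) is to dump the proxy's effective configuration with istioctl. The pod name and namespace below are placeholders for the affected workload:

```shell
# Inspect the inbound route configuration the sidecar is actually serving
istioctl proxy-config routes my-app-pod -n my-namespace --name "inbound|http|80" -o json

# Look for the manually added cluster in the sidecar's cluster list
istioctl proxy-config clusters my-app-pod -n my-namespace -o json | grep -A2 '"name": "mycluster"'
```

If the route and cluster are missing from this output while the pod is NotReady, that would confirm the filter is not being pushed before the readiness gate passes.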
It works properly when applied to workloads that don't have a readiness gate (for the associated AWS target group).
However, if a workload has its own readiness gate and the readiness check fails, the EnvoyFilter doesn't seem to be applied properly.
Is this the intended behavior?
Are the proxy configurations applied only after the readiness gate confirms the health of the proxy?
Is there any way to apply proxy configurations such as EnvoyFilters before the readiness gate confirmation?
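For context, the readiness gates in question are the ones the AWS Load Balancer Controller injects into pod specs, so the affected workloads carry a condition along these lines (the TargetGroupBinding name suffix below is a placeholder):

```yaml
# Excerpt from an affected pod spec; "my-target-group-binding" is a
# placeholder for the actual TargetGroupBinding name.
spec:
  readinessGates:
    - conditionType: target-health.elbv2.k8s.aws/my-target-group-binding
```

Until the target group reports the target healthy, the pod stays NotReady, and that is the window in which the EnvoyFilter does not appear to take effect.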