Traffic is not being routed only to the v1 version of the service; it is being split across both v1 and v2.
If I run istioctl proxy-config routes $(k get pods -l app=frontend -o=jsonpath='{.items[*].metadata.name}') -o json, I see no routes related to echo-service, and istioctl proxy-status says everything is in sync.
Deployed pod status
$ k get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
echo-deploy-v1-7468d898b8-64zcp 2/2 Running 0 5m35s app=echo-app,pod-template-hash=7468d898b8,version=1
echo-deploy-v2-b65565566-5xsxv 2/2 Running 0 5m3s app=echo-app,pod-template-hash=b65565566,version=2
frontend-v2-7b9bd94b49-xhdgh 2/2 Running 0 19m app=frontend,pod-template-hash=7b9bd94b49,vm=v2
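Given the pod labels above (version=1 and version=2), the DestinationRule subsets have to match those labels exactly. A minimal sketch of what that would look like, assuming the Service is named echo-service and the subsets are named v1/v2 (both names are assumptions, not taken from the thread):

```yaml
# Hypothetical DestinationRule; host and subset names are assumptions.
# The subset labels must match the pod labels shown above exactly.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: echo-destination
spec:
  host: echo-service   # must match the Kubernetes Service name
  subsets:
  - name: v1
    labels:
      version: "1"     # matches version=1 on the echo-deploy-v1 pod
  - name: v2
    labels:
      version: "2"     # matches version=2 on the echo-deploy-v2 pod
```

If a subset's labels don't select any pods (for example `version: v1` vs. the actual `version=1`), Envoy falls back to load-balancing across all endpoints, which looks exactly like a 50-50 split.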
Not sure what I’m missing here; any pointers would be helpful.
I have the correct port name in the service, but I am still experiencing the same issue.
Here is the complete yaml hello-both-http-port.yaml
It is a simple Python Flask app that returns the version, protocol, and port.
Calling the endpoint from any other pod returns both versions equally. It should return version 1 99% of the time.
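For a 99/1 split, the VirtualService needs weighted route destinations along these lines (the host and subset names are assumptions based on the thread; the weights must sum to 100):

```yaml
# Hypothetical VirtualService sketching a 99/1 weighted split.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo-route
spec:
  hosts:
  - echo-service        # the Kubernetes Service name (assumed)
  http:
  - route:
    - destination:
        host: echo-service
        subset: v1      # must exist in the DestinationRule
      weight: 99
    - destination:
        host: echo-service
        subset: v2
      weight: 1
```

Note that these weights are only applied when the calling pod has an Istio sidecar; traffic from a pod without the sidecar bypasses the VirtualService entirely and is balanced evenly by kube-proxy.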
Hello, I’m facing a similar issue: requests within the cluster don’t seem to respect the destination rules defined. I’ve followed the rule for service port naming, but without success. I expect requests to my service that don’t have the specific header set and don’t have the specified prefix to fail. Instead, all requests to the hello service return 200 OK, no matter what header or prefix is specified. Any idea what could be wrong? The yaml definitions for the deployment, service, virtual service, and destination rule are here: https://github.com/arturkociuba/service-mesh/blob/827d58829d695328b8ef19a658b0568857955815/hello.yaml
Thanks for the reply, but this is not the issue. I’ve changed one of the deployments to have the label v1.
The issue I’m facing is that requests to hello always return 200, but the goal is to route them to a specific pod (by version) based on the prefix and header value.
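A VirtualService that routes only when both a header and a URI prefix match, and lets everything else fail, would simply omit the catch-all route. A rough sketch (the header name, prefix, and host are placeholders, not taken from the linked yaml):

```yaml
# Hypothetical VirtualService: route only on header + prefix match.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello-route
spec:
  hosts:
  - hello
  http:
  - match:
    - headers:
        x-version:          # assumed header name
          exact: v1
      uri:
        prefix: /hello      # assumed prefix
    route:
    - destination:
        host: hello
        subset: v1
  # No final route without a match clause: requests that match nothing
  # should be rejected by Envoy instead of returning 200 OK.
```

If a trailing route with no match clause is present, it acts as a default and will happily serve every request with 200, which matches the behavior described above.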
I’ve added a testing curl command to the code.
UPDATE:
not an issue any more; there was in fact no bug. The behavior I observed (rules being ignored) occurred because I was testing with curl requests from a pod that did not have the Istio sidecar injected (lesson learned)
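For anyone hitting the same thing: routing rules are enforced by the client-side sidecar, so the calling pod must be in the mesh. One way to get the sidecar injected automatically is labeling the namespace (a standard Istio config fragment, shown here for a hypothetical namespace name):

```yaml
# Enable automatic sidecar injection for pods created in this namespace.
# The namespace name "test" is a placeholder.
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
```

Pods created before the label is applied keep running without the sidecar until they are recreated, so a quick check is whether the test pod shows 2/2 in READY like the pods listed earlier in this thread.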
Thanks
$ istioctl version
client version: 1.4.2
control plane version: 65b6870f1e3bf73fc688a02d10963ea0158e96c6-dirty
data plane version: 1.2.10-gke.0 (22 proxies)
Confirmed with https: the traffic split is not working for me. I bounced Pilot, and http at least started working.
Here is the yaml for https with port 443 for k8s service
Running curl https://hello -k from the sleep pod results in a 50-50 traffic split, which is not expected.
Changing the service port to any other port (say 9443) while keeping https does not work either.
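For protocol selection on Istio 1.4, the Service port name must carry the protocol prefix. A Service declaring an https port on 9443 would look roughly like this (the service name, selector, and port number are assumptions for illustration):

```yaml
# Hypothetical Service; the "https" port name tells Istio to treat
# this port as TLS rather than plain HTTP.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: echo-app
  ports:
  - name: https      # protocol prefix is what Istio inspects, not the number
    port: 9443
    targetPort: 9443
```

One caveat worth checking: a port named https is treated as opaque TLS passthrough, so the sidecar cannot see HTTP headers or paths and the http rules of a VirtualService do not apply; that alone can produce an even split across subsets.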
Here is the yaml for http with port 80 for the k8s service.