Istio destination rule not working

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echo-vsvc
spec:
  hosts:
  - echo-svc.default.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: "/v1"
    route:
    - destination:
        host: echo-svc.default.svc.cluster.local
        subset: v1
  - route:
    - destination:
        host: echo-svc.default.svc.cluster.local
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: echo-destination
spec:
  host: echo-svc.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: "1"
  - name: v2
    labels:
      version: "2"

I'm trying to call this service (through the VirtualService) from a Flask app:

from flask import Flask, request
import requests

app = Flask(__name__)


@app.route("/e1")
def f1():
    # getForwardHeaders copies the incoming tracing headers so the mesh can correlate the outbound call
    tracking_headers = getForwardHeaders(request)
    return requests.get('http://echo-svc.default.svc.cluster.local/v1', headers=tracking_headers).content


@app.route("/e2")
def f2():
    tracking_headers = getForwardHeaders(request)
    return requests.get('http://echo-svc.default.svc.cluster.local', headers=tracking_headers).content

Traffic is not being routed only to the v1 subset; requests land on both the v1 and v2 versions.

If I run istioctl proxy-config routes $(k get pods -l app=frontend -o=jsonpath='{.items[*].metadata.name}') -o json, I see no routes related to echo-svc, and istioctl proxy-status says everything is in sync.
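For completeness, the listener and cluster views from the same sidecar can be dumped the same way (same pod lookup; the v1/v2 subset clusters defined by the DestinationRule should be visible in the clusters output):

istioctl proxy-config listeners $(k get pods -l app=frontend -o=jsonpath='{.items[*].metadata.name}')
istioctl proxy-config clusters $(k get pods -l app=frontend -o=jsonpath='{.items[*].metadata.name}')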

Deployed pod status

$ > k get pods --show-labels
NAME                              READY   STATUS    RESTARTS   AGE     LABELS
echo-deploy-v1-7468d898b8-64zcp   2/2     Running   0          5m35s   app=echo-app,pod-template-hash=7468d898b8,version=1
echo-deploy-v2-b65565566-5xsxv    2/2     Running   0          5m3s    app=echo-app,pod-template-hash=b65565566,version=2
frontend-v2-7b9bd94b49-xhdgh      2/2     Running   0          19m     app=frontend,pod-template-hash=7b9bd94b49,vm=v2

Not sure what I'm missing here; any pointers would be helpful.

After pulling my hair out for days and trying every permutation and combination, I found my mistake via https://github.com/istio/istio/issues/9696.

It was related to the Service's named port. I had named it just "web"; renaming it to "http-web" fixed the routing, and everything else worked fine.
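For anyone landing here with the same symptom, the fix is only in the Service's port name; a minimal sketch (the target port here is illustrative, the rest matches my setup):

apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo-app
  ports:
  - name: http-web    # was just "web"; Istio needs the http-/http prefix to treat the port as HTTP
    port: 80
    targetPort: 8080  # illustrative; use whatever port the container actually listens on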

Stack Overflow answer link: https://stackoverflow.com/questions/54197734/istio-destination-rule-subsets-not-working/54209833#54209833.

I have the correct port name in the Service, but I am still experiencing the same issue.
Here is the complete yaml:
hello-both-http-port.yaml

It is a simple Python Flask app that returns its version, protocol, and port.
Calling the endpoint from any other pod returns both versions equally, but it should return version 1 99% of the time.
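For reference, the routing part of that yaml is essentially a weighted VirtualService plus a DestinationRule along these lines (the subset names and labels are just how I've labelled the two Deployments):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello
spec:
  hosts:
  - hello
  http:
  - route:
    - destination:
        host: hello
        subset: v1
      weight: 99
    - destination:
        host: hello
        subset: v2
      weight: 1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hello
spec:
  host: hello
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2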

There isn't any gateway involved here, just service-to-service communication. Here is the simple Flask app:
server.py

BTW, if I just change the Service port from the standard port 80 or 443 to something else, then everything works fine.
For example:
[screenshot of the Service spec with the port changed to 8181]
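The change on the Service side is roughly this (8080 matches the port the app reports below; the name and selector are just how I've set them up):

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
  ports:
  - name: http-web    # keeps the Istio port-name convention
    port: 8181        # non-standard port instead of 80/443
    targetPort: 8080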

Now curling the service works as expected

/ # curl http://hello:8181/
Hello world!!!
https : False
port : 8080
version : v2
/ # curl http://hello:8181/
Hello world!!!
https : False
port : 8080
version : v1
/ # curl http://hello:8181/
Hello world!!!
https : False
port : 8080
version : v1
/ # curl http://hello:8181/
Hello world!!!
https : False
port : 8080
version : v1

Hello, I'm facing a similar issue: requests within the cluster don't seem to respect the destination rules defined. I've followed the rule for the Service port name, but without success. I expect requests to my service that don't have the specific header property set and don't use the specified prefix to fail; instead, all requests to the hello service return 200 OK, no matter what header and prefix are specified. Any idea what could be wrong? The yaml definition for the Deployment, Service, VirtualService, and DestinationRule is here: https://github.com/arturkociuba/service-mesh/blob/827d58829d695328b8ef19a658b0568857955815/hello.yaml

@arturkociuba I found that both of your two Deployments carry "version: v2"; please check it.

Thanks for the reply, but that is not the issue; I've changed one of the Deployments to have the label v1.
The issue I'm facing is that a request to hello always returns 200, while the goal is to route it to a specific pod (by version) based on the prefix and a header value, roughly as sketched below.
I've added a testing curl to the code.
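The intent is a single matched route (header plus prefix) with no catch-all, so anything else should be rejected; the header name, value, and prefix below are placeholders, the real ones are in the linked hello.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hello
spec:
  hosts:
  - hello
  http:
  - match:
    - headers:
        x-version:      # placeholder header name
          exact: v2     # placeholder value
      uri:
        prefix: /hello  # placeholder prefix
    route:
    - destination:
        host: hello
        subset: v2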

What version of Istio are you using? I think that in Istio 1.4.0 the use of port 443 is forbidden (not sure here about 80).

Edit: that is for the http protocol; https should work on 443.

I am using Istio 1.4.3 (latest). The port on the Service is 8087 and the target port (on the pod) is 80, but I was also testing with others (e.g. 8023) and still no success. It still seems to me like whatever is defined in the VirtualService and DestinationRule is ignored :confused:
code https://github.com/arturkociuba/service-mesh/blob/b791d7def3fbab02e9314aa4fb6eb7ca40004711/hello.yaml

UPDATE:
Not an issue any more; in fact it never was one. The behaviour I observed (ignored rules) happened because I was testing with curl from a pod that did not have the Istio sidecar injected :confused: (lesson learned :wink: )
Thanks
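For anyone else hitting this: the routing rules are applied by the client-side Envoy sidecar, so the quick check is whether the calling pod shows 2/2 containers. A minimal sketch, assuming the default namespace:

kubectl get pods                                           # the calling pod should show READY 2/2 (app + istio-proxy)
kubectl label namespace default istio-injection=enabled    # enable injection, then recreate the client pod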

kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.4", GitCommit:"67d2fcf276fcd9cf743ad4be9a9ef5828adc082f", GitTreeState:"clean", BuildDate:"2019-09-18T14:51:13Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.4-gke.22", GitCommit:"a6ba43f5a24ac29e631bb627c9b2a719c4e93638", GitTreeState:"clean", BuildDate:"2019-11-26T00:40:25Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}

istioctl version
client version: 1.4.2
control plane version: 65b6870f1e3bf73fc688a02d10963ea0158e96c6-dirty
data plane version: 1.2.10-gke.0 (22 proxies)

Confirmed with https: the traffic split is not working for me. I bounced Pilot, and http at least started working.

Here is the yaml for https with port 443 for the k8s Service:

hello-both-https-port.yaml.txt

Running this command from the sleep pod:
curl https://hello -k results in a 50-50 traffic split. Not expected.
Changing the Service port to any other (say 9443) while keeping https does not work either.

Here is the yaml for http with port 80 for the k8s Service:

hello-both-http-port.yaml.txt

Running this command from the sleep pod:
curl http://hello -k results in a 99-1 traffic split. Expected.