Routing not working as expected

I am testing some traffic management concepts using https://github.com/istio/istio/tree/master/samples/helloworld as an example, on istio-1.1.0, on a k8s cluster which has PSP enabled, but it seems as if either my destination rules or my virtual services are not working. After debugging with istioctl proxy-config routes I am able to see the correct route, but when I actually hit the service I don't see the rules being applied.
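(Roughly the check I ran; the pod name below is a placeholder, not the actual pod in my cluster.)

# dump the routes the client pod's Envoy has actually received from Pilot
istioctl proxy-config routes <sleep-pod> -o json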
These are my VirtualService and DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-vs
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v2
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---

One thing to note is that we have PSP enabled on this cluster and it is an on-prem cluster.
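(For anyone reproducing this: a minimal PodSecurityPolicy sketch that would let the stock istio-init container program iptables, assuming default sidecar injection rather than istio-cni. The name is illustrative; the relevant parts are the NET_ADMIN/NET_RAW capabilities and the permissive runAsUser rule, since istio-init runs as root while istio-proxy runs as 1337.)

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: istio-sidecar-init   # illustrative name
spec:
  allowedCapabilities:
  - NET_ADMIN
  - NET_RAW
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'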

Your virtual service does not seem to be using a gateway (as in the examples); why have you removed it?

You could get some clues about misconfiguration from Kiali.

@rvansa the original virtual service has not been removed (helloworld-gateway.yaml, which is shipped with Istio).

All of this works fine on one of our clusters, but on the other one, which has PSP applied, it does not work.

Ok, I'm working with @AshishThakur. Let me clear up some confusion here.

  1. We installed Istio 1.1.0 in our on-prem environment.
  2. We then installed the helloworld sample service and it works as expected. When we curl it, requests are routed to either the v1 or the v2 helloworld.
  3. Then we tried to test request routing. We applied, for example, the virtual service below to route all requests (whether external from curl or internal from another service) to the v2 helloworld, but it does not work. Requests still go to either v1 or v2.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v2

Any suggestions?

FYI https://istio.io/help/ops/component-debugging/
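A quick first pass from that page is istioctl proxy-status, which shows whether each sidecar's configuration is in sync with Pilot:

# every proxy should report SYNCED for CDS/LDS/EDS/RDS; STALE points at a push problem
istioctl proxy-status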

Are you sure iptables are set up properly? And that Envoy is running as user 1337? cc @Deepa_Kalani, who was also tinkering with PSPs on PKS clusters.
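A quick way to check the user part (pod name is a placeholder):

# should print uid=1337(istio-proxy) if the sidecar runs as the expected user
kubectl exec <helloworld-pod> -c istio-proxy -- id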

Yea, I have been tinkering with PSP but don't see any issues (although my deployment might be slightly different), given I'm using istio-cni.

Can you post the destination rules as well?

Actually, I just saw you did… Do you want to try applying the VirtualService to the gateway?

Something like this:

gateways:
- helloworld-gateway
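One caveat with that (a sketch, not something I have verified on this cluster): once a gateways list is set, the rule no longer applies to in-mesh sidecar traffic unless the reserved mesh gateway is also listed, e.g.:

gateways:
- helloworld-gateway
- mesh   # keep the rule applied to sidecar-to-sidecar (pod-to-pod) traffic too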

@Deepa_Kalani
Please find the destination rules:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

and the virtual service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-vs
spec:
  hosts:
  - helloworld
  http:
  - route:
    - destination:
        host: helloworld
        subset: v1

The weird issue is that when I curl from sleep to helloworld… it seems the Envoy proxy is being bypassed.

Could you help me validate whether the iptables rules have been set up properly?

Yes, Envoy is running as user 1337.

nsenter -t {pid} -n iptables -t nat -S

Check the iptables rule setup; the pid is the id of your app process as seen from the host.
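If you are not sure how to get that pid, one way (assuming a Docker runtime; the container id comes from docker ps on the node, and the names here are placeholders):

# run on the node that hosts the pod
PID=$(docker inspect --format '{{.State.Pid}}' <app-container-id>)
nsenter -t $PID -n iptables -t nat -S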

In my env,

-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N ISTIO_IN_REDIRECT
-N ISTIO_OUTPUT
-N ISTIO_REDIRECT
-A OUTPUT -p tcp -j ISTIO_OUTPUT
-A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15001
-A ISTIO_OUTPUT ! -d 127.0.0.1/32 -o lo -j ISTIO_REDIRECT
-A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN
-A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN
-A ISTIO_OUTPUT -j ISTIO_REDIRECT
-A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001

Check the logs of the "helloworld" proxy (kubectl logs -f <helloworld-pod> -n manual -c istio-proxy) and see whether a log line appears when you hit it with curl.

Thanks everybody for the help. It turned out to be the CNI plugin: we had to install the CNI plugin explicitly, as described in https://istio.io/docs/setup/kubernetes/additional-setup/cni/
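(For reference, the 1.1-era Helm flow from that page is roughly the following; chart paths, release names, and namespaces may differ for other versions:)

# install the CNI plugin chart, then enable it when installing Istio itself
helm install install/kubernetes/helm/istio-cni --name istio-cni --namespace kube-system
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set istio_cni.enabled=true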

I have the exact same issue, and I can't get it to work. The virtual service diverts traffic correctly via the Istio ingress gateway (i.e. 100% to v2), but a pod-to-pod curl request always does a 50/50 traffic split between the two subsets of the app.

K8s: 1.15 on a local kind cluster
Istio: 1.2.2 with the CNI plugin and istio_cni.enabled: true

Here is my manifest:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: app-edge
  name: app-edge
  namespace: commontools
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app-edge
      version: v1
  template:
    metadata:
      labels:
        run: app-edge
        version: v1
    spec:
      containers:
      - image: docker.io/kennethreitz/httpbin
        command: ["gunicorn", "--access-logfile", "-", "-b", "0.0.0.0:80", "httpbin:app"]
        imagePullPolicy: Always
        name: httpbin
        ports:
        - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: app-edge-2
  name: app-edge-2
  namespace: commontools
spec:
  replicas: 1
  selector:
    matchLabels:
      run: app-edge
      version: v2
  template:
    metadata:
      labels:
        run: app-edge
        version: v2
    spec:
      containers:
      - image: ajitchahal/nginx-2
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind:  Mapping
      name:  httpbin_mapping
      prefix: /hello/
      service: app-edge.default:80
      tls: upstream
      rewrite: /
  labels:
    run: app-edge
  name: app-edge
  namespace: commontools
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  selector:
    run: app-edge
  type: ClusterIP
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-edge
  namespace: istio-system #commontools
spec:
  hosts:
  - "edge.ajit.de"
  - app-edge.commontools.svc.cluster.local
  gateways:
  - istio-my-gateway
  http:
  - match:
    route:
    - destination:
        port:
          number: 80
        host: app-edge.commontools.svc.cluster.local
        subset: v2
      weight: 100
    - destination:
        port:
          number: 80
        host: app-edge.commontools.svc.cluster.local
        subset: v1
      weight: 0
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: app-edge
  namespace: commontools
spec:
  host: app-edge.commontools.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
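(The pod-to-pod check is just a curl loop from another sidecar-injected pod; the pod name is a placeholder for my own setup:)

# if the VirtualService applied to sidecar traffic, only the v2 (nginx) backend should answer
for i in $(seq 1 10); do
  kubectl exec <sleep-pod> -n commontools -- curl -s http://app-edge.commontools.svc.cluster.local/
done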

Same for me. Is this thread still alive?

I have a similar issue that I reported on this thread

My port name is correct, but the traffic split does not work. It works if I change the service port from 80 or 443 to a non-standard port like 8181.

Here is the bug I opened: Port 80/443 not working with virtual service · Issue #19835 · istio/istio · GitHub

This complete YAML file will reproduce the issue: hello-both-http-port.yaml
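(To illustrate the workaround rather than a fix: the only change that makes the split take effect for me is moving the Service port off 80/443, roughly as below. Names and the target port are illustrative; the real reproduction is in the linked YAML.)

apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld
  ports:
  - name: http
    port: 8181        # with port 80 or 443 the subset routing is ignored; 8181 works
    targetPort: 5000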