Configurable tracing


#1

I have installed Istio with Jaeger. I see traces from Prometheus and the kubelet health probes, but not from actual traffic.
My traffic path is k8s ingress -> k8s service -> pods. Istio is installed via the sidecar/init-container.
Is there a way to disable tracing for the Prometheus and health-check probes?
Also, shouldn't traffic flowing in from the ingress -> service -> pod appear in Jaeger?

I found https://github.com/istio/istio/issues/10336 and applied the rule as follows:

apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  creationTimestamp: "2019-02-13T09:34:07Z"
  generation: 1
  labels:
    app: mixer
    chart: mixer
    heritage: Tiller
    release: istio
  name: promhttp
  namespace: istio-system
  resourceVersion: "24823596"
  selfLink: /apis/config.istio.io/v1alpha2/namespaces/istio-system/rules/promhttp
  uid: 82a5538d-2f72-11e9-8b52-005056b8160d
spec:
  actions:
  - handler: handler.prometheus
    instances:
    - requestcount.metric
    - requestduration.metric
    - requestsize.metric
    - responsesize.metric
  match: (context.protocol == "http" || context.protocol == "grpc") && (match((request.useragent
    | "-"), "kube-probe*") == false) && (match((request.useragent | "-"), "Prometheus*")
    == false)
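For readability, here is the same rule as it would be authored, with the server-generated metadata (timestamps, UID, resourceVersion, selfLink) stripped. Note that the match expression only gates the four Prometheus metric instances listed under `actions`:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: promhttp
  namespace: istio-system
spec:
  # Skip HTTP/gRPC requests whose user agent looks like a kubelet probe
  # or a Prometheus scraper.
  match: >-
    (context.protocol == "http" || context.protocol == "grpc") &&
    (match((request.useragent | "-"), "kube-probe*") == false) &&
    (match((request.useragent | "-"), "Prometheus*") == false)
  actions:
  - handler: handler.prometheus
    instances:
    - requestcount.metric
    - requestduration.metric
    - requestsize.metric
    - responsesize.metric
```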

but I still see the Prometheus probes.

Sorry for not providing more in-depth information - I'm quite new to Istio. I can dig up whatever info is needed.


#2

Please take a look at the FAQ to see if you can discover what may be preventing your traces from showing up: https://preliminary.istio.io/help/faq/distributed-tracing/#no-tracing


#3

Yeah - I should have mentioned that I actually had a look at that. My trace sampling percentage is "100", so my own calls should have been traced. Seeing the Prometheus and kubelet probes show up is proof that tracing is in place. The absurd part is that everything I do NOT want traced is traced, and vice versa! :slight_smile:
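(For reference, the "100" is the trace sampling percentage. A sketch of how it is typically set via Helm values, assuming the stock istio chart; the exact key varies by chart version:)

```yaml
# Helm values sketch - assumption: the official istio chart of this era.
pilot:
  traceSampling: 100.0   # sample 100% of requests
```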


#4

Did you see the bits about port naming and container ports? This is typically what causes issues.
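For reference, Istio infers the protocol from the Service port name: the name must be the protocol itself or a `<protocol>-` prefix (e.g. `http`, `http-web`, `grpc`). A minimal sketch (`myapp` and `http-web` are placeholder names):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp            # placeholder name
spec:
  selector:
    app: myapp
  ports:
  - name: http-web       # "http" or "http-<suffix>" tells Istio this is HTTP
    port: 80
    targetPort: 8080
    protocol: TCP
```

The corresponding `containerPort` must also be declared on the pod spec.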


#5

Yes - from my deployment YAML:

...
ports:
- containerPort: 8080
  name: http

and services.yaml:

ports:
- port: 80
  targetPort: http
  protocol: TCP
  name: http

Also, if that were not right, the Prometheus/kubelet requests should not have shown up either.