[tracing] x-b3-sampled header is always set to 0

I’m not able to make tracing work in Istio 1.6.8.
My setup is [nginx-ingress-controller] -> [proxy<->ServiceA] -> [proxy<->ServiceB]

and here is my Istio config

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 100
  addonComponents:
    tracing:
      enabled: true
    grafana:
      enabled: false
    istiocoredns:
      enabled: false
    kiali:
      enabled: false
    prometheus:
      enabled: false
  values:
    tracing:
      enabled: true
    pilot:
      traceSampling: 100

When I’m sending requests to the ingress controller, I can see that ServiceA receives the following tracing headers from the proxy:

x-b3-traceid: d9bab9b4cdc8d0a7772e27bb7d15332f
x-request-id: 60e82827a270070cfbda38c6f30f478a
x-envoy-internal: true
x-b3-spanid: 772e27bb7d15332f
x-b3-sampled: 0
x-forwarded-proto: http

But x-b3-sampled is always set to 0, although I’ve configured Istio with sampling = 100%.
If I manually add x-b3-sampled: 1 to the request, everything works fine and I can see the trace in the Jaeger UI.
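For reference, the manual test looked roughly like this (hostname and path are just placeholders for my actual service):

curl -v -H 'x-b3-sampled: 1' http://my-ingress.example.com/some/path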

Thanks!

Btw, I also tried deploying Istio with the demo profile and got the same behaviour.
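That was just the standard 1.6 install command, something like:

istioctl install --set profile=demo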

Another data point: when I expose ServiceA through the Istio ingressgateway (instead of the regular ingress), it also works fine. x-b3-sampled is always set to 1 and spans get pushed to Jaeger.
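For reference, the ingressgateway exposure was a plain Gateway + VirtualService, roughly like this (host, service name and port are placeholders):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: service-a-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: service-a
spec:
  hosts:
  - "*"
  gateways:
  - service-a-gateway
  http:
  - route:
    - destination:
        host: service-a
        port:
          number: 8080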

Thanks for the details… this looks more like an nginx issue in that case, since the istio-ingressgateway path works.
Could it be related to https://github.com/kubernetes/ingress-nginx/issues/4933?

Tracing is disabled on the ingress controller side, and I’m 100% sure the ingress controller is not passing any of the x-b3-* headers (including x-b3-sampled). So most probably the Envoy proxy sets them, and for some reason it decides that none of the requests coming from the ingress controller should be traced (x-b3-sampled: 0).

When I replace the ingress controller with any other service and initiate the request to ServiceA from inside the cluster, it works fine. So I think it has something to do with the fact that requests are being forwarded from outside the cluster.

I’m trying to reconfigure the ingress controller to remove all the X-Forwarded-* headers from requests to the upstream services. Hopefully that way the request will look like it was initiated from inside the cluster.

After a few days of digging I’ve figured it out. The problem is the format of the x-request-id header that the nginx ingress controller uses.

The Envoy proxy expects it to be a UUID (e.g. x-request-id: 3e21578f-cd04-9246-aa50-67188d790051), but the ingress controller passes it as an unformatted random string (x-request-id: 60e82827a270070cfbda38c6f30f478a). When I pass a properly formatted x-request-id header in the request to the ingress controller, it gets passed down to the Envoy proxy and the request is sampled as expected. I also tried removing the x-request-id header from the request between the ingress controller and ServiceA with a simple EnvoyFilter, and that also works as expected: the Envoy proxy generates a new x-request-id and the request gets traced.
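To double-check the UUID theory, a request through the ingress controller with a well-formed x-request-id (reusing the example value above; hostname and path are placeholders) gets sampled:

curl -v -H 'x-request-id: 3e21578f-cd04-9246-aa50-67188d790051' http://my-ingress.example.com/some/path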


@arkadi4 I have exactly the same problem, but in my setup I have an Envoy sidecar alongside ingress-nginx. As I understand it, the x-request-id header is generated by Envoy itself and then passed down to the upstream, so why doesn’t Envoy format it as a valid UUID?

It looks like setting generate-request-id to false in the ingress-nginx ConfigMap solves the issue.
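For anyone else hitting this, it’s a single key in the controller’s ConfigMap; the ConfigMap name and namespace below depend on how ingress-nginx was installed:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your install
  namespace: ingress-nginx
data:
  generate-request-id: "false"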