Problems after installing 1.1

Hi, I have recently installed 1.1 and I have some issues:

Every second request fails. Not half of the requests (which would indicate some randomness), but literally every second one gives me a 400 Bad Request with the message (in the body) app prober config does not exists for /my/path. At this point I am not doing anything fancy: just one gateway (on port 80), one VirtualService rewriting the path, and 3 subsets to match. I can see those logs in istio-ingressgateway as well. Strangely enough, Kiali reports all requests as successful.

I can’t get intra-mesh communication to work (I had that problem just before the reinstall, though). I have a pod with an application that can call another HTTP endpoint - /proxy?url=http://some.url:8080 - and passes the result through. When I oc exec into the container I can run curl without issues, but when I call the app itself (through curl localhost:8080/proxy?url= it gives me a 404. Since curl works, the problem is probably not in the sidecar receiving the communication but in the one attached to the pod that is initiating it. Strangely enough, I’d expect the iptables rules to apply to me as an exec’d user as well.

And I forgot to mention one other thing: when I start Pilot with --appNamespace my-namespace and restart istio-ingressgateway, it does not boot; the logs say

Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected

I found that Istio has nothing to do with half of the requests failing; I had an OpenShift route going to the istio-ingressgateway service without the target port specified. When a service exposes multiple ports, the OpenShift router accesses the ports round-robin (I wonder why it accessed only 2 of those 9 ports, though). In any case, it was a misconfiguration elsewhere.

However, the outgoing traffic problem persists. I’ve attached a busybox container, tried curl -v, and from the logs I see that it got a 404 as well.

Btw. the DNS resolution part is okay; I can see in some logs that it fetched the correct IP. Just the request itself fails. I suspect that the Envoy in the sidecar might be messing with it…

And to be complete, I have outboundTrafficPolicy.mode set to ALLOW_ANY.
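For reference, in Istio 1.1 this setting lives in the mesh config held in the “istio” ConfigMap in the istio-system namespace (a minimal excerpt, not a complete mesh config):

apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    # only the relevant key shown
    outboundTrafficPolicy:
      mode: ALLOW_ANY

After editing it, Pilot needs a restart to pick the change up.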

Hey @rvansa, can you please let me know where you added this config to make it take effect? I don’t do the installation with helm. Let me know which annotation/config-map I need to add this to.

I found out … I changed the “istio” config-map and restarted Pilot. Still I am not able to reach outside.

Maybe because of the warning here:

“Some ports, for example port 80, have HTTP services inside Istio by default. Because of this caveat, you cannot use this approach for services using those ports.”

If you want to avoid this problem, you can create a Sidecar resource to not import these services into the namespace. Something like:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: side
spec:
  egress:
  - hosts:
    - istio-system/istio-telemetry.istio-system
    - istio-system/istio-mixer.istio-system
    port:
      number: 9091
      name: grpc-mixer
      protocol: GRPC
  - hosts:
    - istio-system/istio-telemetry.istio-system
    - istio-system/istio-mixer.istio-system
    port:
      number: 15004
      name: grpc-mixer-mtls
      protocol: GRPC
  - hosts:
    - "./*"

Should work

Hey @howardjohn … Can you have a look at “Istio-1.1 - "Kind: sidecar" implementation issues” . I have a query regarding “kind: Sidecar”.

I will be really grateful if I can find a solution using this.


@howardjohn Thank you for your answer, though it’s probably related only to reaching port 80; I have tried to reach another external service on port 8888 and that worked.

However, I have trouble with intra-mesh traffic being blocked (therefore without configuring egress); I would assume that without extra configuration any service in the mesh should be able to reach any other service within the mesh. The service I am trying to reach is not even running on the ‘special’ port 80, though there are other services in the cluster (in other namespaces) that use port 8080, too.

Hi @rvansa, @howardjohn @Sourabh_Wadhwa,

I got the same issue with the following scenario:

  • Installed Istio Version: 1.1.3
  • Kubernetes dist: OpenShift 3.11
  • I have exposed ingress gateway using oc route:
oc create route edge --service=istio-ingressgateway --insecure-policy=Redirect
  • outboundTrafficPolicy is set to REGISTRY_ONLY
  • Jaeger tracing is backed by a managed Elasticsearch instance on AWS and a ServiceEntry has been defined to enable the traffic to that instance through the egress controller.
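For completeness, that ServiceEntry looks roughly like the following sketch (the name, host, and port are placeholders, not my actual values):

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: es-aws            # placeholder name
spec:
  hosts:
  - my-domain.us-east-1.es.amazonaws.com   # placeholder host
  ports:
  - number: 443
    name: https
    protocol: TLS
  location: MESH_EXTERNAL
  resolution: DNS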

When I exposed my service through an oc route, I was able to reach it, but when trying through the ingress gateway I got the mentioned error message: app prober config does not exists for.

My first intention was to upgrade to Istio 1.1.6, but I still had the issue. Then I figured out that the route was not pointing at port 80 of the ingress gateway; the port used was 15020.

So it seems that the OpenShift route picks one of the ports randomly. After explicitly setting the route port to 80, I was able to access the service through the ingress gateway.

oc create route edge --service=istio-ingressgateway --insecure-policy=Redirect --port=80

Hope that will help.

@rafik8 I’ve found that my troubles were caused by using absolute URLs in the request and a missing mesh gateway in the VirtualService; see Blocking of ports in mesh (pre-1.1.3) for details.
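To spell out the gateway part of the fix: the VirtualService needs the built-in mesh gateway listed next to the ingress gateway, otherwise its routes only apply to traffic entering through the gateway and sidecar-to-sidecar requests fall through. A sketch with hypothetical names:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service          # hypothetical name
spec:
  hosts:
  - my-service.my-namespace.svc.cluster.local
  gateways:
  - my-gateway   # traffic coming in through the ingress gateway
  - mesh         # built-in gateway matching intra-mesh (sidecar) traffic
  http:
  - route:
    - destination:
        host: my-service.my-namespace.svc.cluster.local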

Thank you for the update.