Hi, I have recently installed Istio 1.1 and I have some issues:
Every second request fails. Not half of the requests (that would indicate some randomness), but really every second one gives me 400 Bad Request with a message (in the body) app prober config does not exists for /my/path. At this point I am not doing anything fancy; just one Gateway (on port 80), one VirtualService rewriting the path, and three subsets to match. I can see those logs in istio-ingressgateway as well. Strangely enough, Kiali reports all requests as successful.
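For reference, the setup is roughly this (names and the path are placeholders, not my actual config; the subsets come from a DestinationRule not shown here):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - "*"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /my/path
    rewrite:
      uri: /
    route:
    - destination:
        host: app.my-namespace.svc.cluster.local
        subset: v1   # one of the three subsets defined in a DestinationRule
```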
I can’t get intra-mesh communication to work (I had that problem just before the reinstall, too). I have a pod with an application that can call another HTTP endpoint (/proxy?url=http://some.url:8080) and pass the result through. When I oc exec into the container, I can run curl http://app.my-namespace.svc.cluster.local:8080/foo without issues, but when I call the app through itself (via curl localhost:8080/proxy?url=http://app.my-namespace.svc.cluster.local:8080/foo) it gives me 404. Since the plain curl works, the problem is probably not in the sidecar receiving the communication but in the one attached to the pod initiating it. Strangely enough, I’d expect the iptables rules to apply to me as the exec’d user as well.
And I forgot to mention one other thing: when I start Pilot with --appNamespace my-namespace and restart istio-ingressgateway, it does not boot; the logs say
Envoy proxy is NOT ready: config not received from Pilot (is Pilot running?): cds updates: 0 successful, 0 rejected; lds updates: 0 successful, 0 rejected
It turns out Istio had no hand in every second request failing; I had an OpenShift Route pointing to the istio-ingressgateway service without a target port specified. When a service exposes multiple ports, the OpenShift router accesses the ports round-robin (I wonder why it hit only 2 of those 9 ports, though). In any case, it was a misconfiguration elsewhere.
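For anyone hitting the same thing, pinning the Route to a single target port fixes it; a sketch (in a default 1.1 install the service port named http2 is port 80, but check the port names on your own istio-ingressgateway service):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  to:
    kind: Service
    name: istio-ingressgateway
  port:
    targetPort: http2   # pin to the service port named "http2" (port 80)
```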
However, the outgoing traffic problem persists. I attached a busybox container trying curl -v http://google.com/ and from the logs I see that it got a 404 as well.
Btw. the DNS resolution part is okay; I can see in the logs that it fetched the correct IP. Just the request itself fails. I suspect that Envoy in the sidecar might be messing with it…
And to be complete, I have outboundTrafficPolicy.mode set to ALLOW_ANY.
Hey @rvansa, can you please let me know where you added this config to take effect? I don’t install with helm. Let me know what annotation/config-map I need to add for this to take effect.
UPDATE:
I found out … I changed the “istio” config-map and restarted Pilot. Still I am not able to reach outside.
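For reference, the setting lives in the mesh config under the mesh key of the “istio” ConfigMap in istio-system; a rough excerpt based on a default 1.1 install (only the relevant part shown):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    # ... other mesh settings ...
    outboundTrafficPolicy:
      mode: ALLOW_ANY
```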
“Some ports, for example port 80, have HTTP services inside Istio by default. Because of this caveat, you cannot use this approach for services using those ports.”
If you want to avoid this problem, you can create a Sidecar resource to not import these services into the namespace. Something like:
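```yaml
# Sketch: only import services from this namespace (and istio-system),
# so the mesh-wide port-80 HTTP services are not pulled in.
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: my-namespace
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
```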
@howardjohn Thank you for your answer, though that’s probably related only to reaching google.com. I have tried to reach another external service on port 8888 and that worked.
However, I have trouble with intra-mesh traffic being blocked (so egress configuration shouldn’t even be involved); I would assume that without extra configuration any service in the mesh should be able to reach any other service within the mesh. The service I am trying to reach (http://app.my-namespace.svc.cluster.local:8080) is not even running on the ‘special’ port 80, though there are other services in the cluster (in other namespaces) that use port 8080, too.
Jaeger tracing is backed by a managed Elasticsearch instance on AWS, and a ServiceEntry has been defined to allow traffic to that instance through the egress gateway.
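Roughly like this; the hostname below is a placeholder, not the real endpoint:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: aws-elasticsearch
spec:
  hosts:
  - vpc-tracing-abc123.eu-west-1.es.amazonaws.com  # placeholder AWS ES endpoint
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
```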
When I expose my service through an oc route, I am able to reach it, but when going through the ingress gateway I get the mentioned error message: app prober config does not exists for.
My first idea was to upgrade to Istio 1.1.6, but I still had the issue. Then I figured out that the requests were not reaching the ingress gateway on port 80; the port used was 15020 (apparently the pilot-agent status port, which would explain the app prober error message).
So it seems that the OpenShift route picks one of the ports at random. After explicitly setting the route’s target port to 80, I was able to access the service through the ingress gateway.
@rafik8 I’ve found that my troubles were caused by using absolute URLs in the request and by a missing mesh gateway in the VirtualService; see Blocking of ports in mesh (pre-1.1.3) for details.
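In short: when a VirtualService lists any gateways, traffic from sidecars only matches it if the reserved mesh gateway is listed too. A sketch with placeholder names:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
  - app.my-namespace.svc.cluster.local
  gateways:
  - my-gateway
  - mesh   # reserved name; without this, sidecar-to-sidecar traffic ignores the rules
  http:
  - route:
    - destination:
        host: app.my-namespace.svc.cluster.local
```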