I upgraded Istio from 1.0.6 to 1.1.1 with Helm. The upgrade was successful, and I recreated the pods so the sidecars would be refreshed, since I have sidecar auto-injection enabled. However, after that, those pods can no longer connect to RabbitMQ. My RabbitMQ is in another namespace that does not have sidecar injection enabled. I also tried a ServiceEntry, both with and without a VirtualService, with no luck!
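For reference, the ServiceEntry I tried looked roughly like this (a sketch, not the exact manifest I applied; the host and ports are from my RabbitMQ install, and MESH_INTERNAL is because it lives inside the cluster):

# Rough sketch of the ServiceEntry attempt (not a confirmed fix)
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: rabbitmq-amqp
spec:
  hosts:
  - rabbitmq-cluster-rabbitmq-ha.default.svc.cluster.local
  location: MESH_INTERNAL
  resolution: DNS
  ports:
  - number: 5672
    name: tcp-amqp
    protocol: TCP
  - number: 15672
    name: http-management
    protocol: HTTP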
I also tunneled to Kiali and got “404 page not found”, which was working before the upgrade.
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001
Any help or a clue is appreciated.
What URL are you using to connect to Kiali? Remember to use the /kiali context root (which is the default)… so with the port forward it's something like http://localhost:20001/kiali/console
Kiali is working fine; I just did not notice the URL change. Thanks, @jmazzitelli. Now I need to figure out the RabbitMQ issue.
The output of ./istioctl proxy-config clusters -n istio-system istio-ingressgateway-794cfcf8bc-v25kr looks normal:
rabbitmq-cluster-rabbitmq-ha.default.svc.cluster.local 5672 - outbound &{ORIGINAL_DST}
rabbitmq-cluster-rabbitmq-ha.default.svc.cluster.local 15672 - outbound &{ORIGINAL_DST}
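I can inspect the sidecar of an affected application pod the same way (pod and namespace names below are placeholders, not from my cluster):

# Check what the app pod's sidecar has for port 5672 and for the RabbitMQ cluster
./istioctl proxy-config listeners -n <app-namespace> <app-pod> --port 5672
./istioctl proxy-config clusters -n <app-namespace> <app-pod> | grep rabbitmq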
I can also connect to RabbitMQ from other pods that are not Istio sidecar-injected. RabbitMQ is healthy and I can tunnel to the management port, so it looks like there is no issue on the RabbitMQ side. Only the sidecar-enabled pods cannot connect to RabbitMQ. curl works against the management port, and nc can connect to port 5672.
What am I missing? For information, RabbitMQ port 5672 uses the AMQP protocol.
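One thing I still need to double-check (just an assumption on my part, not a confirmed cause): Istio picks the protocol from the Service port name prefix, so the AMQP port of the RabbitMQ Service should carry a tcp prefix to be treated as raw TCP. A sketch of the relevant part (the selector is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster-rabbitmq-ha
  namespace: default
spec:
  selector:
    app: rabbitmq-ha        # illustrative; must match the actual chart labels
  ports:
  - name: tcp-amqp          # tcp prefix: treat 5672 as plain TCP (AMQP)
    port: 5672
    targetPort: 5672
  - name: http-management   # the management UI on 15672 is plain HTTP
    port: 15672
    targetPort: 15672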
I reverted back to Istio 1.0.6 with the same settings and RabbitMQ works again. What am I missing? Something must have changed in Istio 1.1.1.
You can try to debug it; refer to the help doc.
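A couple of commands that are usually a good starting point (pod and namespace names are placeholders; the second command assumes curl is available in the proxy image):

# Sidecar (Envoy) logs of an affected pod
kubectl logs <app-pod> -n <app-namespace> -c istio-proxy
# Raise Envoy's log level through its admin endpoint on port 15000 for more detail
kubectl exec <app-pod> -n <app-namespace> -c istio-proxy -- curl -s -X POST localhost:15000/logging?level=debug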
Hey @Nahidul_Kibria - did you deploy RabbitMQ with mTLS? This is an issue I'm trying to solve (for stateless services).
Cool, glad to have a buddy with a similar issue. I did not use mTLS. I also saw issues like “Readiness probe failed: HTTP probe failed with statuscode: 503” while creating pods (not all of them); I suspect my nodes do not have enough CPU. I tried --set global.proxy.readinessInitialDelaySeconds=30, but no luck. I failed twice upgrading from 1.0.6 to 1.1.1, then reverted back to 1.0.6 because I cannot stop the team from using the cluster. I will try again in a new cluster. I use EKS with Kubernetes 1.11. However, Docker Desktop on my Mac does not have the issue. I'm eagerly waiting to resolve it.
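For completeness, the flag was passed during the Helm upgrade roughly like this (release name and chart path are the defaults from the Istio install docs, not necessarily exactly what I ran):

helm upgrade istio install/kubernetes/helm/istio \
  --namespace istio-system \
  --set global.proxy.readinessInitialDelaySeconds=30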
Try following this:
I've managed to set up RabbitMQ with the Istio sidecar, but without TLS unfortunately, so I'm abandoning Istio for now; maybe I will check out Consul.
I place RabbitMQ in another namespace and do not inject the sidecar there. All of my setup runs very well on 1.0.6. I'm not leaving Istio; I'm in love with it.
I've actually managed to solve the mTLS issue; it's all here:
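In short, the usual pattern for a backend that has no sidecar is a DestinationRule that turns off mTLS towards that host (this is a generic sketch reusing the RabbitMQ host from earlier in the thread; the write-up linked above may differ in the details):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: rabbitmq-no-mtls
spec:
  host: rabbitmq-cluster-rabbitmq-ha.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE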