Getting Kafka to "work" with Istio

I have been trying to get Istio working with the microservices in a k8s cluster that also runs Kafka. All of the microservices (apps) use Kafka as their message bus, and when I inject Istio into just the app pods they stop working.
I am looking for the right settings to allow the Kafka protocol to flow through the apps' Istio sidecars without being altered.
If this is not the right forum for this type of question, could you please point me to an alternative?

Thank you. – Pippin Wallace

How are you referring to Kafka within K8s? I have Kafka deployed outside GKE but in the same VPC, and after several tries I was able to create an appropriate ServiceEntry that allows the application pods to access Kafka.

Thanks, Animesh, for replying!
I have Kafka running inside the same GKE cluster where all my apps are deployed, so they are all on the same subnet, and prior to Istio sidecar injection they could communicate freely with each other over SSL.
The apps just publish and consume messages on Kafka to communicate with each other; it is all event driven.
When I injected the Istio sidecar into two apps that communicate via Kafka, the apps stopped working.
I am new enough to this that I am not sure how best to troubleshoot.
Thanks again – Pippin

How is Kafka deployed? Can you ping the Kafka brokers from your application pods? Istio hooks into the K8s service registry and knows how to reach services registered there. In my case, since Kafka was deployed outside of GKE, I had to set up a ServiceEntry like the following to let Istio know that such a service exists:

apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kafka-external
spec:
  hosts:
  - "$KAFKA_BROKER_HOSTNAME"
  ports:
  - number: 8082
    name: kafka-rest
    protocol: http
  - number: 9092
    name: kafka
    protocol: tcp
  - number: 2181
    name: kafka-zk
    protocol: tcp
  location: MESH_EXTERNAL
  resolution: NONE

Kafka is deployed into its own namespace in the cluster and the apps are in their own namespaces; however, all application pods and Kafka pods, including the brokers, share the same internal IP network space.
This allows direct ICMP pings between the application pods and Kafka pods over their shared internal IP subnet. Cluster-internal DNS / service discovery is also used when the pods communicate with each other.
Maybe what I got wrong was not injecting sidecars into the Kafka pods.
My thinking was to pick just two of our many apps to test Istio with, by only injecting the sidecars into those two apps. I assumed that if I injected an Envoy sidecar into a Kafka pod it would "break" things due to Envoy not being able to handle the Kafka protocol.
Am I not thinking about this correctly?
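(For completeness, the mechanism I understand for keeping a workload like Kafka out of the mesh is either not labelling its namespace for automatic injection, or explicitly opting its pods out with the standard annotation. A rough sketch of just the relevant pod-template fragment of a Deployment/StatefulSet, names illustrative only:)

  template:
    metadata:
      annotations:
        # Tell the Istio injection webhook to skip these pods entirely
        sidecar.istio.io/inject: "false"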
Regards – Pippin

Hi Pippin,

I faced a problem with my gRPC client, running in a Node.js server (inside a pod), communicating with other microservices after injecting the Istio Envoy proxy as a sidecar into my pods with mTLS enabled.
Disabling mTLS worked for me.
Can you verify whether you have mTLS enabled for your service using the following command?

istioctl authn tls-check

To disable mTLS you need to delete the 'MeshPolicy' object named 'default'.
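For reference, that mesh-wide object usually looks something like the sketch below (assuming the v1alpha1 authentication API in Istio 1.0/1.1). Deleting it with kubectl delete meshpolicy default, or changing the mode to PERMISSIVE, relaxes the strict mTLS requirement:

apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}   # an empty mtls block defaults to STRICT; mode: PERMISSIVE accepts plaintext too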

Thanks,
Amit

I tried several options; in the end I managed to make it work by configuring the traffic at the sidecar level for each pod. For example, this deployment's pods can connect to my Kafka; notice the traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0 annotation:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kapi
  labels:
    app: kapi
    version: v1alpha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kapi
      version: v1alpha
  template:
    metadata:
      labels:
        app: kapi
        version: v1alpha
      annotations:
        # Outbound traffic to these IP ranges bypasses the Envoy sidecar entirely
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
    spec:
      serviceAccountName: kapi
      containers:
      - image: MY_IMAGE_REFERENCE_HERE
        imagePullPolicy: Always
        name: kapi
        ports:
        - containerPort: 8080

Use a more restrictive CIDR range to suit your own environment!
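For example, something along these lines excludes only the cluster-internal ranges instead of all outbound traffic (the CIDRs below are placeholders; substitute your cluster's actual Pod and Service ranges):

      annotations:
        # Placeholder CIDRs: replace with your cluster's real Pod and Service ranges
        traffic.sidecar.istio.io/excludeOutboundIPRanges: 10.16.0.0/14,10.19.240.0/20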

Hope it helps!