K8S Istio sidecar injection with other init containers


I have istio-injection: enabled in my project’s namespace.
I have a job that uses an initContainer and a normal container (to provide some ordering).

This seems to be incompatible with Istio sidecar…
It seems that istio-init is an initContainer which finishes before istio-proxy.
istio-proxy must be running before anything else on the pod can connect to outside the pod, correct?

So, because I have my own initContainer that must finish before the normal containers are started, and this container requires network access, it fails because istio-proxy is not yet running.

Are there some configuration options within Istio that would help here?
As far as I know, there is no way to enforce container ordering other than the split between initContainers and containers.

Or is the answer: exclude jobs that use initContainers from sidecar injection, accepting the loss of traffic monitoring/security?

Thanks for any help.


There is no easy solution. Your init container will run before the sidecar starts. If your container runs before Istio’s init container it will not be secure. If your container runs after Istio’s it will not have network access.

If you can avoid doing network I/O in your init containers you should. If you must use init containers that expect connectivity, you’ll need a work-around.

If an application uses an init container for orchestration (for example, trying to contact a remote CouchDB before starting an app that depends on immediate connectivity to CouchDB), pod initialization will never complete.
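If you do decide to opt a Job out of injection entirely, as you suggested, the standard way is the per-pod injection annotation. A minimal sketch (the Job name and image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-init-job                      # placeholder name
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false" # skip sidecar injection for this pod only
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: my-job-image              # placeholder image
```

Note the annotation goes on the pod template's metadata, not on the Job itself, since the injector acts on pods.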

If your init container is just a tiny script, you can sometimes move its work into the app container with a hack like the following in your Deployment YAML.

command: ["/bin/bash", "-c"]
args: ["until curl --head localhost:15000 ; do echo Waiting for Sidecar; sleep 3 ; done ; echo Sidecar available; ./init-stuff.sh && ./startup.sh"]

(where init-stuff.sh gets replaced by whatever your init used to do, and startup.sh is whatever command starts your app container).
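In context, that command/args pair sits in the app container's spec. A sketch of the relevant fragment, with a literal block for readability (image and script names are placeholders for whatever your init and app actually run):

```yaml
containers:
- name: app
  image: my-app-image            # placeholder
  command: ["/bin/bash", "-c"]
  args:
  - |
    # Poll the Envoy admin port until the sidecar is up,
    # then run the former init work and start the app.
    until curl --head localhost:15000; do
      echo Waiting for Sidecar
      sleep 3
    done
    echo Sidecar available
    ./init-stuff.sh && ./startup.sh
```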


There are a number of things in the pipeline that will help address this, including some changes being proposed in upstream Kubernetes; however, this remains an issue for current Istio installations.

One topic of interest is the sidecar proposal, which will allow us to mark containers specifically as sidecars and have them follow a different lifecycle from the application containers (and init containers). This is likely many months away, if not further out.

Another is enhancements we are making to the CNI plugin so it can start and stop the proxy. We have a PoC of this running now but have one last problem to solve before we bring it to the community: accounting for the proxy's resource usage. This will also help us remove the requirement for the application’s service account to have elevated privileges.

The same person on our team is involved with both efforts, Marko Luksa.


Thank you @ed.snible for the suggestion; it helps us move forward for now.
Thank you @kconner for the information. We’ll be keeping an eye on this and looking forward to that future.


I had the same problem with a Node.js microservice inside an Istio mesh that needs to connect to a MongoDB instance outside the mesh at startup. My Node.js application always started before the Istio proxy, and the connection to MongoDB used to fail.
I solved my problem by using an HTTP livenessProbe that calls a route (/ping) which is registered only once the Mongo connection succeeds.
So, when the pod is deployed, the Mongo connection fails, then the Istio proxy starts, then the probe fails, which restarts the Node.js container. After the restart, the Mongo connection succeeds.
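For reference, a liveness probe along those lines might look like the sketch below; the port, path, and timings are assumptions based on the description, not the poster's actual values:

```yaml
livenessProbe:
  httpGet:
    path: /ping          # route only registered after the Mongo connection succeeds
    port: 3000           # assumed Node.js app port
  initialDelaySeconds: 5 # give the app a moment before probing
  periodSeconds: 5
  failureThreshold: 3    # ~15s of failures before the container is restarted
```

The effect is that the kubelet keeps restarting the app container until the sidecar is up and the dependency connection can complete.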
