Getting false TCP connection from within container

Hi Experts,

I am facing a situation where I have to check TCP connectivity from within a container to a remote application. The source container runs with an istio-proxy sidecar.

When the sidecar is present and I telnet to any IP and port, the output always shows connected, irrespective of whether that IP is reachable and whether the port is open on the remote end.

Istio version I am using is 1.6.2.
Kubernetes version is 1.17.4.

How to replicate:

Create two namespaces, one with the label istio-injection=enabled and the other without that label.
Deploy an nginx pod in both namespaces.
Install telnet in both pods (apt-get update, apt install telnet).
Now telnet will show connected for all IPs and ports from the pod in the namespace where istio-injection is enabled, while from the nginx pod in the second namespace telnet works fine (as expected).
The ncat utility gives the same results as telnet.

Requesting further guidance on this.

Does this seem related to your issue? Envoy is potentially trying to inspect your connection.

Hi @nick_tetrate

Thanks for pointing out the related issue.

I have gone through the link, and it could be related to my problem. I am also testing by creating policies (but facing issues [1]).

But I suspect that issue is more about the case where the destination has an istio-proxy sidecar. In my case the source application has the sidecar, and I get the issue while connecting to any service, whether it is inside or outside the cluster.

I have also gone through that, which says to disable mTLS or use strict mTLS for MySQL-related issues when an application tries to connect to MySQL.

Is there any way to stop inspection of packets at the source level itself? In the policy, the service referenced is the destination service.
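
From the docs, it looks like interception can be narrowed at the source via pod annotations; a minimal sketch, assuming the traffic.sidecar.istio.io annotations apply to my case (the CIDR and port values below are placeholders, not from my setup):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  annotations:
    # Outbound traffic to these destination CIDRs bypasses the istio-proxy
    # sidecar entirely, so telnet/ncat would see the real remote endpoint.
    traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.0.0.0/8"
    # Outbound traffic to these ports also bypasses interception.
    traffic.sidecar.istio.io/excludeOutboundPorts: "3306"
spec:
  containers:
  - name: nginx
    image: nginx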

[1](Error from server: error when creating “policy.yaml”: admission webhook “” denied the request: unrecognized type Policy)

apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: nginx-nomtls-authn
  namespace: my-namespace
spec:
  targets:
  - name: # The name of your K8s Service

Hi @nick_tetrate

I found out that Policy is deprecated and has been replaced by PeerAuthentication, from here.

I applied PeerAuthentication in STRICT mode and re-created the pods, but I am still seeing the same behavior (tried applying this in both the source and destination pods’ namespaces).
I didn’t find a DISABLED mode in PeerAuthentication, so I tested with STRICT only.

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: nginx-nomtls-authn
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      run: nginx
  mtls:
    mode: STRICT
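
Looking at the API reference again, the mode seems to be spelled DISABLE rather than DISABLED; a sketch of what I would expect a disable policy to look like, assuming the v1beta1 PeerAuthentication API:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: nginx-nomtls-authn
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      run: nginx
  mtls:
    # Modes in the v1beta1 API: UNSET, DISABLE, PERMISSIVE, STRICT
    mode: DISABLE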


Can anyone help with this?