My application has the following ports open: 8088:31097/TCP, 19888:32150/TCP, 8042:32604/TCP. Without Istio installed, all the pods in my Kubernetes cluster are able to connect to this service on port 8088: curl http://service.default.svc.cluster.local:8088/ succeeds and returns a webpage.
After I enable istio-injection and redeploy my application, the sidecar is added and the curl command on port 8088 now fails to return the HTML response. No other change was made besides adding Istio. Here is the sidecar config from the Kubernetes Dashboard. Can you please help me find out what is happening?
proxy
sidecar
--domain
$(POD_NAMESPACE).svc.cluster.local
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
hadoop.$(POD_NAMESPACE)
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15010
--zipkinAddress
zipkin.istio-system:9411
--proxyLogLevel=warning
--proxyComponentLogLevel=misc:error
--connectTimeout
10s
--proxyAdminPort
15000
--concurrency
2
--controlPlaneAuthPolicy
NONE
--dnsRefreshRate
300s
--statusPort
15020
--applicationPorts
8088,7077,6066,8080
--trust-domain=cluster.local
Hugo
March 20, 2020, 11:30am
2
Hello,
What do you mean by
all my other pods in the cluster are unable to make the curl command at port 8088
Do you have an error code or a timeout? Are your pods up and running?
Regards,
Hugo
Hi Hugo,
I should have been more specific. I am executing this curl command. It fails when I run it after exec-ing into the resource-manager-0 container, into the livy container (which is all green and running fine), and into a sample app container.
curl http://resource-manager-0.resource-manager.voting.svc.cluster.local:8088/cluster [FAILS: curl: (56) Recv failure: Connection reset by peer]
Please help. If you can direct me to more logs that would clearly show what is blocking this, it would be really helpful.
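If it helps, these are the commands I can run to gather more detail (assuming the sidecar container is named istio-proxy, which I believe is the default):
curl -v http://resource-manager-0.resource-manager.voting.svc.cluster.local:8088/cluster
kubectl logs resource-manager-0 -n voting -c istio-proxy --tail=100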
Hugo
March 23, 2020, 5:03pm
4
Hello,
Do you have any PeerAuthentication policies?
kubectl get peerauthentication --all-namespaces
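A namespace- or mesh-wide STRICT mTLS policy would explain plain-text requests being reset by the sidecar. Just to illustrate what to look for, such a policy would look roughly like this (an illustrative sketch, not something taken from your cluster):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT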
Regards,
Hugo
Hugo
March 23, 2020, 9:31pm
6
Hello,
Is it possible to have the YAML file corresponding to your service and deployment?
Regards,
Hugo
Hugo
March 25, 2020, 3:05pm
7
Hello,
Could you run the following command?
istioctl proxy-config listeners <pod_name> --type HTTP -o json | jq ".[].address"
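On a working sidecar you would normally expect the application port to show up next to the 15090 telemetry listener, roughly like this (the pod IP is illustrative):
{
  "socketAddress": {
    "address": "<pod-ip>",
    "portValue": 8088
  }
}
{
  "socketAddress": {
    "address": "0.0.0.0",
    "portValue": 15090
  }
}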
Thanks,
Hugo
istioctl proxy-config listeners resource-manager-0 -n voting --type HTTP -o json | jq ".[].address"
{
  "socketAddress": {
    "address": "0.0.0.0",
    "portValue": 15090
  }
}
Hugo
March 26, 2020, 5:52pm
9
Hello,
It's weird, because we should also see the port used by the container (8088 in your case). It seems that the configuration isn't correct; in any case, it explains why your pod can't be reached through the proxy.
Is it possible to have the YAML file corresponding to your service and deployment?
Could you run the command kubectl get deploy -n istio-system?
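If you want to dig further in the meantime, you can also dump the raw Envoy configuration from the sidecar's admin endpoint (port 15000, per your proxy args). A sketch, assuming you can port-forward from your workstation:
kubectl port-forward resource-manager-0 -n voting 15000:15000 &
curl -s localhost:15000/config_dump > config_dump.json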
Regards,
Hugo
PS D:\Git\hdiaks-core-services\hadoop> kubectl get deploy -n istio-system
NAME READY UP-TO-DATE AVAILABLE AGE
istio-citadel 1/1 1 1 9d
istio-galley 1/1 1 1 9d
istio-ingressgateway 1/1 1 1 9d
istio-pilot 1/1 1 1 9d
istio-policy 5/5 5 5 9d
istio-sidecar-injector 1/1 1 1 9d
istio-telemetry 1/1 1 1 9d
istio-tracing 1/1 1 1 9d
kiali 1/1 1 1 9d
prometheus 1/1 1 1 9d
My YAML file for the service and the StatefulSet looks like this:
apiVersion: v1
kind: Service
metadata:
  name: resource-manager
  # annotations:
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app: resource-manager
spec:
  # type: ClusterIP
  ports:
  - name: rm-scheduler
    port: 8030
    protocol: TCP
    targetPort: 8030
  - name: rm
    port: 8032
    protocol: TCP
    targetPort: 8032
  - name: rm-resoure-tracker
    port: 8031
    protocol: TCP
    targetPort: 8031
  - name: rm-web
    port: 8088
    protocol: TCP
    targetPort: 8088
  clusterIP: None
  selector:
    app: resource-manager
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: resource-manager
  labels:
    app: resource-manager
spec:
  replicas: 1
  serviceName: "resource-manager"
  selector:
    matchLabels:
      app: resource-manager
  template:
    metadata:
      labels:
        app: resource-manager
    spec:
      # dnsPolicy: ClusterFirst
      containers:
      - name: resource-manager
        image: anushreeacr.azurecr.io/anushreehdi:v1
        imagePullPolicy: Always
        # livenessProbe:
        #   httpGet:
        #     path: /ws/v1/cluster
        #     port: 8088
        #   initialDelaySeconds: 15
        #   timeoutSeconds: 3    # Default is 1
        #   periodSeconds: 5     # Default is 10
        #   failureThreshold: 5  # Default is 3
        # readinessProbe:
        #   httpGet:
        #     path: /ws/v1/cluster
        #     port: 8080
        #   initialDelaySeconds: 3
        #   periodSeconds: 5     # Default is 10
        #   failureThreshold: 5  # Default is 3
        ports:
        - containerPort: 9099
          name: web
        command:
        - "/bin/bash"
        args:
        - "-c"
        - " /opt/hadoop/etc/hadoop/start-resourcemanager.sh && tail -f /opt/hadoop/logs/*"
        resources:
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
        volumeMounts:
        - name: docker
          mountPath: /tmp
      volumes:
      - name: docker
        hostPath:
          path: /tmp
For Istio I actually made zero configuration changes; I only added a very simple path route for the Istio Gateway. I just installed it following this tutorial: About service meshes - Azure Kubernetes Service | Microsoft Learn.
Hugo
March 26, 2020, 6:18pm
11
Thanks for the information.
Could you please apply this YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  labels:
    app: hello
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: hello
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello
  labels:
    app: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        name: hello
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: "200m"
            memory: "200Mi"
          limits:
            cpu: "500m"
            memory: "500Mi"
Then, give me the results of the following commands:
kubectl get pod -l "app=hello"
istioctl proxy-config listeners $(kubectl get pod -l "app=hello" --output=jsonpath={.items..metadata.name}) --type HTTP -o json | jq ".[].address"
kubectl exec -it resource-manager-0 sh curl hello:8080
Thanks,
Hugo
PS D:\Git\hdiaks-core-services\hadoop> kubectl get pod -l "app=hello" -n voting
NAME READY STATUS RESTARTS AGE
hello-d77cdd79c-vp6v9 2/2 Running 0 26s
anusri@ANUSRI-Z840:~/istio-1.4.0$ istioctl proxy-config listeners hello-d77cdd79c-vp6v9 -n voting --type HTTP -o json | jq ".[].address"
{
  "socketAddress": {
    "address": "10.244.14.153",
    "portValue": 8080
  }
}
{
  "socketAddress": {
    "address": "0.0.0.0",
    "portValue": 15090
  }
}
anusri@ANUSRI-Z840:~/istio-1.4.0$ kubectl exec -it resource-manager-0 -n voting sh curl hello:8080
Defaulting container name to resource-manager.
Use 'kubectl describe pod/resource-manager-0 -n voting' to see all of the containers in this pod.
sh: 0: Can't open curl
command terminated with exit code 127
Weird, but when I opened a shell in the container instead, I was able to get a successful reply:
E:\HDIProjects\KubernetesHDI\Istio\DebuggingWithHugo>kubectl exec -it resource-manager-0 -n voting -- /bin/bash
Defaulting container name to resource-manager.
Use 'kubectl describe pod/resource-manager-0 -n voting' to see all of the containers in this pod.
root@resource-manager-0:/# curl hello:8080
Hello, world!
Version: 1.0.0
Hostname: hello-d77cdd79c-vp6v9
Hmm, in your case we do see port 8080 in the listeners… What's wrong with my YAML? Did I miss some label?
Hugo
March 26, 2020, 6:50pm
13
Could you change the kind from StatefulSet to Deployment?
Then, list the listener configuration:
istioctl proxy-config listeners <pod_name> --type HTTP -o json | jq ".[].address"
I’m wondering if your issue is related to https://github.com/istio/istio/issues/10659 .
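If it is that issue, one thing you could try (only a sketch, I have not verified that it fixes this particular case) is to declare the inbound port explicitly with a Sidecar resource for your workload:
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: resource-manager
  namespace: voting
spec:
  workloadSelector:
    labels:
      app: resource-manager
  ingress:
  - port:
      number: 8088
      protocol: HTTP
      name: rm-web
    defaultEndpoint: 127.0.0.1:8088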
I had no idea about issues that Istio has with StatefulSets. Thanks for sharing it.
I made the change in kind to Deployment.
anusri@ANUSRI-Z840:~/istio-1.4.0$ istioctl proxy-config listeners resource-manager-78d7b8587d-sqcs4 -n voting --type HTTP -o json | jq ".[].address"
{
  "socketAddress": {
    "address": "0.0.0.0",
    "portValue": 15090
  }
}
PS D:\Git\hdiaks-core-services\hadoop> kubectl get services -n voting
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
details ClusterIP 10.0.96.100 <none> 9080/TCP 9d
hello ClusterIP 10.0.152.165 <none> 8080/TCP 37m
livy ClusterIP None <none> 8998/TCP 6d23h
nodemanager ClusterIP 10.0.23.105 <none> 80/TCP 71s
productpage ClusterIP 10.0.217.47 <none> 9080/TCP 9d
ratings ClusterIP 10.0.38.103 <none> 9080/TCP 9d
resource-manager ClusterIP 10.0.74.22 <none> 8030/TCP,8032/TCP,8031/TCP,8088/TCP 5m34s
reviews ClusterIP 10.0.158.126 <none> 9080/TCP 9d
voting-analytics ClusterIP 10.0.198.254 <none> 8080/TCP 9d
voting-app ClusterIP 10.0.123.181 <none> 8080/TCP 9d
voting-storage ClusterIP 10.0.86.97 <none> 6379/TCP 9d
This is my changed yaml:
apiVersion: v1
kind: Service
metadata:
  name: resource-manager
  # annotations:
  #   service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  labels:
    app: resource-manager
spec:
  # type: ClusterIP
  ports:
  - name: rm-scheduler
    port: 8030
    protocol: TCP
    targetPort: 8030
  - name: rm
    port: 8032
    protocol: TCP
    targetPort: 8032
  - name: rm-resoure-tracker
    port: 8031
    protocol: TCP
    targetPort: 8031
  - name: rm-web
    port: 8088
    protocol: TCP
    targetPort: 8088
  # clusterIP: None
  selector:
    app: resource-manager
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-manager
  labels:
    app: resource-manager
spec:
  replicas: 1
  # serviceName: "resource-manager"
  selector:
    matchLabels:
      app: resource-manager
  template:
    metadata:
      labels:
        app: resource-manager
    spec:
      # dnsPolicy: ClusterFirst
      containers:
      - name: resource-manager
        image: anushreeacr.azurecr.io/anushreehdi:v1
        imagePullPolicy: Always
        # livenessProbe:
        #   httpGet:
        #     path: /ws/v1/cluster
        #     port: 8088
        #   initialDelaySeconds: 15
        #   timeoutSeconds: 3    # Default is 1
        #   periodSeconds: 5     # Default is 10
        #   failureThreshold: 5  # Default is 3
        # readinessProbe:
        #   httpGet:
        #     path: /ws/v1/cluster
        #     port: 8080
        #   initialDelaySeconds: 3
        #   periodSeconds: 5     # Default is 10
        #   failureThreshold: 5  # Default is 3
        ports:
        - containerPort: 8088
          name: web
        command:
        - "/bin/bash"
        args:
        - "-c"
        - " /opt/hadoop/etc/hadoop/start-resourcemanager.sh && tail -f /opt/hadoop/logs/*"
        resources:
          resources:
            requests:
              memory: "512Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "500m"
        volumeMounts:
        - name: docker
          mountPath: /tmp
      volumes:
      - name: docker
        hostPath:
          path: /tmp
Hugo
March 26, 2020, 8:50pm
16
I think there is a mistake in the YAML file:
resources:
  resources:
    requests:
      memory: "512Mi"
      cpu: "500m"
    limits:
      memory: "512Mi"
      cpu: "500m"
Could you fix it and re-apply the YAML?
Thanks,
Hugo
There was an extra resources: key; I removed it. I also tried to make the YAML as similar to the hello-world one as possible, but my port still doesn't show up in the listeners. What else can I check?
anusri@ANUSRI-Z840:~/istio-1.4.0$ istioctl proxy-config listeners resource-manager-c745c44c6-mxlpn -n voting --type HTTP -o json | jq ".[].address"
{
  "socketAddress": {
    "address": "0.0.0.0",
    "portValue": 15090
  }
}
This is my YAML now:
apiVersion: v1
kind: Service
metadata:
  name: resource-manager
  labels:
    app: resource-manager
spec:
  ports:
  - port: 8030
    name: rm-scheduler
  - port: 8032
    name: rm
  - port: 8031
    name: rm-resoure-tracker
  - port: 8088
    name: rm-web
  selector:
    app: resource-manager
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: resource-manager
  labels:
    app: resource-manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: resource-manager
  template:
    metadata:
      labels:
        app: resource-manager
    spec:
      containers:
      - image: anushreeacr.azurecr.io/anushreehdi:v1
        name: resource-manager
        ports:
        - containerPort: 8088
        command:
        - "/bin/bash"
        args:
        - "-c"
        - " /opt/hadoop/etc/hadoop/start-resourcemanager.sh && tail -f /opt/hadoop/logs/*"
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        volumeMounts:
        - name: docker
          mountPath: /tmp
      volumes:
      - name: docker
        hostPath:
          path: /tmp
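A few other things I was planning to check next, in case they narrow it down (commands sketched from the Istio docs, so please correct me if they are not the right ones):
istioctl proxy-status
istioctl proxy-config clusters resource-manager-c745c44c6-mxlpn -n voting --fqdn resource-manager.voting.svc.cluster.local
kubectl logs -n istio-system deploy/istio-pilot -c discovery | grep resource-manager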
Hugo
March 26, 2020, 10:37pm
18
I would like to run some tests on my own. Which image can I use instead of anushreehdi:v1?
Hey Hugo, I pushed my repository to GitHub. Can you share your GitHub username? I can give you access so you can build the image.
Hello @anushreeringne and @Hugo ,
did you figure this out? I am facing a similar issue.
I have a service running with ports 8080 and 8081 in a namespace where sidecar injection is enabled. I am able to reach the service from pods running in the same namespace. However, I am getting errors when trying to reach the service from pods in a namespace that does not have sidecar injection enabled.
curl: (56) Recv failure: Connection reset by peer
Hi Anshul,
It is definitely the Istio sidecar blocking the traffic. I realized that there are extra configurations needed to manage a StatefulSet with Istio, as Hugo pointed out. I tried some solutions but couldn't get it working. However, we decided to hold off on Istio for now, so I didn't investigate further.
If you do figure out a solution, please share it here.