Hi, thank you for your answer,
- Sidecar injection is OK: I have 2/2 containers on each pod and I can read the Envoy logs.
- Services and databases are in the same namespace.
- I don’t have any DestinationRules (see the commands right after this list).
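For reference, here is roughly how I checked this (the my-namespace and my-app-0 names are the ones used in the manifests and pod listing further down):

kubectl -n my-namespace get pods                       # every pod shows 2/2 (app + istio-proxy)
kubectl -n my-namespace logs my-app-0 -c istio-proxy   # Envoy access logs are readable
kubectl -n my-namespace get destinationrules           # returns nothing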
In the meantime, I investigated further and discovered that the problem does not seem to come from mTLS.
So I posted a second question, but the anti-spam filter blocked it:
« Our automated spam filter, Akismet, has temporarily hidden your post in “Standard service to Stateful service communication error 503” for review.
A staff member will review your post soon, and it should appear shortly.
We apologize for the inconvenience. »
In that post I explain the following:
With mTLS disabled (see the PeerAuthentication sketch right after this list):
- A “Deployment service” can communicate with another “Deployment service”.
- A “Deployment service” can communicate with a “StatefulSet service” over TCP, for example with MongoDB.
- But a “Deployment service” can’t communicate with a “StatefulSet service” over HTTP.
- I can expose the “StatefulSet service” (HTTP) through the ingress without any problem.
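By “mTLS disabled” I mean a namespace-wide PeerAuthentication roughly like this sketch (the resource name here is just an example):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  mtls:
    mode: DISABLE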
So I may have a problem with mTLS (or not), but I must solve this new problem (which may be the same one) first.
My configuration: Istio 1.13.2. I configured an HTTP-to-HTTPS redirect, HTTPS, JWT validation (JWK), and routing of some paths to several services; it all works fine (thanks to the Istio documentation).
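For reference, the ingress side looks roughly like this sketch (the host and credential names are placeholders, not my real values, and the JWT/RequestAuthentication and path-routing parts are omitted):

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: my-namespace
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "example.com"
    tls:
      httpsRedirect: true # The “HTTP redirector”: plain HTTP is redirected to HTTPS.
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "example.com"
    tls:
      mode: SIMPLE
      credentialName: my-tls-secret # Placeholder TLS secret name.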
My problem:
I created an HTTP server as a StatefulSet service; its pod is listed as follows:
my-app-0 2/2 Running 2 (48m ago) 54m
If I add an entry to the VirtualService (ingress), I can access it from outside on port 80.
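The entry I mean is roughly like this sketch (the host, gateway, and path are placeholders):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-routes
  namespace: my-namespace
spec:
  hosts:
  - "example.com"
  gateways:
  - my-gateway
  http:
  - match:
    - uri:
        prefix: /my-app
    route:
    - destination:
        host: my-app.my-namespace.svc.cluster.local
        port:
          number: 80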
If I do port forwarding (2002→8080), I can also access it.
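That is, something like this (forwarding straight to the pod):

kubectl -n my-namespace port-forward pod/my-app-0 2002:8080
curl -v "http://localhost:2002/version"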
Everything seems to work fine, but when I try to access it from a Deployment service (not a StatefulSet) using curl like this:
kubectl -n my-namespace exec my-service-fdd8cb667-mgd5h -c app -- curl -v "my-app-0.my-app.my-namespace.svc.cluster.local:80/version"
The client receives this error: upstream connect error or disconnect/reset before headers. Reset
And the proxy gives this server-side error:
[2022-05-01T14:32:18.703Z] "GET /version HTTP/1.1" 503 UF upstream_reset_before_response_started{connection_failure,delayed_connect_error:111} - "-" 0 145 0 - "-" "curl/7.64.0" "b7f96317-d31d-42c7-8090-3dda0e773d89" "my-app-0.my-app.my-namespace.svc.cluster.local" "172.17.0.14:80" InboundPassthroughClusterIpv4 - 172.17.0.14:80 172.17.0.15:51438 outbound.80_._.my-app.my-namespace.svc.cluster.local default
So the connection is established between the Deployment and the StatefulSet, but something breaks at the proxy level?!
Note: I didn’t change the Envoy configuration or add any rules; I only did what I described under “My configuration” earlier in this post.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: my-namespace
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  clusterIP: None # Headless.
  ports:
  - name: http-web
    port: 80
    targetPort: 8080
  selector:
    app: my-app
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  serviceName: my-app
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
        version: 0.0.1-beta2
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "false"
    spec:
      serviceAccountName: my-app
      containers:
      - env:
        - name: KUBERNETES_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: my-app:latest
        imagePullPolicy: Never
        name: my-app
        ports:
        - containerPort: 8080
        securityContext:
          privileged: false
        # readinessProbe:
        #   httpGet:
        #     path: /ready
        #     port: 8080
        #   periodSeconds: 5
        #   failureThreshold: 10
        # livenessProbe:
        #   httpGet:
        #     path: /live
        #     port: 8080
        #   initialDelaySeconds: 10
        #   periodSeconds: 30
        #   failureThreshold: 2
---
Thanks for your help,
Arnaud