ServiceEntry for HTTPS resulting in CONNECT_CR_SRVR_HELLO error using curl


I’ve installed Istio 1.1.3 and have two service entries. The one below doesn’t seem to work.

kube-shell> kubectl apply -f - <<EOF
            apiVersion: networking.istio.io/v1alpha3
            kind: ServiceEntry
            metadata:
              name: sslhttpbin-ext
              namespace: istio-system
            spec:
              hosts:
              - httpbin.org
              ports:
              - number: 443
                name: https
                protocol: HTTPS
              resolution: DNS
              location: MESH_EXTERNAL
            EOF
serviceentry.networking.istio.io/sslhttpbin-ext unchanged

I’m starting a bash session via
kubectl -n mobility run my-shell --rm -i --tty --image ellerbrock/alpine-bash-curl-ssl -- bash

I’m getting an Istio sidecar for my-shell:

kube-shell> k -n mobility get pods | grep my-shell
my-shell-678c4759c7-lpc7f        2/2       Running   0          8s
k -n mobility logs -f my-shell-678c4759c7-lpc7f -c istio-proxy
2019-04-24T19:27:15.604678Z	info	Envoy proxy is ready

When I run curl I get

curl -v https://httpbin.org
* Connected to httpbin.org port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
* stopped the pause stream!
* Closing connection 0
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number
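For reference, OpenSSL raises "wrong version number" when the bytes that arrive in place of the TLS ServerHello are not TLS at all, i.e. something on the path answered in plaintext. A minimal sketch (plain Python, nothing Istio-specific, not from the original session) reproduces the same failure by pointing a TLS client at a plaintext listener:

```python
import socket
import ssl
import threading

caught = None

# Plaintext server: reads whatever arrives (here, a TLS ClientHello)
# and answers with plain HTTP, the way a listener that believes
# port 443 carries HTTP would.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    conn.recv(4096)  # consumes the ClientHello as opaque bytes
    conn.sendall(b"HTTP/1.1 400 Bad Request\r\n\r\n")
    conn.close()

threading.Thread(target=serve, daemon=True).start()

# TLS client: sends a ClientHello and expects a TLS ServerHello back,
# but receives plaintext "HTTP/1.1 ..." instead.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
try:
    with socket.create_connection(("127.0.0.1", port)) as raw:
        ctx.wrap_socket(raw)
except ssl.SSLError as err:
    caught = err
    print(err)  # e.g. [SSL: WRONG_VERSION_NUMBER] wrong version number
```

So the error says nothing about certificates or TLS versions per se; it says the sidecar handed the ClientHello to something that spoke plaintext back.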

Now, I’ve also set up a service entry for plain HTTP, and that one works.

kube-shell> k -n istio-system get serviceentry
NAME             HOSTS           LOCATION        RESOLUTION   AGE
httpbin-ext      [httpbin.org]   MESH_EXTERNAL   DNS          3h
sslhttpbin-ext   [httpbin.org]   MESH_EXTERNAL   DNS          3h
bash-4.4$ curl http://httpbin.org/headers
{
  "headers": {
    "Accept": "*/*", 
    "Host": "httpbin.org", 
    "User-Agent": "curl/7.61.0", 
    "X-B3-Sampled": "0", 
    "X-B3-Spanid": "eaf977939871f88f", 
    "X-B3-Traceid": "e31b13a119326b0aeaf977939871f88f", 
    "X-Envoy-Decorator-Operation": "*", 
    "X-Envoy-Expected-Rq-Timeout-Ms": "3000", 
    "X-Istio-Attributes": "CikKGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBINEgtodHRwYmluLm9yZwopChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDRILaHR0cGJpbi5vcmcKLwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USDhIMaXN0aW8tc3lzdGVtCj8KCnNvdXJjZS51aWQSMRIva3ViZXJuZXRlczovL215LXNoZWxsLTY3OGM0NzU5YzctbHBjN2YubW9iaWxpdHk="
  }
}

from within the same container that I used for the HTTPS attempt.
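As an aside, that X-Istio-Attributes header is just base64-encoded Mixer attributes, so it can be decoded to double-check what Envoy thought the destination was. The readable strings inside name httpbin.org (a quick sketch in Python; not part of the original debugging session):

```python
import base64

# The X-Istio-Attributes value from the /headers response above.
blob = "CikKGGRlc3RpbmF0aW9uLnNlcnZpY2UuaG9zdBINEgtodHRwYmluLm9yZwopChhkZXN0aW5hdGlvbi5zZXJ2aWNlLm5hbWUSDRILaHR0cGJpbi5vcmcKLwodZGVzdGluYXRpb24uc2VydmljZS5uYW1lc3BhY2USDhIMaXN0aW8tc3lzdGVtCj8KCnNvdXJjZS51aWQSMRIva3ViZXJuZXRlczovL215LXNoZWxsLTY3OGM0NzU5YzctbHBjN2YubW9iaWxpdHk="

decoded = base64.b64decode(blob)

# The payload is a protobuf Attributes message; its string fields are
# readable without the schema.
print(b"destination.service.host" in decoded)
print(b"httpbin.org" in decoded)
print(b"kubernetes://my-shell" in decoded)
```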

Why is my HTTPS ServiceEntry not working? I’m trying to follow this guide.


Still having issues, I’ll just add my diagnosis as a running monologue.

kube-shell> istioctl proxy-status | grep my-shell
my-shell-678c4759c7-v754w.mobility          SYNCED     SYNCED     SYNCED (100%)     SYNCED     istio-pilot-7d6c946b9d-w9jvv     1.1.3
kube-shell> k -n istio-system get serviceentry 
NAME              HOSTS               LOCATION        RESOLUTION   AGE
edition-cnn-com   [edition.cnn.com]                   DNS          15h
google            [www.google.com]    MESH_EXTERNAL   DNS          16h
kube-shell> k -n istio-system get serviceentry google -o yaml
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      ...
  creationTimestamp: 2019-04-24T22:54:50Z
  generation: 1
  name: google
  namespace: istio-system
  resourceVersion: "23104520"
  selfLink: /apis/...
  uid: f764f6a8-66e3-11e9-9f6c-0619cd612f5e
spec:
  hosts:
  - www.google.com
  location: MESH_EXTERNAL
  ports:
  - name: https
    number: 443
    protocol: HTTPS
  resolution: DNS

and still can’t reach it:

bash-4.4$ curl https://www.google.com
curl: (35) error:1400410B:SSL routines:CONNECT_CR_SRVR_HELLO:wrong version number

here is what my install looks like

kube-shell> k -n istio-system get pods
NAME                                      READY     STATUS      RESTARTS   AGE
grafana-c49f9df64-nqbzl                   1/1       Running     0          5d19h
istio-citadel-7f699dc8c8-7rgxl            1/1       Running     0          5d19h
istio-cleanup-secrets-1.1.3-qd8vc         0/1       Completed   0          5d19h
istio-galley-687664875b-mgmjv             1/1       Running     0          5d19h
istio-grafana-post-install-1.1.3-wlrgr    0/1       Completed   0          5d19h
istio-init-crd-10-fjfjr                   0/1       Completed   0          5d19h
istio-init-crd-11-pw5rm                   0/1       Completed   0          5d19h
istio-pilot-7d6c946b9d-w9jvv              1/1       Running     0          15h
istio-policy-67889fcb77-dvbkk             2/2       Running     1          5d19h
istio-security-post-install-1.1.3-n7wr4   0/1       Completed   0          5d19h
istio-sidecar-injector-d48786c5c-wwzkx    1/1       Running     0          5d19h
istio-telemetry-64b644d5b7-r2tlt          2/2       Running     2          5d19h
istio-tracing-79db5954f-m78kp             1/1       Running     0          5d19h
prometheus-67599bf55b-xwxml               1/1       Running     0          5d19h

I’m also noticing that when I look at the istio-proxy logs, I don’t see any request logging going on. That seems odd; it doesn’t match the troubleshooting examples I’m seeing around.

k -n mobility logs  my-shell-678c4759c7-v754w -c istio-proxy
2019-04-25T14:59:02.618320Z	info	Envoy proxy is ready
[2019-04-25 15:15:25.631][15][warning][misc] [external/envoy/source/common/protobuf/] Using deprecated option 'envoy.api.v2.Listener.use_original_dst' from file lds.proto. This configuration will be removed from Envoy soon. Please see for details.

Nothing about requests, no matter how many times I issue curl.
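The silent proxy may be unrelated to the routing problem: starting with Istio 1.1, Envoy’s access log is disabled by default (`global.proxy.accessLogFile` defaults to empty), so the sidecar only prints lifecycle messages. Assuming a Helm-based install, re-enabling it is a values override along these lines (a sketch, not verified against this cluster):

```yaml
# Helm values override for the Istio 1.1 chart: write Envoy access logs
# to the container's stdout so `kubectl logs ... -c istio-proxy` shows
# per-request lines again.
global:
  proxy:
    accessLogFile: "/dev/stdout"
```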

Cluster config shows google present.

kube-shell> istioctl -n mobility proxy-config cluster my-shell-678c4759c7-v754w.mobility | grep google
www.google.com                              443       -              outbound      &{STRICT_DNS}

Now that I’m looking at the routes I am seeing something odd.

kube-shell> istioctl -n mobility proxy-config routes my-shell-678c4759c7-v754w.mobility --name 443 -o json | head
[
    {
        "name": "443",
        "virtualHosts": [
            {
                "name": "i-grpctestingui-grpc.mobility.svc.cluster.local:443",

That virtual host actually shouldn’t exist in my cluster (a colleague added it), but it’s strange that it somehow has priority, as it’s the only virtual host on port 443. So that seems really mysterious. But since I don’t actually want it in my cluster anyway, it will be interesting to see how deleting it affects the routes…

after deleting that service

kube-shell> istioctl -n mobility proxy-config routes my-shell-678c4759c7-v754w.mobility --name 443 -o json 

So now the route is gone… sure that makes sense. Never knew why it was there to begin with… so magic.
And now let me try my curl

bash-4.4$ curl -I https://www.google.com
HTTP/2 200 
date: Thu, 25 Apr 2019 15:53:36 GMT
expires: -1
cache-control: private, max-age=0
content-type: text/html; charset=ISO-8859-1
p3p: CP="This is not a P3P policy! See for more info."
server: gws
x-xss-protection: 0
x-frame-options: SAMEORIGIN
set-cookie: 1P_JAR=2019-04-25-15; expires=Sat, 25-May-2019 15:53:36 GMT; path=/;
set-cookie: NID=182=TqXfQWr5RoeG5ZywtcNk6apwfVyuzkFPYCR6jAjDD5IZ3xxv8vj39FJp-FPANy1WJn5SQ41TcYjAs3osDfxTv8h4jzKEdJ5S3kOH1YCo_TPxdDQqaDQznoEWvAoRqw_lQIm1h3AxrPQ1mJ6f_Rw4R1sjT01KoQuWGbRTxXVax3I; expires=Fri, 25-Oct-2019 15:53:36 GMT; path=/;; HttpOnly
alt-svc: quic=":443"; ma=2592000; v="46,44,43,39"
accept-ranges: none
vary: Accept-Encoding


Well!!! Look at that. Now the service entry works??!!!


So the summary is that a completely unrelated Kubernetes Service definition (in my case of type LoadBalancer, listening on port 443) was somehow becoming the only virtual host for every other pod trying to reach external services on port 443. Pretty magical behavior. Possibly a bug?

Adding an example of the service I had to delete; some of the details are redacted.

apiVersion: v1
kind: Service
metadata:
  name: i-grpctestingui-grpc
  namespace: mobility
  annotations:
    external-dns.alpha.kubernetes.io/hostname: grpctestingui.corp.REDACTED
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:REDACTED:certificate/REDACTED
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    run: grpctestingui
  ports:
  - port: 443
    protocol: TCP
    name: http-grpc-grpctestingui
    targetPort: http-rest
  externalTrafficPolicy: Cluster
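One plausible explanation, rather than a confirmed diagnosis: in Istio 1.1 the protocol of a Service port is inferred from the port’s name, and a name starting with `http` declares plain HTTP. That would make this Service claim port 443 as HTTP mesh-wide, which fits the plaintext answer curl was choking on. If deleting the Service isn’t an option, a protocol-correct port name should keep it out of the HTTP route table for 443 (hypothetical rename, untested here):

```yaml
# Hypothetical fix: name the port so Istio treats it as opaque TCP
# instead of inferring HTTP from the "http-" prefix.
ports:
- port: 443
  protocol: TCP
  name: tcp-grpctestingui        # was: http-grpc-grpctestingui
  targetPort: http-rest
```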