[unsolved][Istio 1.1.1] How to set up k8s Ingress resources properly

Hi,

Our goal is to use Istio as an ingress for our HTTP(S) traffic, with certificates from Let's Encrypt via cert-manager.
We want to use k8s Ingress resources so that the Istio gateway and routing rules are set up automatically.
We are using Helm to set up our resources, and most Helm charts only support Ingress resources out of the box.

You can find our setup details at the bottom.

We tried to find documentation about Istio and k8s Ingress resources, but most of what exists are GitHub issues. We set global.k8sIngress.enabled in the Helm values.

To test Istio we use the bookinfo sample in the default namespace, where we enable sidecar injection:

$ kubectl label namespace default istio-injection=enabled
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo-ingress.yaml

$ kc get ingresses.extensions gateway
NAME      HOSTS   ADDRESS   PORTS   AGE
gateway   *                 80      1h

But nothing happened: the Ingress resource did NOT get an address. We checked whether the istio-ingressgateway was OK:

$ kubectl -n istio-system get pod istio-ingressgateway-d57f5f484-pcvbk
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-d57f5f484-pcvbk   1/1     Running   0          17h
$ kubectl -n istio-system get service istio-ingressgateway
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP                                                                           PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   100.64.0.151   internal-HASH-NUMBERS.eu-central-1.elb.amazonaws.com   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31821/TCP,15030:30521/TCP,15031:32503/TCP,15032:30272/TCP,15443:30582/TCP,15020:31460/TCP   17h

All good so far, so we debugged the istio-pilot discovery pod and found no trace of it watching for k8s Ingress resources. We dug into the code and realized that the mesh config generated by the Helm template is missing some settings (bug?).
I manually edited the ConfigMap to add two parameters to the mesh config and restarted the pilot discovery container.

$ kubectl -n istio-system edit cm istio
ingressControllerMode: DEFAULT
ingressClass: istio
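For reference, this is roughly how the two keys end up in the mesh section of the istio ConfigMap (a sketch, with the rest of the mesh config abbreviated):

apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    # ... existing mesh config keys ...
    # let Pilot act as the controller for Ingress resources with class "istio"
    ingressControllerMode: DEFAULT
    ingressClass: istio

To pick up the change we restarted the Pilot discovery container; deleting the pod works as well, e.g. kubectl -n istio-system delete pod -l istio=pilot (assuming the standard Pilot pod label).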

And the magic convertIngress happened.

$ kc get ingresses.extensions gateway
NAME      HOSTS   ADDRESS                                                                               PORTS   AGE
gateway   *       internal-HASH-NUMBERS.eu-central-1.elb.amazonaws.com   80      32m

Questions:

  1. Are the missing parameters in the mesh config from the Helm chart template a bug, or is this intended?
  2. How to get HTTPS working with ingress resources?
    1. If we enable global.k8sIngress.enableHttps in the Helm chart, Pilot fails because the secret the Gateway references for mounting the key files doesn't exist. That is expected from our side, since we want to use cert-manager and k8s secrets, not mounted files (see the sketch after this list for what the file-mount approach would expect). https://github.com/istio/istio/blob/master/install/kubernetes/helm/istio/charts/gateways/templates/preconfigured.yaml#L22
        - port:
            number: 443
            protocol: HTTPS
            name: https-default
          tls:
            mode: SIMPLE
            serverCertificate: /etc/istio/ingress-certs/tls.crt
            privateKey: /etc/istio/ingress-certs/tls.key
          hosts:
          - "*"
      
    2. How to set up istio-autogenerated-k8s-ingress to use cert-manager secrets from Ingress resources?
    3. Does Istio require the cert-manager certificates to be created in the istio-system namespace?
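For completeness on 2.1: as far as we can tell from the chart, the preconfigured Gateway expects the certificate and key to come from a secret named istio-ingressgateway-certs mounted at /etc/istio/ingress-certs/ on the ingress gateway. It would be created roughly like this (placeholder file names; not the path we want to take):

$ kubectl -n istio-system create secret tls istio-ingressgateway-certs \
    --key=/path/to/privkey.pem --cert=/path/to/fullchain.pem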

We can successfully create certificates via cert-manager, defined by annotations on the Ingress resources.
This is what an Ingress resource with TLS looks like (the PORTS column lists 80 + 443):

$ kubectl -n monitoring get ingresses.extensions kube-prometheus-prometheus-alertmanager
NAME                                      HOSTS                                                       ADDRESS                                                                               PORTS     AGE
kube-prometheus-prometheus-alertmanager   alertmanager.sub.domain.com   internal-HASH-NUMBERS.eu-central-1.elb.amazonaws.com   80, 443   23h

$ kubectl -n monitoring get ingresses.extensions kube-prometheus-prometheus-alertmanager -oyaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    certmanager.k8s.io/acme-challenge-type: dns01
    certmanager.k8s.io/acme-dns01-provider: route53
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
    kubernetes.io/ingress.class: istio
  creationTimestamp: "2019-03-28T10:39:32Z"
  generation: 2
  labels:
    app: prometheus-operator-alertmanager
    chart: prometheus-operator-5.0.3
    heritage: Tiller
    release: kube-prometheus
  name: kube-prometheus-prometheus-alertmanager
  namespace: monitoring
  resourceVersion: "2042745"
  selfLink: /apis/extensions/v1beta1/namespaces/monitoring/ingresses/kube-prometheus-prometheus-alertmanager
  uid: c5dda5e2-5145-11e9-b1da-066ff4a1fb28
spec:
  rules:
  - host: alertmanager.sub.domain.example.com
    http:
      paths:
      - backend:
          serviceName: kube-prometheus-prometheus-alertmanager
          servicePort: 9093
  tls:
  - hosts:
    - alertmanager.sub.domain.example.com
    secretName: alertmanager-tls
status:
  loadBalancer:
    ingress:
    - hostname: internal-HASH-NUMBERS.eu-central-1.elb.amazonaws.com

And the certificate:

$ kubectl -n monitoring get certificates alertmanager-tls -o yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  creationTimestamp: "2019-03-28T13:01:36Z"
  generation: 1
  name: alertmanager-tls
  namespace: monitoring
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: Ingress
    name: kube-prometheus-prometheus-alertmanager
    uid: c5dda5e2-5145-11e9-b1da-066ff4a1fb28
  resourceVersion: "2003681"
  selfLink: /apis/certmanager.k8s.io/v1alpha1/namespaces/monitoring/certificates/alertmanager-tls
  uid: 9ec0df5b-5159-11e9-b1da-066ff4a1fb28
spec:
  acme:
    config:
    - dns01:
        provider: route53
      domains:
      - alertmanager.sub.domain.example.com
  dnsNames:
  - alertmanager.sub.domain.example.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-staging
  secretName: alertmanager-tls
status:
  conditions:
  - lastTransitionTime: "2019-03-28T13:03:18Z"
    message: Certificate is up to date and has not expired
    reason: Ready
    status: "True"
    type: Ready
  notAfter: "2019-06-26T12:03:16Z"
$ kubectl -n monitoring get secrets alertmanager-tls
NAME               TYPE                DATA   AGE
alertmanager-tls   kubernetes.io/tls   3      20h

HTTP requests work, but HTTPS won't work until the Gateway is also listening for HTTPS:

$ kubectl describe gateways.networking.istio.io istio-autogenerated-k8s-ingress
Name:         istio-autogenerated-k8s-ingress
Namespace:    istio-system
Labels:       app=gateways
              chart=gateways
              heritage=Tiller
              release=istio
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         Gateway
Metadata:
  Creation Timestamp:  2019-03-28T14:20:16Z
  Generation:          1
  Resource Version:    2019287
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/istio-system/gateways/istio-autogenerated-k8s-ingress
  UID:                 9bbb8910-5164-11e9-b1da-066ff4a1fb28
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http
      Number:    80
      Protocol:  HTTP2
Events:          <none>
Setup
Kubernetes Cluster in AWS setup via KOPS 1.11.1 on EC2
K8S Version = Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", 
GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", 
BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", 
Platform:"linux/amd64"}
Helm Version = v2.13.1
NAME             REVISION  UPDATED                   STATUS    CHART                      APP VERSION  NAMESPACE
cert-manager     9         Thu Mar 28 14:00:36 2019  DEPLOYED  cert-manager-v0.7.0        v0.7.0       cert-manager
external-dns     11        Tue Mar 26 16:28:24 2019  DEPLOYED  external-dns-1.7.0         0.5.9        system-addons
istio            3         Thu Mar 28 15:46:58 2019  DEPLOYED  istio-1.1.0                1.1.0        istio-system
istio-cni        22        Thu Mar 28 15:46:33 2019  DEPLOYED  istio-cni-0.1.0            0.1.0        istio-system
istio-init       31        Thu Mar 28 15:46:34 2019  DEPLOYED  istio-init-1.1.1           1.1.1        istio-system
kube-prometheus  33        Thu Mar 28 13:50:37 2019  DEPLOYED  prometheus-operator-5.0.3  0.29.0       monitoring
kube2iam         10        Tue Mar 26 11:06:41 2019  DEPLOYED  kube2iam-0.10.0            0.10.4       system-addons
metrics-server   6         Tue Mar 26 16:30:11 2019  DEPLOYED  metrics-server-2.5.0       0.3.1        kube-system

Helm repos (Jenkinsfile snippet)

                sh "helm init --client-only"
                sh "helm repo add istio.io https://storage.googleapis.com/istio-release/releases/1.1.1/charts"
                sh "helm repo update"
                sh "helm upgrade --install istio-cni istio.io/istio-cni --namespace ${NAMESPACE} -f provision/k8s/charts/istio/istio-cni-values.yaml"
                sh "helm upgrade --install istio-init istio.io/istio-init --namespace ${NAMESPACE} -f provision/k8s/charts/istio/values.yaml"
                sh "sleep 20"
                sh "helm upgrade --install istio istio.io/istio --namespace ${NAMESPACE} -f provision/k8s/charts/istio/values.yaml"

Helm values for istio + istio-init

---
#
# addon grafana configuration
#
grafana:
  enabled: false

#
# addon prometheus configuration
#
prometheus:
  enabled: false

# addon Istio CoreDNS configuration
#
istiocoredns:
  enabled: false

#
# addon jaeger tracing configuration
#
tracing:
  enabled: false

#
# install citadel
#
security:
  enabled: true

#
# Istio CNI plugin enabled
#   This must be enabled to use the CNI plugin in Istio.  The CNI plugin is installed separately.
#   If true, the privileged initContainer istio-init is not needed to perform the traffic redirect
#   settings for the istio-proxy.
#
istio_cni:
  enabled: true

#
# Gateways Configuration, refer to the charts/gateways/values.yaml
# for detailed configuration
#
gateways:
  enabled: true
  istio-ingressgateway:
    enabled: true
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"

  istio-ilbgateway:
    enabled: false
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"

# Common settings used among istio subcharts.
global:
  hub: gcr.io/istio-release
  tag: release-1.1-latest-daily
  # Use the Mesh Control Protocol (MCP) for configuring Mixer and
  # Pilot. Requires galley (`--set galley.enabled=true`).
  useMCP: true

  k8sIngress:
    enabled: true
    # Gateway used for k8s Ingress resources. By default it is
    # using 'istio:ingressgateway' that will be installed by setting
    # 'gateways.enabled' and 'gateways.istio-ingressgateway.enabled'
    # flags to true.
    gatewayName: ingressgateway
    # enableHttps will add port 443 on the ingress.
    # It REQUIRES that the certificates are installed  in the
    # expected secrets - enabling this option without certificates
    # will result in LDS rejection and the ingress will not work.
    enableHttps: false

  sds:
    # SDS enabled. IF set to true, mTLS certificates for the sidecars will be
    # distributed through the SecretDiscoveryService instead of using K8S secrets to mount the certificates.
    enabled: true
    udsPath: "unix:/var/run/sds/uds_path"
    useTrustworthyJwt: false
    useNormalJwt: false

#
# nodeagent configuration
#
nodeagent:
  enabled: true
  image: node-agent-k8s
  env:
    CA_PROVIDER: "Citadel"
    CA_ADDR: "istio-citadel:8060"
    VALID_TOKEN: true


We made some progress on creating the HTTPS Gateway.

In this doc we found a note on how to use SDS for certificates.
With the Helm value global.k8sIngress.enableHttps: true we can create the HTTPS listener port in the Istio gateway istio-autogenerated-k8s-ingress and then use the following patch command to adjust it for SDS.

kubectl -n istio-system \
  patch gateway istio-autogenerated-k8s-ingress --type=json \
  -p='[{"op": "replace", "path": "/spec/servers/1/tls", "value": {"credentialName": "ingress-cert-staging", "mode": "SIMPLE", "privateKey": "sds", "serverCertificate": "sds"}}]'
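After the patch, the HTTPS server entry in the Gateway spec should look roughly like this (a sketch reconstructed from the patch above; the port and hosts blocks are the autogenerated ones):

  - hosts:
    - "*"
    port:
      name: https-default
      number: 443
      protocol: HTTPS
    tls:
      credentialName: ingress-cert-staging
      mode: SIMPLE
      privateKey: sds
      serverCertificate: sds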

But this still has some downsides.

  1. The Gateway serves only ONE certificate, the one in the k8s secret named ingress-cert-staging
  2. The secret needs to be in the istio-system namespace to be found (see the Certificate sketch after this list)
  3. The Helm template doesn't support these Gateway settings (yet) and will overwrite them on the next run
  4. It is still not possible to drive the TLS settings of the Istio ingress gateway from Ingress resources
    1. Can the Istio ingress fully use k8s Ingress resources (HTTP + certificates + routing)?
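Regarding downside 2, one workaround would be to create the cert-manager Certificate directly in istio-system, so the resulting secret lands where the gateway can read it. A sketch based on our existing staging issuer (names are examples):

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: ingress-cert-staging
  namespace: istio-system
spec:
  acme:
    config:
    - dns01:
        provider: route53
      domains:
      - alertmanager.sub.domain.example.com
  dnsNames:
  - alertmanager.sub.domain.example.com
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-staging
  secretName: ingress-cert-staging   # must match the credentialName in the Gateway patch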