Custom Gateway Deployment is not configured correctly by Pilot

I am attempting to set up a multi-tenant cluster where each tenant receives a separate istio-ingressgateway deployment, and I am running into configuration issues. Although the labels of the Gateway custom resource and my deployment match, no listeners or routes are added to my ingress gateway's workloads:

istioctl proxy-status
NAME                                                           CDS        LDS        EDS              RDS          PILOT                            VERSION
istio-ingressgateway-5c7965cf8c-8gh8s.istio-system             SYNCED     SYNCED     SYNCED (98%)     SYNCED       istio-pilot-559845496d-msmwr     1.1.3
istio-ingressgateway-5c7965cf8c-dsds9.istio-system             SYNCED     SYNCED     SYNCED (98%)     SYNCED       istio-pilot-559845496d-856tg     1.1.3
istio-ingressgateway-team-foo-fd49cdd44-rnvlx.istio-system     SYNCED     SYNCED     SYNCED (98%)     NOT SENT     istio-pilot-559845496d-856tg     1.1.3
istio-ingressgateway-team-foo-fd49cdd44-vvzfm.istio-system     SYNCED     SYNCED     SYNCED (98%)     NOT SENT     istio-pilot-559845496d-msmwr     1.1.3
sleep-svc-9f96bbf8-xh2z2.foo-ns                                SYNCED     SYNCED     SYNCED (50%)     SYNCED       istio-pilot-559845496d-856tg     1.1.3
test-v1-foo-ns-777964cbdf-hwwd2.foo-ns                         SYNCED     SYNCED     SYNCED (50%)     SYNCED       istio-pilot-559845496d-msmwr     1.1.3
test-v2-foo-ns-7b76654668-xz4vv.foo-ns                         SYNCED     SYNCED     SYNCED (50%)     SYNCED       istio-pilot-559845496d-856tg     1.1.3

istioctl proxy-config listeners istio-ingressgateway-team-foo-fd49cdd44-rnvlx.istio-system
ADDRESS     PORT      TYPE
0.0.0.0     15090     HTTP

The goal is to place each tenant's ingress gateway (custom resource plus deployment) in a tenant-specific namespace and to configure it to forward traffic to the tenant's other namespaces using the namespace/dnsName host syntax. Because Pilot logged a buildGatewayListeners: no gateways for router istio-ingressgateway-team-foo error, which pointed to a potential bug where Pilot only considers Gateway resources in istio-system, we deployed all team gateways in istio-system as a temporary workaround. With the setup below Pilot no longer logs that line, but our gateway's listeners and routes are still not configured, as shown above.

Here is my custom gateway deployment:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  labels:
    app: "istio-ingressgateway-team-foo"
spec:
  selector:
    app: "istio-ingressgateway-team-foo"
  servers:
  - port:
      name: http
      port: 80
      protocol: HTTP
    hosts:
    - foo-ns/*
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  labels:
    app: "istio-ingressgateway-team-foo"
spec:
  selector:
    matchLabels:
      app: "istio-ingressgateway-team-foo"
  template:
    metadata:
      labels:
        app: "istio-ingressgateway-team-foo"
        exposed: "true"
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      securityContext:
        runAsNonRoot: false
      serviceAccountName: istio-ingressgateway-team-foo
      containers:
        - name: istio-proxy
          image: "istio/proxyv2:1.1.7"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 15020
            - containerPort: 8140
            - containerPort: 15029
            - containerPort: 15030
            - containerPort: 15031
            - containerPort: 15032
            - containerPort: 15443
            - containerPort: 80
            - containerPort: 443
            - containerPort: 15090
              protocol: TCP
              name: http-envoy-prom
          args:
          - proxy
          - router
          - --domain
          - $(POD_NAMESPACE).svc.cluster.local
          - --log_output_level=default:info
          - --drainDuration
          - '45s' #drainDuration
          - --parentShutdownDuration
          - '1m0s' #parentShutdownDuration
          - --connectTimeout
          - '10s' #connectTimeout
          - --serviceCluster
          - istio-ingressgateway-team-foo
          - --zipkinAddress
          - zipkin.istio-system:9411
          - --proxyAdminPort
          - "15000"
          - --statusPort
          - "15020"
          - --controlPlaneAuthPolicy
          - MUTUAL_TLS
          - --discoveryAddress
          - istio-pilot.istio-system:15011
          readinessProbe:
            failureThreshold: 30
            httpGet:
              path: /healthz/ready
              port: 15020
              scheme: HTTP
            initialDelaySeconds: 1
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              cpu: 2000m
              memory: 1G
            requests:
              cpu: 100m
              memory: 128Mi
          env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: INSTANCE_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.podIP
          - name: HOST_IP
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: status.hostIP
          - name: ISTIO_META_POD_NAME
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.name
          - name: ISTIO_META_CONFIG_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: ISTIO_META_ROUTER_MODE
            value: sni-dnat
          volumeMounts:
          - name: istio-certs
            mountPath: /etc/certs
            readOnly: true
          - name: ingressgateway-certs
            mountPath: "/etc/istio/ingressgateway-certs"
            readOnly: true
          - name: ingressgateway-ca-certs
            mountPath: "/etc/istio/ingressgateway-ca-certs"
            readOnly: true
          securityContext:
            readOnlyRootFilesystem: false
      volumes:
      - name: istio-certs
        secret:
          secretName: istio.istio-ingressgateway-team-foo
          optional: true
      - name: ingressgateway-certs
        secret:
          secretName: "istio-ingressgateway-certs"
          optional: true
      - name: ingressgateway-ca-certs
        secret:
          secretName: "istio-ingressgateway-ca-certs"
          optional: true
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  annotations:
    prometheus-sd.po.k8s.zone/ignore_ports: "[853, 8140, 15029, 15030, 15031, 15032, 15443]"
    prometheus-sd.po.k8s.zone/job_type: "tcp_connect"
  labels:
    app: "istio-ingressgateway-team-foo"
spec:
  type: LoadBalancer
  selector:
    app: "istio-ingressgateway-team-foo"
  ports:
    -
      name: status-port
      port: 15020
      targetPort: 15020
    -
      name: https-puppet
      port: 8140
      targetPort: 8140
    -
      name: https-kiali
      port: 15029
      targetPort: 15029
    -
      name: https-prometheus
      port: 15030
      targetPort: 15030
    -
      name: https-grafana
      port: 15031
      targetPort: 15031
    -
      name: https-tracing
      port: 15032
      targetPort: 15032
    -
      name: tls
      port: 15443
      targetPort: 15443
    -
      name: tcp-pilot-grpc-tls
      port: 15011
      targetPort: 15011
    -
      name: tcp-mixer-grpc-tls
      port: 15004
      targetPort: 15004
    -
      name: tcp-citadel-grpc-tls
      port: 8060
      targetPort: 8060
    -
      name: tcp-dns-tls
      port: 853
      targetPort: 853
    -
      name: http
      port: 80
    -
      name: https
      port: 443
---
# HPA
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  labels:
    app: "istio-ingressgateway-team-foo"
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: istio-ingressgateway-team-foo
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
---
# PodDisruptionBudget
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  labels:
    app: "istio-ingressgateway-team-foo"
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: "istio-ingressgateway-team-foo"
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  labels:
    app: "istio-ingressgateway-team-foo"
---
# RBAC
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-restricted-for-rootservices-istio-ingressgateway-team-foo
  namespace: istio-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: koopa-psp-restricted-for-rootservices
subjects:
- kind: ServiceAccount
  name: istio-ingressgateway-team-foo
  namespace: istio-system

My test deployment looks as follows; it was applied with istioctl kube-inject -f deployment.yaml | kubectl apply -f -

apiVersion: v1
kind: Namespace
metadata:
  name: foo-ns
---
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: foo-ns
  labels:
    app: test
spec:
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-v1-foo-ns
  namespace: foo-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
      version: v1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
      - name: hello
        image: istio/examples-helloworld-v1
        ports:
          - containerPort: 5000
            name: http
        securityContext:
          runAsUser: 1000
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-v2-foo-ns
  namespace: foo-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
      version: v2
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
      - name: hello
        image: istio/examples-helloworld-v2
        ports:
          - containerPort: 5000
            name: http
        securityContext:
          runAsUser: 1000
---
apiVersion: "networking.istio.io/v1alpha3"
kind: VirtualService
metadata:
  namespace: foo-ns
  name: test
  labels:
    app: test
spec:
  gateways:
  - mesh
  - istio-system/istio-ingressgateway-team-foo
  hosts:
  - test.foo-ns.svc.cluster.local
  http:
  - route:
    - destination:
        host: test.foo-ns.svc.cluster.local
        subset: v1
      weight: 100
---
apiVersion: "networking.istio.io/v1alpha3"
kind: DestinationRule
metadata:
  namespace: foo-ns
  name: test
  labels:
    app: test
spec:
  host: test.foo-ns.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
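
For reference, once routing works I would expect to be able to smoke-test the gateway from outside the cluster with something like the following (the /hello path is served by the helloworld sample images used above; the LoadBalancer IP lookup assumes our cloud provider populates an IP rather than a hostname):

GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway-team-foo \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -v -H "Host: test.foo-ns.svc.cluster.local" "http://${GATEWAY_IP}/hello"

Right now this request cannot work, since the gateway pods have no listener on port 80 at all.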

Has anybody else attempted a similar setup successfully and can point me to a solution? Is there an error on my side that I missed?

The Gateway config has an error: in spec.servers[].port, the field that holds the port number is called number, not port:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-ingressgateway-team-foo
  namespace: istio-system
  labels:
    app: "istio-ingressgateway-team-foo"
spec:
  selector:
    app: "istio-ingressgateway-team-foo"
  servers:
  - port:
      name: http
      number: 80 # <-- this must be `number`, not `port`
      protocol: HTTP
    hosts:
    - foo-ns/*
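
After applying the corrected Gateway, Pilot should push the missing config; assuming the same pod names as in the istioctl output above, this can be checked with:

istioctl proxy-status
istioctl proxy-config listeners istio-ingressgateway-team-foo-fd49cdd44-rnvlx.istio-system
istioctl proxy-config routes istio-ingressgateway-team-foo-fd49cdd44-rnvlx.istio-system

RDS for the team-foo gateway pods should change from NOT SENT to SYNCED, and a 0.0.0.0:80 listener should appear alongside the 15090 Prometheus one.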