Restrict pod access to specific internal endpoints (VPC), services (K8s), and the entire internet

We want to limit the egress access of an application serving as a webhook service,
which allows users to input any desired endpoint. We’re considering implementing
restrictions on its connectivity as follows:

  1. Maintain internet-wide access so that customers can set up any hosts and
    ports.

  2. Limit its access to specific applications, such as app-1 and app-2. This
    can be achieved with a NetworkPolicy
    (Network Policies | Kubernetes),
    but this would also disallow internet access.

  3. Ideally, we’d like to grant access only to specific internal AWS endpoints
    such as Aurora, MSK, and Cache, which are accessible within the same VPC where
    the K8s cluster is running.

I’ve looked into AuthorizationPolicy
(Istio / Authorization Policy),
and it appears that the hosts and notHosts fields should generally only be
used for external traffic entering the mesh through a gateway, and not for
traffic within the mesh
(Istio / Security Best Practices).

This means that our case might not be entirely covered by AuthorizationPolicy.

What I’ve come up with so far in NetworkPolicy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: sandbox
  namespace: blues
spec:
  podSelector:
    matchLabels:
      k8s-app: sandbox
  policyTypes:
  - Egress
  egress:
  # Allow DNS lookups
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
      - port: 53
        protocol: UDP
      - port: 53
        protocol: TCP
  # Allow outbound traffic to specified services
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: monitoring
      podSelector:
        matchLabels:
          app.kubernetes.io/name: victoria-metrics-single
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          app.kubernetes.io/name: aws-cluster-autoscaler
  # Aurora, MSK, Cache
  - to:
    - ipBlock:
        cidr: 10.0.0.0/16
    ports:
      - port: 5432
        protocol: TCP
      - port: 9096
        protocol: TCP
      - port: 6379
        protocol: TCP
  # Allow internet-wide access on 443 and 80
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
    ports:
      - port: 443
        protocol: TCP
      - port: 80
        protocol: TCP

But that last rule would allow 443 and 80 to any destination, internal or external.
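One way to tighten that last rule, assuming the VPC CIDR is the 10.0.0.0/16 used above, is to carve the internal range out of the internet rule with ipBlock’s except field, so 443 and 80 are only reachable outside the VPC:

```yaml
  # Internet access on 443/80, with the VPC range carved out
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
          - 10.0.0.0/16
    ports:
      - port: 443
        protocol: TCP
      - port: 80
        protocol: TCP
```

Note that if pod IPs are allocated from the VPC range (as with the AWS VPC CNI), the except clause also excludes pod-to-pod traffic on those ports, so any in-cluster destinations still need their own rules like the ones above.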

I thought that maybe I’ve missed some Istio functionality that could fit
the case ideally, but I’m not sure which yet.
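For the record, the closest Istio mechanism I’m aware of is a Sidecar resource with outboundTrafficPolicy set to REGISTRY_ONLY, combined with a ServiceEntry per allowed destination (the hostname below is illustrative, not a real endpoint):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: sandbox
  namespace: blues
spec:
  workloadSelector:
    labels:
      k8s-app: sandbox
  # Only allow egress to hosts known to the mesh's service registry
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY
---
# Register an internal VPC endpoint so the sidecar permits it
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: aurora
  namespace: blues
spec:
  hosts:
    - aurora.example.internal   # illustrative hostname
  location: MESH_EXTERNAL
  resolution: DNS
  ports:
    - number: 5432
      name: tcp-postgres
      protocol: TCP
```

This conflicts with requirement 1, though: REGISTRY_ONLY denies any host that isn’t registered, so arbitrary customer-supplied endpoints would be blocked.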

Update: we decided to use AWS Lambda as a proxy and forward all outbound internet requests through it.
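With the proxy in place, the broad 0.0.0.0/0 rule can be dropped. As a sketch, assuming the function is reached through a VPC interface endpoint for Lambda placed in a dedicated subnet (the CIDR below is illustrative), egress narrows to the proxy alone:

```yaml
  # Egress to the Lambda proxy's VPC interface endpoint only
  # (subnet CIDR is illustrative)
  - to:
    - ipBlock:
        cidr: 10.0.42.0/24
    ports:
      - port: 443
        protocol: TCP
```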