Service health check exposed for AWS NLB

We’re using the health check that’s automatically set up by the k8s in-tree cloud provider’s service controller. If the service has externalTrafficPolicy: Cluster, it’s a TCP health check against the designated NodePort. If you change it to externalTrafficPolicy: Local, k8s instead creates an HTTP health check against a dedicated health-check node port (spec.healthCheckNodePort), which only reports healthy if at least one pod backing the service is running on that node. These health checks, as far as I’m aware, are not configurable (especially when using NLB; the older CLB has a few more knobs but the NLB does not have feature parity).
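To make that concrete, here’s a minimal sketch of a service that would get the Local-style health check. The aws-load-balancer-type annotation is the in-tree one; the healthCheckNodePort value is purely illustrative, since k8s normally allocates it for you:

```yaml
# Sketch only: a LoadBalancer service using the in-tree NLB support.
apiVersion: v1
kind: Service
metadata:
  name: ingressgateway
  annotations:
    # In-tree annotation that provisions an NLB instead of a CLB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  # Local: the NLB health-checks each node over HTTP on healthCheckNodePort,
  # which only passes if that node is running a ready pod for this service.
  externalTrafficPolicy: Local
  # Normally auto-allocated; shown here for illustration.
  healthCheckNodePort: 30000
  selector:
    app: ingressgateway
  ports:
    - name: http
      port: 80
      targetPort: 8080
```

With Local, kube-proxy serves that check at /healthz on the health-check node port, answering 200 (with a localEndpoints count in the body) when the node has a ready pod and 503 otherwise; with Cluster there’s no HTTP endpoint at all, just the TCP check on the regular NodePort.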

The problem here is that, no matter which externalTrafficPolicy value you use, the health check only ever reflects the health of the ingress gateway itself. Here’s a snippet from the Global Accelerator docs:

For Application Load Balancer or Network Load Balancer endpoints, you configure health checks for the resources by using Elastic Load Balancing configuration options. … Health check options that you choose in Global Accelerator do not affect Application Load Balancers or Network Load Balancers that you’ve added as endpoints.

On a side note, it looks like the ALB ingress controller 2.0 is going to add the ability to wire pod IPs straight into ELB target groups (just as it does now with ALBs). That’s nice, since the NLB could then go direct to the pods (discovered via the service’s endpoints).
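If that lands, I’d guess the wiring looks something like the sketch below. The annotation names here are assumptions modeled on the v2 controller’s proposed IP target mode, not a released API:

```yaml
# Speculative sketch: NLB targeting pod IPs via the v2 controller.
apiVersion: v1
kind: Service
metadata:
  name: ingressgateway
  annotations:
    # Assumed annotations: hand the service off to the v2 controller and
    # register pod IPs (taken from the service's endpoints) as NLB targets,
    # instead of node instances.
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
spec:
  type: LoadBalancer
  selector:
    app: ingressgateway
  ports:
    - name: http
      port: 80
      targetPort: 8080
```

That would also make the NLB’s health check meaningful per pod, since each target would be a pod IP rather than a whole node.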