I am using Istio and have Karpenter set up for node autoscaling. I have a couple of services running and I'm using the Istio ingress gateway. Istio creates a Classic Load Balancer in AWS when the ingress gateway Service is provisioned.
Now I am facing this issue: the load balancer that was created spans two availability zones, eu-west-1a and eu-west-1b. On eu-west-1a the surge queue length is almost at 1024 (the maximum for a Classic Load Balancer), even though the targets in that availability zone look healthy. I'm reading the per-AZ metric as shown in the snippet below.
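In case it's useful, this is a rough sketch of how I'm pulling the per-AZ SurgeQueueLength metric from CloudWatch with boto3 (LB_NAME is a placeholder for my actual CLB name, not the DNS hostname):

# Sketch: read the per-AZ SurgeQueueLength of the classic ELB from CloudWatch.
# LB_NAME is a placeholder for the CLB name created for the istio-ingressgateway Service.
import datetime

import boto3

LB_NAME = "my-classic-elb-name"  # placeholder
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

now = datetime.datetime.utcnow()
for az in ("eu-west-1a", "eu-west-1b"):
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/ELB",
        MetricName="SurgeQueueLength",
        Dimensions=[
            {"Name": "LoadBalancerName", "Value": LB_NAME},
            {"Name": "AvailabilityZone", "Value": az},
        ],
        StartTime=now - datetime.timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Maximum"],
    )
    points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
    print(az, [p["Maximum"] for p in points])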
Any idea why this could be happening, and is there something I can do about it? Below is what the Service looks like:
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
  uid: 2d4d3df0-0ccd-42b3-a09b-2d3bfc51a1e6
  resourceVersion: '25530105'
  creationTimestamp: '2023-05-12T12:27:20Z'
  labels:
    ...
  annotations:
    meta.helm.sh/release-name: istio-ingressgateway
    meta.helm.sh/release-namespace: istio-ingress
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: >-
      arn:aws:acm:eu-west-1:ACCOUNT_ID:certificate/CERT_ID
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  managedFields:
  - manager: terraform-provider-helm_v2.9.0_x5
    operation: Update
    apiVersion: v1
    time: '2023-05-12T12:27:20Z'
    fieldsType: FieldsV1
    fieldsV1:
      ....
      f:spec:
        f:allocateLoadBalancerNodePorts: {}
        f:externalTrafficPolicy: {}
        f:internalTrafficPolicy: {}
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":443,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":15021,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector: {}
        f:sessionAffinity: {}
        f:type: {}
  - manager: aws-cloud-controller-manager
    operation: Update
    apiVersion: v1
    time: '2023-05-12T12:27:22Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"service.kubernetes.io/load-balancer-cleanup": {}
      f:status:
        f:loadBalancer:
          f:ingress: {}
    subresource: status
  - manager: kubectl-patch
    operation: Update
    apiVersion: v1
    time: '2023-07-21T02:50:08Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: {}
          f:service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {}
          f:service.beta.kubernetes.io/aws-load-balancer-ssl-ports: {}
  selfLink: /api/v1/namespaces/istio-ingress/services/istio-ingressgateway
status:
  loadBalancer:
    ingress:
    - hostname: HOSTNAME.eu-west-1.elb.amazonaws.com
spec:
  ports:
  - name: status-port
    protocol: TCP
    port: 15021
    targetPort: 15021
    nodePort: 30938
  - name: http2
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 32121
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
    nodePort: 32578
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  clusterIP: 172.20.23.186
  clusterIPs:
  - 172.20.23.186
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  allocateLoadBalancerNodePorts: true
  internalTrafficPolicy: Cluster
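For extra context, this is roughly how I'm checking whether cross-zone load balancing is enabled on the CLB and what the per-AZ instance health looks like (again with boto3; LB_NAME is a placeholder for the actual CLB name):

# Sketch: check the CLB's cross-zone setting and the health of registered instances.
import boto3

LB_NAME = "my-classic-elb-name"  # placeholder
elb = boto3.client("elb", region_name="eu-west-1")

attrs = elb.describe_load_balancer_attributes(LoadBalancerName=LB_NAME)
print("CrossZoneLoadBalancing:", attrs["LoadBalancerAttributes"]["CrossZoneLoadBalancing"]["Enabled"])

health = elb.describe_instance_health(LoadBalancerName=LB_NAME)
for state in health["InstanceStates"]:
    print(state["InstanceId"], state["State"], state.get("Description", ""))

My unverified suspicion is that with cross-zone load balancing disabled and Karpenter placing nodes unevenly across the two zones, one zone's backends fall behind and its surge queue fills up, but I haven't confirmed that.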