I am trying to set up Datadog tracing for a WordPress site running in our Kubernetes cluster. I was able to get it working, but only by hard-coding the IP address of the K8s node pool and adding a ServiceEntry for it. That IP address could change, and we also have both a primary and a secondary node pool (I'm not sure whether the Datadog agent would ever end up running in the secondary pool, but it seems possible).
Is there any way to make an exception in the firewall that does not require hard-coding the IP address? Or maybe the outbound traffic policy can be customized to allow access to the IP of the node pool?
Following the Datadog docs, this is how I'm setting the agent hostname in my container (the downward API injects the IP of the node the pod is scheduled on):
- name: DD_AGENT_HOST
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
And this is the service entry I created:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-node-pool
  namespace: {{ .Release.Namespace }}
  labels:
    app: external-node-pool
    version: {{ .Chart.Version }}
spec:
  hosts:
  - external-node-pool.tcp.svcentry
  ports:
  - number: 8126
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  # Is there a way to dynamically point this to the IP address of the node pool?
  - address: 10.24.6.62
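One idea I've considered, but not tested: instead of a single node IP, match on the node subnet. The Istio ServiceEntry reference describes using `resolution: NONE` with an `addresses` CIDR block, so something like the following might cover every node in both pools. The `10.24.0.0/16` range here is a placeholder for our actual node CIDR, and the host name is just a dummy since matching would be by address:

```yaml
# Untested sketch: allow APM traffic (port 8126) to any node IP in the
# cluster's node subnet, rather than one hard-coded address.
# 10.24.0.0/16 is a placeholder for the real node CIDR.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-node-pool
  namespace: {{ .Release.Namespace }}
spec:
  hosts:
  - external-node-pool.tcp.svcentry  # dummy host; TCP matching is by address
  addresses:
  - 10.24.0.0/16
  ports:
  - number: 8126
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: NONE  # pass traffic through to whatever IP the client dialed
```

If that works, it would survive node replacement and cover the secondary pool too, as long as both pools draw from the same subnet.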
Given that this is an internal IP address for the cluster itself, it seems odd to me that it’s being blocked in the first place, but I assume that’s by design for some reason.
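For reference, the "customize the outbound traffic policy" option I mentioned would, as far as I can tell from the Istio Sidecar docs, look something like this. It's untested, and it loosens egress for the entire namespace rather than just the trace-agent port, which may be too broad for our firewall requirements:

```yaml
# Untested sketch: per-namespace Sidecar resource that lets workloads
# reach destinations outside the mesh registry (ALLOW_ANY) instead of
# blocking them (REGISTRY_ONLY).
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: {{ .Release.Namespace }}
spec:
  outboundTrafficPolicy:
    mode: ALLOW_ANY
```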