By default, pods in Kubernetes operate within their own isolated network namespace. Traffic is routed through the node's kernel via NAT and bridge forwarding, which introduces a slight performance overhead. By enabling `hostNetwork`, a pod shares the network namespace of its host node. This approach is ideal for high-performance Container Network Interface (CNI) plugins, node-level monitoring, and applications requiring direct access to the node's network interface.
In a production environment, only configure host network mode for pods that absolutely require it. A pod in host network mode uses the network namespace of its node. If a pod is compromised, the attacker gains direct access to the node's network services. These pods bypass network policy restrictions and are instead governed by the node's security group rules.
Configuration requirements
This configuration can only be applied when creating a new workload. You cannot switch an existing pod to host network mode.
To enable host network mode, configure the following fields in the pod spec:
- `hostNetwork: true` – Shares the node's network namespace.
- `dnsPolicy: ClusterFirstWithHostNet` – Ensures the pod can still resolve internal cluster service names while using the host network.
- `containerPort` – Must match the port used by the application process within the container.
```yaml
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  hostNetwork: true # Enable the host network mode.
  dnsPolicy: ClusterFirstWithHostNet # Ensure the pod can resolve domain names within the cluster.
  containers:
  - ...
    ports:
    - containerPort: 12000 # The port the container's process listens on. This value must match your application's configuration. The port 12000 is an example.
  ...
```

Step-by-step example
The following example deploys a DaemonSet whose pods run in host network mode to perform node-level monitoring with node-exporter.
1. Create a file named `node-exporter.yaml` with the following YAML manifest. Replace `<REGION_ID>` with the ID of the region where your cluster is located (such as `cn-hangzhou`).

   ```yaml
   apiVersion: apps/v1
   kind: DaemonSet
   metadata:
     name: node-exporter-demo
     labels:
       app: node-exporter-demo
   spec:
     selector:
       matchLabels:
         app: node-exporter-demo
     template:
       metadata:
         labels:
           app: node-exporter-demo
       spec:
         hostNetwork: true # Enable the host network mode.
         hostPID: true
         dnsPolicy: ClusterFirstWithHostNet # Ensure the pod can resolve domain names within the cluster.
         containers:
         - name: node-exporter-demo
           image: registry-<REGION_ID>-vpc.ack.aliyuncs.com/acs/node-exporter:v0.17.0-slim # Replace <REGION_ID> with your cluster's region ID.
           args:
           - '--path.procfs=/host/proc'
           - '--path.sysfs=/host/sys'
           - '--web.listen-address=0.0.0.0:20000'
           ports:
           - name: metrics
             containerPort: 20000
           volumeMounts:
           - name: proc
             mountPath: /host/proc
             readOnly: true
           - name: sys
             mountPath: /host/sys
             readOnly: true
           resources:
             requests:
               memory: "64Mi"
               cpu: "100m"
             limits:
               memory: "128Mi"
               cpu: "200m"
         volumes:
         - name: proc
           hostPath:
             path: /proc
         - name: sys
           hostPath:
             path: /sys
   ```

2. Apply the manifest to create the DaemonSet.

   ```shell
   kubectl apply -f node-exporter.yaml
   ```

   Expected output:

   ```text
   daemonset.apps/node-exporter-demo created
   ```

3. Check the status and IP addresses of the pods. The pod IP should be identical to the node IP.

   ```shell
   kubectl get pod -o wide
   ```

   Expected output:

   ```text
   NAME                       READY   STATUS    RESTARTS   AGE   IP               NODE                     NOMINATED NODE   READINESS GATES
   node-exporter-demo-49v**   1/1     Running   0          15h   10.***.8.109     xx-xxxx.10.***.8.109     <none>           <none>
   node-exporter-demo-jdx**   1/1     Running   0          15h   10.***.203.146   xx-xxxx.10.***.203.146   <none>           <none>
   node-exporter-demo-krg**   1/1     Running   0          15h   10.***.105.151   xx-xxxx.10.***.105.151   <none>           <none>
   ```

4. Log on to one of the nodes and verify that the service is accessible on the host port. The pod listens directly on the node's port `20000`. On the node, run the following command to access the pod's endpoint:

   ```shell
   curl localhost:20000/metrics
   ```

   If the command returns the node's metrics, the configuration is successful.
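As a quick programmatic check, you can print each pod's `podIP` next to its node's `hostIP`; for host-network pods the two values match. The sketch below assumes the `app=node-exporter-demo` label from the example manifest.

```shell
# List each matching pod's name, pod IP, and host IP in tab-separated columns.
# For host-network pods, status.podIP and status.hostIP are identical.
kubectl get pod -l app=node-exporter-demo \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\t"}{.status.hostIP}{"\n"}{end}'
```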
FAQ
Why is my pod stuck in the Pending state?
Common causes include:
Port conflict: The port specified by the pod is already in use on the node. This prevents the container's process from binding to the port, causing the pod to fail. Ensure the following reserved ports are avoided:

- Core cluster components: `6443`, `9890`, `9099`, `10250`, `10256`, and `30000` to `32767`.
- Standard services: `22`, `53`, `80`, and `443`.
- Custom ports used by other workloads on the node.
Pod Security Admission (PSA): In clusters with strict security policies, host networking is blocked by default. You must label the namespace to allow `privileged` pods:

Important: Setting this label grants pods in the namespace permissions to perform all privileged operations. Use this label with caution.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-privileged-ns
  labels:
    pod-security.kubernetes.io/enforce: privileged
```

For configuration details of `pod-security.kubernetes.io`, see Pod Security Admission.

Container security policies: Ensure your policy allows pods to use the host network and that the declared ports are within the allowed range.
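To rule out a port conflict before (re)deploying, you can probe whether a candidate host port is already bound on the node. The snippet below is a minimal sketch using bash's `/dev/tcp` pseudo-device (bash-specific; port `20000` is the example port from the DaemonSet above, not a required value).

```shell
# Probe a candidate host port before assigning it to a host-network pod.
# Port 20000 is an example; substitute your application's port.
port=20000
if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
  status="in use"   # something on this node already listens on the port
else
  status="free"     # the connection was refused, so the port is available
fi
echo "port ${port} is ${status}"
```

A connection attempt only detects listeners on the loopback interface's view of the port; tools such as `ss -ltn` on the node give a fuller picture when available.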
Why can't my pod resolve cluster domain names?
This typically happens if `dnsPolicy` is not set correctly. Ensure `spec.dnsPolicy` is set to `ClusterFirstWithHostNet`. If it is set to `ClusterFirst`, the pod will use the host node's `/etc/resolv.conf`, which does not contain the cluster's internal DNS settings. See the Step-by-step example section.
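To see which resolver a running host-network pod actually received, you can inspect its `/etc/resolv.conf`; with `ClusterFirstWithHostNet`, the `nameserver` entry should be the cluster DNS service IP rather than the node's upstream resolver. `<POD_NAME>` below is a placeholder for one of your pod names.

```shell
# Show the DNS configuration the pod is actually using.
kubectl exec <POD_NAME> -- cat /etc/resolv.conf
```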