Container Service for Kubernetes: Configure a pod to use host network mode (hostNetwork)

Last Updated: Jan 09, 2026

By default, pods in Kubernetes operate within their own isolated network namespace. Traffic is routed through the node's kernel via NAT and bridge forwarding, which introduces a slight performance overhead. By enabling hostNetwork, a pod shares the network namespace of its host node. This approach is ideal for high-performance Container Network Interface (CNI) plugins, node-level monitoring, and applications requiring direct access to the node's network interface.
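
As a quick illustration (a sketch; <hostnetwork-pod> is a placeholder for any running host-network pod whose image includes the ip tool), listing network interfaces from inside such a pod shows the node's devices, such as eth0, rather than a single pod-side veth interface:

kubectl exec <hostnetwork-pod> -- ip addr show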

Important

In a production environment, only configure host network mode for pods that absolutely require it. A pod in host network mode uses the network namespace of its node. If a pod is compromised, the attacker gains direct access to the node's network services. These pods bypass network policy restrictions and are instead governed by the node's security group rules.
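
To audit which pods in a cluster already run in host network mode, you can filter on the spec.hostNetwork field. A minimal sketch using kubectl's JSONPath support:

kubectl get pods --all-namespaces -o jsonpath='{range .items[?(@.spec.hostNetwork==true)]}{.metadata.namespace}{"\t"}{.metadata.name}{"\n"}{end}'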

Configuration requirements

Note

This configuration can only be applied when creating a new workload. You cannot switch an existing pod to host network mode.

To enable host network mode, configure the following fields in the pod spec:

  • hostNetwork: true – Shares the node's network namespace.

  • dnsPolicy: ClusterFirstWithHostNet – Ensures the pod can still resolve internal cluster service names while using the host network.

  • containerPort – Must match the port used by the application process within the container.

apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  hostNetwork: true # Enable the host network mode.
  dnsPolicy: ClusterFirstWithHostNet # Ensure the pod can resolve domain names within the cluster.
  containers:
  - ...
    ports:
      - containerPort: 12000 # The port that the container's process listens on. This value must match your application's configuration. Port 12000 is only an example.
  ...

Step-by-step example

The following example deploys a DaemonSet whose pods run in host network mode to perform node-level monitoring with node-exporter.

  1. Create a file named node-exporter.yaml with the following YAML manifest. Replace <REGION_ID> with the ID of the region where your cluster is located (such as cn-hangzhou).

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-exporter-demo
      labels:
        app: node-exporter-demo
    spec:
      selector:
        matchLabels:
          app: node-exporter-demo
      template:
        metadata:
          labels:
            app: node-exporter-demo
        spec:
          hostNetwork: true # Enable the host network mode.
          hostPID: true # Share the node's PID namespace so that node-exporter can collect process-level metrics.
          dnsPolicy: ClusterFirstWithHostNet # Ensure the pod can resolve domain names within the cluster.
          containers:
          - name: node-exporter-demo
            image: registry-<REGION_ID>-vpc.ack.aliyuncs.com/acs/node-exporter:v0.17.0-slim # Replace <REGION_ID> with your cluster's region ID.
            args:
            - '--path.procfs=/host/proc'
            - '--path.sysfs=/host/sys'
            - '--web.listen-address=0.0.0.0:20000'
            ports:
            - name: metrics
              containerPort: 20000
            volumeMounts:
            - name: proc
              mountPath: /host/proc
              readOnly: true
            - name: sys
              mountPath: /host/sys
              readOnly: true
            resources:
              requests:
                memory: "64Mi"
                cpu: "100m"
              limits:
                memory: "128Mi"
                cpu: "200m"
          volumes:
          - name: proc
            hostPath:
              path: /proc
          - name: sys
            hostPath:
              path: /sys
    
  2. Apply the manifest to create the DaemonSet.

    kubectl apply -f node-exporter.yaml

    Expected output:

    daemonset.apps/node-exporter-demo created
  3. Check the status and IP addresses of the pods. The pod IP should be identical to the node IP.

    kubectl get pod -o wide

    Expected output:

    NAME                       READY   STATUS    RESTARTS   AGE     IP               NODE                      NOMINATED NODE   READINESS GATES
    node-exporter-demo-49v**   1/1     Running   0          15h     10.***.8.109     xx-xxxx.10.***.8.109      <none>           <none>
    node-exporter-demo-jdx**   1/1     Running   0          15h     10.***.203.146   xx-xxxx.10.***.203.146    <none>           <none>
    node-exporter-demo-krg**   1/1     Running   0          15h     10.***.105.151   xx-xxxx.10.***.105.151    <none>           <none>
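
    In host network mode, the pod's status.podIP equals the node's status.hostIP. As a quick programmatic check (a minimal sketch; the label selector matches the DaemonSet above), print both fields side by side:

    kubectl get pods -l app=node-exporter-demo -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\t"}{.status.hostIP}{"\n"}{end}'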
  4. Log on to one of the nodes and verify that the service is accessible on the host port. The pod listens directly on the node's port 20000. On the node, run the following command to access the pod's endpoint:

    curl localhost:20000/metrics

    If the command returns the node's metrics, the configuration is successful.
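
    The full output is long. To spot-check a well-known metric family (a hedged example; node_cpu metrics are exposed by node-exporter), filter the response:

    curl -s localhost:20000/metrics | grep '^node_cpu'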

FAQ

Why is my pod stuck in the Pending state?

Common causes include:

  • Port conflict: The port specified by the pod is already in use on the node. This prevents the container's process from binding to the port, causing the pod to fail. Ensure the following reserved ports are avoided:

    • Core cluster components: 6443, 9890, 9099, 10250, 10256, and 30000 to 32767.

    • Standard services: 22, 53, 80, and 443.

    • Custom ports used by other workloads on the node.
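
    Before assigning a port, you can check whether it is already bound on the target node (a minimal sketch; 20000 is the example port from the DaemonSet above). Run the following command on the node:

    ss -lntp | grep ':20000'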

  • Pod Security Admission (PSA): In namespaces that enforce the baseline or restricted Pod Security Standard, pods that request host networking are rejected. To allow them, label the namespace to permit privileged pods:

    Important

    Setting this label grants pods in the namespace permissions to perform all privileged operations. Use this label with caution.

    apiVersion: v1
    kind: Namespace
    metadata:
      name: my-privileged-ns
      labels:
        pod-security.kubernetes.io/enforce: privileged
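
    Alternatively, you can set the label on an existing namespace directly:

    kubectl label namespace my-privileged-ns pod-security.kubernetes.io/enforce=privileged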

    For configuration details of pod-security.kubernetes.io, see Pod Security Admission.
  • Container security policies: Ensure your policy allows pods to use the host network and that the declared ports are within the allowed range.

Why can't my pod resolve cluster domain names?

This typically happens when dnsPolicy is not set correctly. Ensure that spec.dnsPolicy is set to ClusterFirstWithHostNet. If it is left at ClusterFirst, a host-network pod falls back to the node's /etc/resolv.conf, which does not point to the cluster's internal DNS service. See the Step-by-step example section.
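
To verify the effective DNS configuration, inspect the pod's /etc/resolv.conf (a sketch; replace <pod-name> with one of your pod names, and note that the container image must provide cat). With ClusterFirstWithHostNet, the nameserver entry should be the cluster DNS service IP rather than the node's upstream resolver:

kubectl exec <pod-name> -- cat /etc/resolv.conf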