Container Service for Kubernetes: Best practices for DNS services

Last Updated: Oct 11, 2024

The Domain Name System (DNS) service is a basic service for Kubernetes clusters. If the DNS settings of your client are not properly configured or if you use a large cluster, DNS resolution timeouts and failures may occur. This topic describes the best practices for configuring DNS services in Kubernetes clusters to help you avoid these issues.

For more information about CoreDNS, see Official Documentation of CoreDNS.

Optimize DNS queries

DNS queries are submitted frequently in Kubernetes clusters, and a large number of them can be optimized or avoided. You can optimize DNS queries by using one of the following methods:

  • Use a connection pool: If a client pod frequently accesses a Service, we recommend that you use a connection pool. A connection pool maintains established connections to the upstream Service in memory. This way, the client pod no longer needs to send a DNS query and establish a TCP connection each time it accesses the Service.

  • Use DNS caching:

    • If you cannot use a connection pool to connect a client pod to a Service, we recommend that you cache DNS resolution results on the client pod side. For more information, see Use NodeLocal DNSCache to optimize DNS resolution.

    • If you cannot use NodeLocal DNSCache, you can use the built-in Name Service Cache Daemon (NSCD) in containers. For more information about how to use NSCD, see Use NSCD in Kubernetes clusters.

  • Optimize the resolv.conf file: Because of how the ndots and search parameters in the resolv.conf file work, the efficiency of DNS resolution is affected by how you specify domain names in containers. For more information about the ndots and search parameters, see DNS policies and domain name resolution. An example resolv.conf file is provided after this list.

  • Optimize domain name settings: You can specify the domain name that a client pod needs to access based on the following rules. These rules can help minimize the number of attempts to resolve the domain name and make the DNS resolution service more efficient.

    • If the client pod needs to access a Service in the same namespace, use <service-name> as the domain name. <service-name> indicates the name of the Service.

    • If the client pod needs to access a Service in another namespace, use <service-name>.<namespace-name> as the domain name. <namespace-name> indicates the namespace to which the Service belongs.

    • If the client pod needs to access an external domain name, you can specify the domain name in the Fully Qualified Domain Name (FQDN) format by appending a period (.) to the domain name. This avoids invalid DNS lookups caused by combining the search domain with the domain name to be queried. For example, if the client pod needs to access www.aliyun.com, you can specify www.aliyun.com. as the domain name.
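For reference, the /etc/resolv.conf file of a pod that uses the default ClusterFirst DNS policy typically looks like the following example. The nameserver IP address and the search domains are assumptions that vary with your cluster configuration and the namespace of the pod:

# Example (values vary by cluster and namespace):
nameserver 172.21.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

With these settings, a query for www.aliyun.com contains fewer dots than the ndots value, so the domain name is first combined with each search domain, which produces several invalid lookups. A query for www.aliyun.com. (with the trailing period) is sent to the upstream DNS server directly.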

Use proper container images

The implementation of the built-in musl libc in Alpine container images is different from that of glibc:

  • Alpine 3.3 and earlier versions do not support the search parameter. As a result, you cannot specify search domains or discover Services.

  • musl libc sends queries to all of the DNS servers that are specified in the /etc/resolv.conf file in parallel. As a result, NodeLocal DNSCache cannot optimize DNS resolution.

  • musl libc sends A and AAAA queries over the same socket in parallel. In earlier kernel versions, this triggers a conntrack race condition that causes packet loss.

For more information, see musl libc.

If containers that are deployed in a Kubernetes cluster use Alpine as the base image, domain names may not be resolved due to the use of musl libc. We recommend that you replace the image with an image that is based on Debian or CentOS.
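To check whether a container is Alpine-based and therefore uses musl libc, you can inspect the OS release file inside the running container. This is a minimal sketch that assumes the container provides a shell; replace <pod-name> with the name of your pod:

kubectl exec -it <pod-name> -- sh -c 'cat /etc/os-release'
# Alpine-based images report ID=alpine. Such images use musl libc instead of glibc.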

Reduce the adverse effect of occasional DNS resolution timeouts caused by IPVS defects

If the load balancing mode of kube-proxy is set to IPVS in your cluster, DNS resolution timeouts may occur when CoreDNS pods are scaled in or restarted. These issues are caused by Linux kernel bugs. For more information, see IPVS.

You can reduce the adverse effect of these issues by deploying NodeLocal DNSCache, as described in the following section.

Use NodeLocal DNSCache to optimize DNS resolution

Container Service for Kubernetes (ACK) allows you to deploy NodeLocal DNSCache to improve the stability and performance of service discovery. NodeLocal DNSCache is implemented as a DaemonSet and runs a DNS caching agent on cluster nodes to improve the efficiency of DNS resolution for ACK clusters.

For more information about NodeLocal DNSCache and how to deploy NodeLocal DNSCache in ACK clusters, see Configure NodeLocal DNSCache.

Use proper CoreDNS versions

CoreDNS is backward-compatible with Kubernetes. We recommend that you use a recent stable version of CoreDNS. You can install, update, and configure CoreDNS on the Add-ons page of the ACK console. If the status of the CoreDNS component indicates that an update is available, we recommend that you update the component during off-peak hours at the earliest opportunity.

The following issues may occur in CoreDNS versions earlier than 1.7.0:

  • If connectivity exceptions occur between CoreDNS and the API server, such as network jitters, API server restarts, or API server migrations, CoreDNS pods may be restarted because error logs cannot be written. For more information, see Set klog's logtostderr flag.

  • CoreDNS occupies extra memory resources during the initialization process. In large clusters, the default memory limit may cause out of memory (OOM) errors during this process. If this happens, CoreDNS pods may be repeatedly restarted but fail to start. For more information, see CoreDNS uses a lot memory during initialization phase.

  • CoreDNS has issues that may affect the domain name resolution of headless Services and requests from outside the cluster. For more information, see plugin/kubernetes: handle tombstones in default processor and Data is not synced when CoreDNS reconnects to kubernetes api server after protracted disconnection.

  • Some earlier CoreDNS versions are configured with default toleration rules that may cause CoreDNS pods to be deployed on abnormal nodes and fail to be automatically evicted when exceptions occur on the nodes. This may lead to domain name resolution errors in the cluster.

The following table describes the recommended minimum CoreDNS versions for clusters that run different Kubernetes versions.

Kubernetes version                        | Minimum CoreDNS version
Earlier than 1.14.8 (discontinued)        | v1.6.2
1.14.8 and later but earlier than 1.20.4  | v1.7.0.0-f59c03d-aliyun
1.20.4 and later                          | v1.8.4.1-3a376cc-aliyun

Note

Kubernetes versions earlier than 1.14.8 are discontinued. We recommend that you update the Kubernetes version before you update CoreDNS.
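You can check which CoreDNS version is deployed in your cluster by querying the image tag of the coredns Deployment. A minimal sketch:

kubectl -n kube-system get deployment coredns \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
# The image tag, for example v1.8.4.1-3a376cc-aliyun, indicates the version.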

Monitor the status of CoreDNS

Monitoring metrics

CoreDNS exposes metrics, such as DNS resolution results, in the standard Prometheus format. This allows you to identify exceptions in CoreDNS and upstream DNS servers at the earliest opportunity.

By default, monitoring metrics and alert rules related to CoreDNS are predefined in Managed Service for Prometheus provided by Alibaba Cloud. You can log on to the ACK console to enable Managed Service for Prometheus and dashboards. For more information, see CoreDNS monitoring.

If you use open source Prometheus to monitor the Kubernetes cluster, you can view the related metrics in Prometheus and create alert rules based on key metrics. For more information, see the official CoreDNS Prometheus documentation.
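For example, the following alert rules are built on two standard CoreDNS metrics, coredns_panics_total and coredns_dns_responses_total. This is an illustrative sketch; adjust the thresholds and durations to your workloads:

groups:
- name: coredns-alerts
  rules:
  - alert: CoreDNSPanics
    # Any panic in the CoreDNS process is abnormal and needs attention.
    expr: increase(coredns_panics_total[5m]) > 0
  - alert: CoreDNSHighServfailRate
    # A sustained rate of SERVFAIL responses indicates upstream or plug-in issues.
    expr: sum(rate(coredns_dns_responses_total{rcode="SERVFAIL"}[5m])) > 1
    for: 5m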

Operational log

When DNS resolution errors occur, you can view the log of CoreDNS to identify the causes. We recommend that you enable logging for CoreDNS and use Log Service to collect log data. For more information, see Collect and analyze CoreDNS logs.

Sink Kubernetes events

CoreDNS v1.9.3.6-32932850-aliyun and later versions allow you to enable the k8s_event plug-in to sink Kubernetes events that contain the Info, Error, and Warning logs of CoreDNS to the event center. For more information, see k8s_event.

By default, the k8s_event plug-in is enabled after you deploy CoreDNS v1.9.3.6-32932850-aliyun or a later version. If you deployed an earlier CoreDNS version and then updated it to v1.9.3.6-32932850-aliyun or later, you need to modify the ConfigMap of CoreDNS to enable the k8s_event plug-in.

  1. Run the following command to open the coredns ConfigMap:

    kubectl -n kube-system edit configmap/coredns
  2. Add the kubeapi and k8s_event plug-ins.

    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            errors
            health {
                lameduck 15s
            }
    
            # The beginning of the plug-in configuration. Ignore other settings. 
            kubeapi
            k8s_event {
              level info error warning   # Report Info, Error, and Warning logs to Kubernetes events. 
            }
            # The end of the plug-in configuration. 

            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods verified
                fallthrough in-addr.arpa ip6.arpa
            }
            # Details are not shown. 
        }
  3. Check the status and logs of the CoreDNS pods. If the log data contains the reload keyword, the new configuration is loaded.
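For example, you can run a command similar to the following to check the logs. The k8s-app=kube-dns label is the default label of CoreDNS pods; adjust it if your deployment uses a different label:

kubectl -n kube-system logs -l k8s-app=kube-dns --tail=100 | grep reload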

Modify the CoreDNS Deployment

Modify the number of CoreDNS pods

We recommend that you provision at least two CoreDNS pods. You must make sure that the number of CoreDNS pods is sufficient to handle DNS queries within the cluster.

The DNS QPS of CoreDNS is related to CPU usage. A single vCPU can handle more than 10,000 DNS QPS if DNS caching is enabled. The DNS QPS required by different workloads may vary. You can evaluate the DNS QPS based on the peak CPU usage of each CoreDNS pod. We recommend that you increase the number of CoreDNS pods if a CoreDNS pod occupies more than one vCPU during peak hours. If you cannot confirm the peak CPU usage of each CoreDNS pod, you can set the ratio of CoreDNS pods to cluster nodes to 1:8. This way, a CoreDNS pod is created each time you add eight nodes to the cluster. For example, a 32-node cluster at this ratio runs four CoreDNS pods. The total number of CoreDNS pods must not exceed 10. If your cluster contains more than 100 nodes, we recommend that you use NodeLocal DNSCache. For more information, see Use NodeLocal DNSCache to optimize DNS resolution.

Note

UDP does not support retransmission. When a CoreDNS pod is deleted or restarted, in-flight UDP packets may be dropped, and the cluster may experience DNS query timeouts or failures. If the cluster nodes are affected by the IPVS UDP packet loss issue, these timeouts or failures can last up to 5 minutes after a CoreDNS pod is deleted or restarted. For more information about how to resolve DNS query failures caused by IPVS issues, see What do I do if DNS resolutions fail due to IP Virtual Server (IPVS) errors?.

Schedule CoreDNS pods to proper nodes

When you deploy CoreDNS pods in a cluster, we recommend that you deploy them on different cluster nodes across multiple zones. This prevents service disruptions when a single node or zone fails. By default, soft node-based anti-affinity settings are configured for CoreDNS. As a result, some or all CoreDNS pods may be deployed on the same node if the cluster does not contain enough nodes. In this case, we recommend that you delete the CoreDNS pods and reschedule them.

CoreDNS pods must not be deployed on cluster nodes whose CPU and memory resources are fully utilized. Otherwise, DNS QPS and response time are adversely affected. If the cluster contains sufficient nodes, you can schedule CoreDNS pods to exclusive nodes by configuring custom parameters. This allows CoreDNS to provide stable domain name resolution services. For more information about how to schedule CoreDNS pods to exclusive cluster nodes, see Deploy CoreDNS pods on exclusive nodes by configuring custom parameters.

Manually increase the number of CoreDNS pods

If the number of cluster nodes remains unchanged for a long period of time, you can run the following command to increase the number of CoreDNS pods:

kubectl scale --replicas={target} deployment/coredns -n kube-system
Note

Replace {target} with the required value.

Use cluster-proportional-autoscaler to automatically adjust the number of CoreDNS pods

If the number of cluster nodes changes over time, you can use the following YAML template to deploy cluster-proportional-autoscaler and dynamically adjust the number of CoreDNS pods:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dns-autoscaler
  namespace: kube-system
  labels:
    k8s-app: dns-autoscaler
spec:
  selector:
    matchLabels:
      k8s-app: dns-autoscaler
  template:
    metadata:
      labels:
        k8s-app: dns-autoscaler
    spec:
      serviceAccountName: admin
      containers:
      - name: autoscaler
        image: registry.cn-hangzhou.aliyuncs.com/acs/cluster-proportional-autoscaler:1.8.4
        resources:
          requests:
            cpu: "200m"
            memory: "150Mi"
        command:
        - /cluster-proportional-autoscaler
        - --namespace=kube-system
        - --configmap=dns-autoscaler
        - --nodelabels=type!=virtual-kubelet
        - --target=Deployment/coredns
        - --default-params={"linear":{"coresPerReplica":64,"nodesPerReplica":8,"min":2,"max":100,"preventSinglePointFailure":true}}
        - --logtostderr=true
        - --v=9

In the preceding example, a linear scaling policy is used. The number of CoreDNS pods is calculated based on the following formula: Replicas = max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)). The result is clamped to the values of min and max in the linear scaling policy. The following code block shows the parameters of the linear scaling policy:

{
  "coresPerReplica": 64,
  "nodesPerReplica": 8,
  "min": 2,
  "max": 100,
  "preventSinglePointFailure": true
}
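For example, in a cluster with 100 nodes and 400 vCPUs in total, this policy yields max(ceil(400/64), ceil(100/8)) = max(7, 13) = 13 CoreDNS pods, which falls within the range defined by min and max.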

Use HPA to increase the number of CoreDNS pods based on CPU loads

Horizontal Pod Autoscaler (HPA) frequently triggers scale-in activities for CoreDNS pods, which may interrupt in-flight DNS queries. We recommend that you do not use HPA for CoreDNS. If HPA is required in specific scenarios, you can refer to the following policy configuration based on CPU utilization:

---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: coredns-hpa
  namespace: kube-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: coredns
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 50
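The autoscaling/v2beta1 API version in this example is removed in Kubernetes 1.25 and later. In clusters that run those versions, use autoscaling/v2 instead and express the CPU target as metrics[].resource.target.averageUtilization.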
Note

For more information about how to use HPA, see Implement horizontal pod autoscaling.

Deploy CoreDNS pods on exclusive nodes by using custom parameters

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Nodes > Nodes.

  3. On the Nodes page, click Manage Labels and Taints.

  4. On the Labels tab of the Manage Labels and Taints page, select the nodes that you want to manage and click Add Label.

    Note

    The number of nodes that you select must be greater than the number of CoreDNS pods. This avoids deploying multiple CoreDNS pods to the same node.

  5. In the Add dialog box, set the following parameters and click OK.

    • Name: Enter node-role-type.

    • Value: Enter coredns.

  6. In the left-side navigation pane, choose Operations > Add-ons. On the page that appears, search for CoreDNS.

  7. Click Configuration in the CoreDNS section. In the CoreDNS Parameters dialog box, click + Add to the right of NodeSelector, set the following parameters, and then click OK.

    • Key: Enter node-role-type.

    • Value: Enter coredns.

    CoreDNS pods are rescheduled to nodes with the specified label.
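If you prefer the CLI, you can add the same label to each selected node by running a command similar to the following sketch; the node name is a placeholder. The NodeSelector parameter of the CoreDNS add-on must still be configured in the console as described above.

kubectl label nodes <node-name> node-role-type=coredns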

Properly configure CoreDNS

ACK provides only the default settings for CoreDNS. You can modify the parameters to optimize the settings and enable CoreDNS to provide normal DNS services for your client pods. You can modify the configurations of CoreDNS on demand. For more information, see DNS policies and domain name resolution and CoreDNS official documentation.

The default configurations of earlier CoreDNS versions that are deployed together with your Kubernetes clusters may pose risks. We recommend that you check and optimize the configurations in the following way: go to the Container Intelligence Service console and use the Regular Inspection and Diagnosis components to check the configurations of CoreDNS. If the result shows errors related to the coredns ConfigMap, we recommend that you fix the errors by performing the operations described in the following sections.

Adjust resource requests and limits for CoreDNS

CoreDNS consumes resources based on the following rules:

  • When CoreDNS restarts or reloads the configuration file, and when it connects or reconnects to the API server, its CPU and memory usage increases.

  • The CPU usage of CoreDNS increases with the DNS QPS of CoreDNS.

  • The memory usage of CoreDNS increases with the cluster size and the number of Services.

The following table describes the default resource requests and limits that are applied when the system deploys CoreDNS. You can modify the values based on the status of your cluster. For more information, see O&M management - Component management - Network - CoreDNS - Parameter configurations.

Resource type | Request/Limit | Default | Description
CPU           | Request       | 100m    | No impact on workloads.
CPU           | Limit         | N/A     | A low value adversely affects the DNS QPS of CoreDNS.
Memory        | Request       | 100 Mi  | No impact on workloads.
Memory        | Limit         | 2 Gi    | A value lower than the default may cause out of memory (OOM) errors.

Note

The default configurations of CoreDNS versions earlier than 1.8.4 may differ from those described in the preceding table. You can modify the configurations based on your business requirements.

Disable the affinity settings for the kube-dns Service

The session affinity settings of the kube-dns Service may cause loads to be unevenly distributed across CoreDNS pods. To disable the session affinity settings, perform the following steps:

Use the ACK console

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.

  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column corresponding to the cluster.

  4. In the left-side navigation pane of the details page, choose Network > Services.

  5. In the kube-system namespace, find the kube-dns Service and click View in YAML in the Actions column.

    • If the value of the sessionAffinity field is None, skip the following steps.

    • If the value of the sessionAffinity field is ClientIP, perform the following steps.

  6. Delete sessionAffinity, sessionAffinityConfig, and all of the subfields. Then, click Update.

    # Delete the following content. 
    sessionAffinity: ClientIP
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10800
  7. Find the kube-dns Service and click View in YAML in the Actions column again to check whether the value of the sessionAffinity field is None. If the value is None, the kube-dns Service is modified.

Use the CLI

  1. Run the following command to query the configurations of the kube-dns Service:

    kubectl -n kube-system get svc kube-dns -o yaml
    • If the value of the sessionAffinity field is None, skip the following steps.

    • If the value of the sessionAffinity field is ClientIP, perform the following steps.

  2. Run the following command to modify the kube-dns Service:

    kubectl -n kube-system edit service kube-dns
  3. Delete all fields that are related to sessionAffinity, including sessionAffinity, sessionAffinityConfig, and all subfields. Then, save the change and exit.

    # Delete the following content. 
    sessionAffinity: ClientIP
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10800
  4. After you modify the kube-dns Service, run the following command again to check whether the value of the sessionAffinity field is None. If the value is None, the kube-dns Service is modified.

    kubectl -n kube-system get svc kube-dns -o yaml
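Alternatively, you can disable session affinity with a single command. This is a minimal sketch that uses a strategic merge patch to set sessionAffinity to None and remove sessionAffinityConfig:

kubectl -n kube-system patch service kube-dns -p '{"spec":{"sessionAffinity":"None","sessionAffinityConfig":null}}'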

Disable the autopath plug-in

The autopath plug-in is enabled in earlier CoreDNS versions and may cause DNS resolution errors in specific scenarios. If the autopath plug-in is enabled, you must disable it in the coredns ConfigMap. For more information, see Autopath.

Note

After you disable the autopath plug-in, the number of DNS queries sent from the client per second can increase by up to three times, and the amount of time required to resolve a domain name can also increase by up to three times. Pay close attention to the load on CoreDNS and the impact on your business.

  1. Run the kubectl -n kube-system edit configmap coredns command to modify the coredns ConfigMap.

  2. Delete autopath @kubernetes. Then, save the change and exit.

  3. Check the status and logs of the CoreDNS pods. If the log data contains the reload keyword, the new configuration is loaded.

Configure graceful shutdown for CoreDNS

Note

CoreDNS may consume additional memory resources when it reloads the updated coredns ConfigMap. After you modify the ConfigMap, check the status of the CoreDNS pods. If the memory resources of the pods are exhausted, increase the memory limit in the CoreDNS Deployment. We recommend that you set the memory limit to 2 GiB.

Use the ACK console

  1. Log on to the ACK console.

  2. In the left-side navigation pane of the ACK console, click Clusters.

  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column corresponding to the cluster.

  4. In the left-side navigation pane of the details page, choose Configurations > ConfigMaps.

  5. Select the kube-system namespace. Find the coredns ConfigMap and click Edit YAML in the Actions column.

  6. Refer to the following Corefile content and make sure that the health plug-in is enabled. Then, set lameduck to 15s and click OK.

    .:53 {
            errors
            # The setting of the health plug-in may vary based on the CoreDNS version. 
            # Scenario 1: The health plug-in is disabled by default. 
            # Scenario 2: The health plug-in is enabled by default but lameduck is not set. 
            health
            # Scenario 3: The health plug-in is enabled by default and lameduck is set to 5s. 
            health {
                lameduck 5s
            }
            # In the preceding scenarios, change the value of lameduck to 15s. 
            health {
                lameduck 15s
            }
            # You do not need to modify other plug-ins. 
        }

If the CoreDNS pods run normally, CoreDNS can be gracefully shut down. If the CoreDNS pods do not run normally, you can check the pod events and logs to identify the cause.

Use the CLI

  1. Run the following command to open the coredns ConfigMap:

    kubectl -n kube-system edit configmap/coredns
  2. Refer to the following YAML content and make sure that the health plug-in is enabled. Then, set lameduck to 15s.

    .:53 {
            errors
            # The setting of the health plug-in may vary based on the CoreDNS version. 
            # Scenario 1: The health plug-in is disabled by default. 
            # Scenario 2: The health plug-in is enabled by default but lameduck is not set. 
            health
            # Scenario 3: The health plug-in is enabled by default and lameduck is set to 5s. 
            health {
                lameduck 5s
            }
            # In the preceding scenarios, change the value of lameduck to 15s. 
            health {
                lameduck 15s
            }
            # You do not need to modify other plug-ins. 
        }
  3. Save the change and exit.

  4. If the CoreDNS pods run normally, CoreDNS can be gracefully shut down. If the CoreDNS pods do not run normally, check the pod events and logs to identify the cause.

Configure the default protocol for the forward plug-in and upstream DNS servers of a VPC

NodeLocal DNSCache uses the TCP protocol to communicate with CoreDNS. CoreDNS communicates with the upstream DNS servers based on the protocol used by the source of DNS queries. Therefore, DNS queries sent from a client pod for external domain names pass through NodeLocal DNSCache and CoreDNS, and then arrive at the DNS servers in the virtual private cloud (VPC) over TCP. The IP addresses of these DNS servers are 100.100.2.136 and 100.100.2.138, which are automatically configured on Elastic Compute Service (ECS) instances.

DNS servers in a VPC have limited support for TCP. If you use NodeLocal DNSCache, you must modify the configuration of CoreDNS so that CoreDNS preferably uses UDP to communicate with the upstream DNS servers. This prevents DNS resolution issues. To do this, modify the coredns ConfigMap in the kube-system namespace. For more information, see Manage ConfigMaps. In the setting of the forward plug-in, set the protocol that is used to communicate with the upstream servers to prefer_udp. You can modify the setting as follows:

# The original setting
forward . /etc/resolv.conf
# The modified setting
forward . /etc/resolv.conf {
  prefer_udp
}
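After the change is loaded, you can verify external domain name resolution from a temporary test pod. This is an illustrative sketch; the busybox image is an assumption, and any image that provides nslookup works:

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup www.aliyun.com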

Configure the ready plug-in

You must configure the ready plug-in for CoreDNS 1.5.0 and later versions.

  1. Run the following command to open the coredns ConfigMap:

    kubectl -n kube-system edit configmap/coredns
  2. Check whether a line that contains only ready exists. If the line does not exist, add a line that contains ready. Then, press Esc, enter :wq!, and press Enter to save the file and exit the edit mode.

    apiVersion: v1
    data:
     Corefile: |
      .:53 {
        errors
        health {
          lameduck 15s
        }
        ready # Add this line and make sure that the word "ready" is aligned with the word "kubernetes". 
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods verified
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
          prefer_udp
        }
        cache 30
        loop
        log
        reload
        loadbalance
      }
  3. Check the status and logs of the CoreDNS pods. If the log data contains the reload keyword, the new configuration is loaded.
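After the configuration is loaded, you can verify that the ready plug-in works. The plug-in serves an HTTP endpoint on port 8181 that returns an HTTP 200 response when CoreDNS is ready. A minimal sketch:

kubectl -n kube-system port-forward deployment/coredns 8181:8181 &
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8181/ready
# Expected output: 200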