Container Service for Kubernetes (ACK) reserves a specific amount of node resources to run Kubernetes components and system processes. This ensures that the operating system kernel, system services, and Kubernetes daemons can run as expected. As a result, the amount of allocatable resources of a node differs from the resource capacity of the node. ACK provides a default resource reservation policy. You can also configure the kubelet to customize resource reservations.
Limits
You can create a custom resource reservation policy only for ACK clusters whose Kubernetes version is 1.20 or later. For more information about how to upgrade an ACK cluster, see Manually upgrade ACK clusters.
Impact
Impact of custom resource reservations
For more information about how to customize resource reservations, see Customize the kubelet parameters of a node pool. The custom resource reservation policy is applied to both existing nodes in the node pool and nodes that are newly added to the node pool. New nodes include nodes that are added by scale-out operations and Elastic Compute Service (ECS) nodes that are added by selecting Add Existing Node on the Node Pools page in the ACK console.
Do not use the CLI to manually modify the kubelet ConfigMap. If a configuration conflict exists, exceptions may occur during node pool O&M activities.
If you change the amount of reserved resources of a node, the amount of allocatable resources of the node may be reduced. If the resource usage of a node is high, pods that run on the node may be evicted. Properly configure resource reservations.
Impact of the default resource reservations
ACK may iterate the default values of resource reservations. If you update the node configuration of a node pool after an iteration, the new resource reservation policy is automatically applied to the nodes in the node pool. For example, if you update the Kubernetes version, update the node pool, or modify the kubelet parameters of the node pool, the new resource reservation policy is automatically applied. If you do not perform any of these O&M operations, the new resource reservation policy is not applied to the existing nodes in the node pool. This prevents the stability of your workloads from being affected.
To apply the new resource reservation policy to an existing node, remove the node from the cluster, and then add it to the cluster again. By default, the new resource reservation policy is automatically applied to newly added nodes. For more information about how to add and remove nodes to and from a cluster and the impacts of the operations, see Remove nodes and Add existing ECS instances to an ACK cluster.
View the allocatable resources of a node
Run the following command to view the resource capacity and allocatable resources of a node:
kubectl describe node [NODE_NAME] | grep Allocatable -B 7 -A 6
Expected output:
Capacity:
cpu: 4 # The total number of CPU cores of the node.
ephemeral-storage: 123722704Ki # The total amount of ephemeral storage of the node. Unit: KiB.
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7925980Ki # The total amount of memory of the node. Unit: KiB.
pods: 64
Allocatable:
cpu: 3900m # The number of allocatable CPU cores on the node.
ephemeral-storage: 114022843818 # The amount of allocatable ephemeral storage on the node. Unit: bytes.
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 5824732Ki # The amount of allocatable memory on the node. Unit: KiB.
pods: 64
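You can also read these values programmatically. The following minimal sketch uses the official Kubernetes Python client (the kubernetes package) to print the capacity and allocatable resources of a node. The node name is a placeholder, and the sketch assumes that a valid kubeconfig is available on the machine that runs it.

# Minimal sketch: print the capacity and allocatable resources of a node
# by using the official Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()                        # uses the current kubeconfig context
v1 = client.CoreV1Api()

node = v1.read_node("cn-hangzhou.192.168.0.1")   # placeholder node name
print("Capacity:   ", node.status.capacity)      # dict, for example {'cpu': '4', ...}
print("Allocatable:", node.status.allocatable)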
Calculate the allocatable resources of a node
You can calculate the allocatable resources of a node based on the following formula: Allocatable resources = Resource capacity - Reserved resources - Eviction threshold.
Formula description:
The Capacity parameter in the output of the command that is used to query the node resources indicates the resource capacity of the node. For more information about resource reservations, see the Resource reservation policy section of this topic.
For more information about the eviction threshold, see Node-pressure Eviction.
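The following minimal sketch applies this formula to illustrative numbers. The reserved amount and the eviction threshold below are assumptions for demonstration only; the actual values depend on the cluster version and the kubelet configuration of the node pool.

# Minimal sketch of: Allocatable = Capacity - Reserved - Eviction threshold.
# All values are in MiB and are illustrative assumptions, not ACK defaults.
capacity_mib = 8 * 1024            # node memory capacity: 8 GiB
reserved_mib = 1843                # kubeReserved + systemReserved (example value)
eviction_threshold_mib = 100       # hard eviction threshold for memory.available (example value)

allocatable_mib = capacity_mib - reserved_mib - eviction_threshold_mib
print(f"Allocatable memory: {allocatable_mib} MiB")   # 6249 MiB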
Resource reservation policy
Take note of the following items when you configure a resource reservation policy:
Generally, ECS nodes with higher specifications can host more pods. To ensure the performance of nodes, ACK reserves more resources for Kubernetes components.
Windows nodes require additional resources to run the Windows operating system and Windows Server components. Therefore, Windows nodes usually reserve more resources than Linux nodes. For more information, see Create a Windows node pool.
ACK calculates the amount of reserved resources based on tiered intervals of CPU and memory resources. The total amount of reserved resources of a node equals the sum of the resources reserved in all intervals. For ACK clusters that run Kubernetes 1.28 or later, the resource reservation algorithm is optimized to reduce the amount of reserved CPU and memory resources. We recommend that you update your cluster. For more information, see Manually upgrade ACK clusters.
Reserved resources are divided into two categories: kubeReserved and systemReserved. kubeReserved resources are reserved for Kubernetes components, and systemReserved resources are reserved for system processes. Each category accounts for 50% of the total reserved resources. For example, on a node with 4 CPU cores, an ACK cluster that runs Kubernetes 1.28 or later reserves a total of 80 millicores of CPU resources, with kubeReserved and systemReserved resources each accounting for 40 millicores. In contrast, an ACK cluster that runs a Kubernetes version from 1.20 to earlier than 1.28 reserves a total of 100 millicores of CPU resources, with kubeReserved and systemReserved resources each accounting for 50 millicores.
Policy for reserving CPU resources
Kubernetes 1.28 or later
Reserved CPU resources are calculated based on the following tiered rates: 6% of the first core, 1% of the second core, 0.5% of each of the third and fourth cores, and 0.25% of each core beyond the fourth.
If the node provides 32 CPU cores, the total amount of reserved CPU resources is calculated based on the following formula:
1000 × 6% + 1000 × 1% + 1000 × 2 × 0.5% + (32000 - 4000) × 0.25% = 150 millicores
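The following minimal sketch expresses the calculation above as a function, under the assumption that the tier rates are exactly those shown in the formula. The function name is arbitrary.

# Sketch of the tiered CPU reservation for Kubernetes 1.28 or later.
def reserved_cpu_millicores_v128(total_cores: int) -> float:
    total = total_cores * 1000        # convert cores to millicores
    tiers = [
        (1000, 0.06),                 # first core: 6%
        (1000, 0.01),                 # second core: 1%
        (2000, 0.005),                # third and fourth cores: 0.5% each
    ]
    reserved, consumed = 0.0, 0
    for size, rate in tiers:
        step = min(size, max(total - consumed, 0))
        reserved += step * rate
        consumed += step
    reserved += max(total - consumed, 0) * 0.0025   # cores beyond the fourth: 0.25%
    return reserved

print(reserved_cpu_millicores_v128(32))   # ≈ 150 millicores, split evenly between kubeReserved and systemReserved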
Kubernetes 1.20 to versions earlier than 1.28
Reserved CPU resources are calculated based on tiered intervals of CPU cores: 100 millicores are reserved for the first four cores, and 2.5% of the CPU resources beyond four cores is additionally reserved. Lower rates apply to nodes with a very large number of cores, as the example values in the table in this topic show.
If the node provides 32 CPU cores, the total amount of reserved CPU resources is calculated based on the following formula:
100 + (32,000 - 4,000) × 2.5% = 800 millicores
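The following minimal sketch implements only the formula shown above, which matches the published example for a 32-core node. The table later in this topic indicates that lower rates apply to nodes with more than 64 cores; those intervals are not modeled here.

# Sketch of the CPU reservation for Kubernetes 1.20 to versions earlier than 1.28,
# for nodes with up to 64 cores: 100 millicores for the first four cores plus 2.5%
# of the CPU resources beyond four cores.
def reserved_cpu_millicores_pre128(total_cores: int) -> float:
    total = total_cores * 1000        # convert cores to millicores
    return 100 + max(total - 4000, 0) * 0.025

print(reserved_cpu_millicores_pre128(32))   # ≈ 800 millicores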
Policy for reserving memory resources
Kubernetes 1.28 or later
The total amount of reserved memory resources of a compute node is calculated based on the following formula: Reserved memory resources = min(11 × $max_num_pods + 255, Total memory × 25%). In this formula, $max_num_pods indicates the maximum number of pods that can run on the node, and memory is measured in MiB. The total amount of reserved memory resources is the smaller of 11 × $max_num_pods + 255 and Total memory × 25%.
The maximum number of pods that can run on the node is calculated based on the network plug-in used in your cluster.
If your ACK cluster uses Terway, the maximum number of pods that can run on a node depends on the number of elastic network interfaces (ENIs), which is determined by the ECS instance type. For more information about how to calculate the maximum number of pods that can run on a node in different Terway modes, see Work with Terway. You can also log on to the ACK console and view the maximum number of pods that can run on a node on the Nodes page.
If your ACK cluster uses Flannel, you can specify the maximum number of pods that can run on a node when you create the ACK cluster. You can log on to the ACK console and view the maximum number of pods that run on a node on the Nodes page.
For example, a cluster uses Terway in multi-IP shared ENI mode, and the instance type of a node is ecs.g7.16xlarge with 256 GiB of memory. In this case, the maximum number of pods that can run on the node is calculated based on the following formula: (8 - 1) × 30 = 210. The total amount of reserved memory resources is calculated based on the following formula: Reserved memory resources = min(11 × 210 + 255, 256 × 1,024 × 25%) = 2,565 MiB.
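The following minimal sketch reproduces this example. The ENI and private IP counts for ecs.g7.16xlarge are taken from the example above; for other instance types, use the values described in Work with Terway.

# Sketch of the memory reservation for Kubernetes 1.28 or later, including the
# Terway multi-IP shared ENI pod count used in the example above.
def max_pods_terway_shared_eni(eni_count: int, ips_per_eni: int) -> int:
    return (eni_count - 1) * ips_per_eni          # one ENI is the primary ENI of the node

def reserved_memory_mib_v128(total_memory_mib: int, max_num_pods: int) -> float:
    return min(11 * max_num_pods + 255, total_memory_mib * 0.25)

# ecs.g7.16xlarge: 8 ENIs, 30 private IPs per ENI, 256 GiB of memory.
max_pods = max_pods_terway_shared_eni(8, 30)              # 210
print(reserved_memory_mib_v128(256 * 1024, max_pods))     # 2565 MiB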
Kubernetes 1.20 to versions earlier than 1.28
Reserved memory resources are calculated based on the following tiered rates: 25% of the first 4 GiB of memory, 20% of the next 4 GiB (4 GiB to 8 GiB), 10% of the next 8 GiB (8 GiB to 16 GiB), 6% of the next 112 GiB (16 GiB to 128 GiB), and 2% of the memory beyond 128 GiB.
If the node provides 256 GiB of memory, the total amount of reserved memory resources is calculated based on the following formula:
4 × 25% + (8 - 4) × 20% + (16 - 8) × 10% + (128 - 16) × 6% + (256 - 128) × 2% = 11.88 GiB
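The following minimal sketch expresses the tiered calculation above as a function; the tier boundaries and rates are taken directly from the formula.

# Sketch of the tiered memory reservation for Kubernetes 1.20 to versions earlier than 1.28.
def reserved_memory_gib_pre128(total_memory_gib: float) -> float:
    tiers = [
        (4, 0.25),     # first 4 GiB: 25%
        (4, 0.20),     # 4 GiB to 8 GiB: 20%
        (8, 0.10),     # 8 GiB to 16 GiB: 10%
        (112, 0.06),   # 16 GiB to 128 GiB: 6%
    ]
    reserved, consumed = 0.0, 0.0
    for size, rate in tiers:
        step = min(size, max(total_memory_gib - consumed, 0))
        reserved += step * rate
        consumed += step
    reserved += max(total_memory_gib - consumed, 0) * 0.02   # above 128 GiB: 2%
    return reserved

print(reserved_memory_gib_pre128(256))   # ≈ 11.88 GiB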
Example of default resource reservations on ACK nodes
For more information about ECS instance types, see Overview of instance families.
| Instance family | CPU (cores) | Memory (GiB) | Maximum number of pods on a node (Terway multi-IP shared ENI mode) | Reserved CPU (millicores), Kubernetes 1.28 or later | Reserved memory (MiB), Kubernetes 1.28 or later | Reserved CPU (millicores), Kubernetes 1.20 to earlier than 1.28 | Reserved memory (MiB), Kubernetes 1.20 to earlier than 1.28 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| ECS c7 | 2 | 4 | 12 | 70 | 387 | 100 | 1,024 |
| ECS c7 | 4 | 8 | 45 | 80 | 750 | 100 | 1,843 |
| ECS c7 | 8 | 16 | 45 | 90 | 750 | 200 | 2,662 |
| ECS c7 | 16 | 32 | 210 | 110 | 2,565 | 400 | 3,645 |
| ECS c7 | 32 | 64 | 210 | 150 | 2,565 | 800 | 5,611 |
| ECS c7 | 64 | 128 | 210 | 230 | 2,565 | 1,600 | 9,543 |
| ECS c7 | 128 | 256 | 420 | 390 | 4,875 | 2,400 | 12,164 |
| ECS ebmc7a | 256 | 512 | 450 | 710 | 5,205 | 3,040 | 17,407 |
FAQ
How do I view the total CPU and memory resources of a node?
CPU
Run the following command to view the total number of CPU cores of a node:
cat /proc/cpuinfo | grep processor
Expected output:
processor : 0
processor : 1
processor : 2
processor : 3
Memory
Run the following command to view the total amount of memory of a node:
cat /proc/meminfo | grep MemTotal
Expected output:
MemTotal: 7660952 kB
References
For more information about how to customize resource reservations and eviction thresholds and the relevant usage notes, see Customize the kubelet parameters of a node pool.
You can configure resource requests for application pods based on the amount of allocatable resources. The sum of resource requests of all application pods on a node cannot be greater than the amount of allocatable resources on the node. Otherwise, pod scheduling fails due to insufficient resources. ACK can create resource profiles for Kubernetes-native workloads to help you configure resource requests based on the historical resource usage data. For more information about how to configure resource requests for application pods, see Create a stateless application by using a Deployment.
To apply a custom resource reservation policy to existing pods on a node, you need to remove the node from the cluster, and then add the node to the cluster again. By default, the custom resource reservation policy is automatically applied to newly added nodes. For more information about how to add and remove nodes to and from a cluster and the impacts of the operations, see Remove nodes and Add existing ECS instances to an ACK cluster.