
Container Service for Kubernetes:Create a node pool

Last Updated:Nov 21, 2024

Nodes in Container Service for Kubernetes (ACK) clusters are physical or virtual machines that run containerized applications. A node pool is a group of nodes that share the same configurations, such as instance specifications, zones, labels, and taints, or that serve the same purpose. Node pools facilitate the management and O&M of the nodes in clusters. You can specify node attributes when you create a node pool or modify the configurations of an existing node pool.

Prerequisites

An ACK cluster is created. For more information, see Create an ACK managed cluster.

Node pool types

  • Regular node pool: You can use a regular node pool to manage a set of nodes that have the same configurations, such as specifications, labels, and taints. For more information, see Node pool overview.

  • Managed node pool: Managed node pools provide automated O&M features, such as automatic Common Vulnerabilities and Exposures (CVE) vulnerability patching and automatic node repair. For more information, see Overview of managed node pools.

    Note

    Only ACK Pro clusters support managed node pools.

For more information about the differences between the two types of node pools, see the Comparison between managed node pools and regular node pools section of the "Overview of managed node pools" topic.

Procedure

Note

When you create or modify a node pool, the nodes and services in other existing node pools are not affected.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Nodes > Node Pools.

  3. On the Node Pools page, click Create Node Pool. In the Create Node Pool dialog box, configure the parameters that are described in the following table.

    Basic settings

    Parameter

    Description

    Node Pool Name

    Specify a node pool name.

    Region

    By default, the region in which the cluster resides is selected. You cannot change the region.

    Confidential Computing

    Specify whether to enable confidential computing. ACK provides a cloud-native, all-in-one confidential computing solution based on hardware encryption technologies. Confidential computing ensures data security, integrity, and confidentiality, and simplifies the development and delivery of trusted or confidential applications at lower costs. For more information, see TEE-based confidential computing.

    Note
    • To use confidential computing, submit a ticket to apply to be added to the whitelist.

    • This parameter is available only if you select containerd for the Container Runtime parameter.

    Container Runtime

    Specify the container runtime based on the Kubernetes version.

    • containerd: containerd is recommended for all Kubernetes versions.

    • Sandboxed-Container: supports Kubernetes 1.24 and earlier.

    • Docker: supports Kubernetes 1.22 and earlier.

    For more information, see Comparison among Docker, containerd, and Sandboxed-Container.

    Managed node pool settings

    Managed Node Pool

    Specify whether to enable the managed node pool feature.

    Managed node pools are O&M-free node pools provided by ACK. Managed node pools support CVE vulnerability patching and auto recovery, which reduce your O&M workload and enhance node security. For more information, see Overview of managed node pools.

    Auto Recovery Rule

    This parameter is available after you select Enable for the managed node pool feature.

    After you select Restart Faulty Node, the system automatically restarts relevant components to repair nodes in the NotReady state and drains the nodes before restarting them.

    Auto Update Rule

    This parameter is available after you select Enable for the managed node pool feature.

    After you select Automatically Update Kubelet and Containerd, the system automatically updates the kubelet when a new version is available. For more information, see Node pool updates.

    Auto CVE Patching (OS)

    This parameter is available after you select Enable for the managed node pool feature.

    You can configure ACK to automatically patch high-risk, medium-risk, and low-risk vulnerabilities. For more information, see Auto repair and CVE patching.

    Some patches take effect only after you restart the ECS instances. After you select Restart Nodes if Necessary to Patch CVE Vulnerabilities, the system automatically restarts nodes on demand. If you do not select this option, you need to manually restart nodes.

    Maintenance Window

    Image updates, runtime updates, and Kubernetes version updates are automatically performed during the maintenance window. For more information, see Overview of managed node pools.

    Auto Scaling

    Specify whether to enable auto scaling. This feature provides cost-effective computing resource scaling based on resource demand and scaling policies. For more information, see Auto scaling overview. Before you enable this feature, you need to enable node auto scaling for the node pool. For more information, see Step 1: Enable node auto scaling.

    Network settings

    VPC

    By default, the virtual private cloud (VPC) in which the cluster resides is selected. You cannot change the VPC.

    vSwitch

    When the node pool is being scaled out, new nodes are created in the zones of the selected vSwitches based on the policy that you select for the Scaling Policy parameter. You can select vSwitches in the zones that you want to use.

    If no vSwitch is available, click Create vSwitch to create one. For more information, see Create and manage a vSwitch.

    Instance and Image

    Billing Method

    The following billing methods are supported for nodes in a node pool: pay-as-you-go, subscription, and preemptible instances.

    • If you select the pay-as-you-go billing method, ECS instances in the node pool are billed on a pay-as-you-go basis. You are not charged for using the node pool.

    • If you select the subscription billing method, you must set the Duration and Auto Renewal parameters.

    • If you select the preemptible instances billing method, you must set the following parameter.

      Upper Price Limit of Current Instance Spec: If the real-time market price of an instance type that you select is lower than the value of this parameter, a preemptible instance of this instance type is created. After the protection period (1 hour) ends, the system checks the spot price and resource availability of the instance type every 5 minutes. If the real-time market price exceeds your bid price or if the resource inventory is insufficient, the preemptible instance is released.

      ACK supports only preemptible instances with a protection period. For more information, see Overview and Best practices for preemptible instance-based node pools.

    Important
    • If you change the billing method of a node pool, the change takes effect only on newly added nodes. The existing nodes in the node pool still use the original billing method. For more information about how to change the billing method of existing nodes in a node pool, see Change the billing method of an instance from pay-as-you-go to subscription.

    • To ensure that all nodes use the same billing method, ACK does not allow you to change the billing method of a node pool from pay-as-you-go or subscription to preemptible instances, or change the billing method of a node pool from preemptible instances to pay-as-you-go or subscription.

    Instance-related parameters

    Select the ECS instances used by the worker node pool based on instance types or attributes. You can filter ECS instances by attributes such as vCPU, memory, instance family, and architecture.

    When the node pool is scaled out, ECS instances of the selected instance types are created. The scaling policy of the node pool determines which of these instance types are used to create new nodes during scale-out activities. We recommend that you select multiple instance types to improve the success rate of scale-out operations.

    If the node pool fails to be scaled out because the instance types are unavailable or the instances are out of stock, you can specify more instance types for the node pool. The ACK console automatically evaluates the scalability of the node pool. You can view the scalability level when you create the node pool or after you create the node pool.

    Note

    ARM-based ECS instances support only ARM images. For more information about ARM-based node pools, see Configure an ARM-based node pool.

    If you select only GPU-accelerated instances, you can select Enable GPU Sharing on demand. For more information, see cGPU overview.

    Operating System

    Container Service for Kubernetes supports ContainerOS, Alibaba Cloud Linux 3, Ubuntu, and Windows. For more information, see Overview of OS images.

    Note
    • After you change the OS image of the node pool, the change takes effect only on newly added nodes. The existing nodes in the node pool still use the original OS image. For more information about how to update the OS images of existing nodes, see Node pool updates.

    • To ensure that all nodes in the node pool use the same OS image, ACK allows you to only update the node OS image to the latest version. ACK does not allow you to change the type of OS image.

    Security Reinforcement

    • Disable: disables security hardening for ECS instances.

    • Reinforcement based on classified protection: You can enable security hardening only when you select an Alibaba Cloud Linux 2 or Alibaba Cloud Linux 3 image. Alibaba Cloud provides baselines and the baseline check feature to help you check the compliance of Alibaba Cloud Linux 2 images and Alibaba Cloud Linux 3 images with the level 3 standards of Multi-Level Protection Scheme (MLPS) 2.0. For more information, see ACK reinforcement based on classified protection.

      Important

      MLPS Security Hardening enhances the security of OS images to meet the requirements of GB/T 22239-2019 Information Security Technology - Baseline for Classified Protection of Cybersecurity without compromising the compatibility and performance of the OS images.

      After you enable MLPS Security Hardening, remote logons through SSH are prohibited for root users. You can use Virtual Network Computing (VNC) to log on to the OS from the ECS console and create regular users that are allowed to log on through SSH. For more information, see Connect to a Linux instance by using VNC.

    • OS Security Hardening: You can enable Alibaba Cloud Linux Security Hardening only when the system image is an Alibaba Cloud Linux 2 or Alibaba Cloud Linux 3 image.

    Note

    After the cluster is created, you cannot modify the Security Hardening parameter.

    Logon Type

    Valid values: Key Pair, Password, and Later.

    Note

    If you select Reinforcement based on classified protection for the Security Reinforcement parameter, only the Password option is supported.

    • Configure the logon type when you create the node pool:

      • Key Pair: Alibaba Cloud SSH key pairs provide a secure and convenient method to log on to ECS instances. An SSH key pair consists of a public key and a private key. SSH key pairs support only Linux instances. For more information, see Overview.

      • Password: The password must be 8 to 30 characters in length, and can contain letters, digits, and special characters.

    • Configure the logon type after you create the node pool: For more information, see Bind an SSH key pair to an instance and Reset the logon password of an instance.

    Username

    If you select Key Pair or Password for Logon Type, you must select root or ecs-user as the username.

    Volumes

    System Disk

    ESSD AutoPL, Enterprise SSD (ESSD), ESSD Entry, Standard SSD, and Ultra Disk are supported.

    The types of system disks that you can select depend on the instance types that you select. Disk types that are not displayed in the drop-down list are not supported by the instance types that you select. For more information about disks, see Overview of Block Storage. For more information about disk types supported by different instance types, see Overview of instance families.

    Note
    • If you select Enterprise SSD (ESSD) as the system disk type, you can set a custom performance level for the system disk. You can select higher PLs for ESSDs with larger storage capacities. For example, you can select PL 2 for an ESSD with a storage capacity of more than 460 GiB. You can select PL 3 for an ESSD with a storage capacity of more than 1,260 GiB. For more information, see Capacities and performance levels.

    • You can select Encryption only if you set the system disk type to Enterprise SSD (ESSD). By default, the default service CMK is used to encrypt the system disk. You can also use an existing CMK generated by using BYOK in KMS.

    You can select More System Disk Types and select a disk type other than the current one in the System Disk section to improve the success rate of system disk creation. The system will attempt to create a system disk based on the specified disk types in sequence.

    Data Disk

    ESSD AutoPL, Enterprise SSD (ESSD), ESSD Entry, SSD, and Ultra Disk are supported. The disk types that you can select depend on the instance types that you select. Disk types that are not displayed in the drop-down list are not supported by the instance types that you select. For more information about disks, see Overview of Block Storage. For more information about disk types supported by different instance types, see Overview of instance families.

    • ESSD AutoPL disks provide the following features:

      • Performance provision: The performance provision feature allows you to configure provisioned performance settings for ESSD AutoPL disks to meet storage requirements that exceed the baseline performance without the need to extend the disks.

      • Performance burst: The performance burst feature allows ESSD AutoPL disks to burst their performance when spikes in read/write workloads occur and reduce the performance to the baseline level at the end of workload spikes.

    • ESSDs provide the following features:

      Custom Performance. You can select higher PLs for ESSDs with larger storage capacities. For example, you can select PL 2 for an ESSD with a storage capacity of more than 460 GiB. You can select PL 3 for an ESSD with a storage capacity of more than 1,260 GiB. For more information, see Capacity and PLs.

    • You can select Encryption for all disk types when you specify the type of data disk. By default, the default service CMK is used to encrypt the data disk. You can also use an existing CMK generated by using BYOK in KMS.

    • You can also use snapshots to create data disks in scenarios where container image acceleration and fast loading of large language models (LLMs) are required. This improves the system response speed and enhances the processing capability.

    • Make sure that a data disk is mounted to /var/lib/container on each node, and that /var/lib/kubelet and /var/lib/containerd are mounted to /var/lib/container. For other data disks on the node, you can perform the initialization operation and customize their mount directories. For more information, see Can I mount a data disk to a custom directory in an ACK node pool?

    Note

    You can attach up to 64 data disks to an ECS instance. The maximum number of disks that can be attached to an ECS instance varies based on the instance type. To query the maximum number of disks that you can attach to an ECS instance of a specific instance type, call the DescribeInstanceTypes operation and check the DiskQuantity parameter in the response.

    Instances

    Scaling Mode

    After you enable auto scaling for a node pool, you can select a scaling mode for ECS instances.

    • Standard mode: Auto scaling is implemented by creating and releasing ECS instances.

    • Swift mode: Auto scaling is implemented by creating, stopping, and starting ECS instances. ECS instances in the Stopped state can be directly restarted to accelerate scaling activities.

      When a node in swift mode is reclaimed, only disk fees are charged for the node. No computing fee is charged. This rule does not apply to instance families that use local disks, such as the big data and local SSD instance families. For more information about the billing rules and limits of the economical mode, see Economical mode.

    Expected Nodes

    The expected number of nodes in the node pool. You can modify the Expected Nodes parameter to adjust the number of nodes in the node pool. If you do not want to create nodes in the node pool, set this parameter to 0. For more information, see Scale a node pool.

    Advanced settings

    Expand Advanced Options (Optional) to configure the Scaling Policy for the node pool.

    Parameter

    Description

    Scaling Policy

    • Priority: The system scales the node pool based on the priorities of the vSwitches that you select for the node pool. The vSwitches that you select are displayed in descending order of priority. If Auto Scaling fails to create ECS instances in the zone of the vSwitch with the highest priority, Auto Scaling attempts to create ECS instances in the zone of the vSwitch with a lower priority.

    • Cost Optimization: The system creates instances based on the vCPU unit prices in ascending order. Preemptible instances are preferentially created when multiple preemptible instance types are specified in the scaling configurations. If preemptible instances cannot be created due to reasons such as insufficient stocks, the system attempts to create pay-as-you-go instances.

      When Billing Method is set to Preemptible Instance, you can configure the following parameters in addition to the Enable Supplemental Preemptible Instances parameter:

      • Percentage of Pay-as-you-go Instances: Specify the percentage of pay-as-you-go instances in the node pool. Valid values: 0 to 100.

      • Enable Supplemental Pay-as-you-go Instances: After you enable this feature, Auto Scaling attempts to create pay-as-you-go ECS instances to meet the scaling requirement if Auto Scaling fails to create preemptible instances for reasons such as that the unit price is too high or preemptible instances are out of stock.

    • Distribution Balancing: The even distribution policy takes effect only when you select multiple vSwitches. This policy ensures that ECS instances are evenly distributed among the zones (the vSwitches) of the scaling group. If ECS instances are unevenly distributed across the zones due to reasons such as insufficient stocks, you can perform a rebalancing operation.

      Important

      You cannot change the scaling policy of a node pool after the node pool is created.

      When Billing Method is set to Preemptible Instance, you can specify whether to turn on Enable Supplemental Preemptible Instances. After this feature is enabled, when the system receives a message that indicates preemptible instances are reclaimed, the node pool with auto scaling enabled attempts to create new instances to replace the reclaimed preemptible instances.

    Expand Advanced Options (Optional) to configure parameters such as Resource Group, ECS Tags, and Taints.

    Advanced settings

    Parameter

    Description

    Resource Group

    The resource group to which the cluster belongs. Each resource can belong to only one resource group. You can regard a resource group as a project, an application, or an organization based on your business scenarios.

    ECS Tags

    Add tags to the ECS instances that are automatically added during auto scaling. Tag keys must be unique. A key cannot exceed 128 characters in length. Keys and values cannot start with aliyun or acs:. Keys and values cannot contain https:// or http://.

    An ECS instance can have at most 20 tags. To increase the quota limit, submit an application in the Quota Center console. The following tags are automatically added to an ECS node by ACK and Auto Scaling. Therefore, you can add at most 17 tags to an ECS node.

    • The following two ECS tags are added by ACK:

      • ack.aliyun.com:<Cluster ID>

      • ack.alibabacloud.com/nodepool-id:<Node pool ID>

    • The following tag is added by Auto Scaling: acs:autoscaling:scalingGroupId:<Scaling group ID>.

    Note
    • After you enable auto scaling, the following ECS tags are added to the node pool by default: k8s.io/cluster-autoscaler:true and k8s.aliyun.com:true.

    • The auto scaling component simulates scale-out activities based on node labels and taints. For this purpose, node labels are converted to ECS tags in the format k8s.io/cluster-autoscaler/node-template/label/Label key:Label value, and taints are converted to ECS tags in the format k8s.io/cluster-autoscaler/node-template/taint/Taint key/Taint value:Taint effect.
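
    For example, assuming a hypothetical node label workload=gpu and a hypothetical taint dedicated=gpu:NoSchedule are configured on the node pool, the converted ECS tags would take the following form:

```text
# Node label workload=gpu becomes the following ECS tag:
k8s.io/cluster-autoscaler/node-template/label/workload:gpu

# Taint dedicated=gpu:NoSchedule becomes the following ECS tag:
k8s.io/cluster-autoscaler/node-template/taint/dedicated/gpu:NoSchedule
```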

    Taints

    Add taints to nodes. A taint consists of a key, a value, and an effect. A taint key can be prefixed. If you want to specify a prefixed taint key, add a forward slash (/) between the prefix and the remaining content of the key. For more information, see Taints and tolerations. The following limits apply to taints:

    • Key: A key must be 1 to 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). A key must start and end with a letter or digit.

      If you want to specify a prefixed key, the prefix must be a subdomain name. A subdomain name consists of DNS labels that are separated by periods (.), and cannot exceed 253 characters in length. It must end with a forward slash (/). For more information about subdomain names, see DNS subdomain names.

    • Value: A value cannot exceed 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). A value must start and end with a letter or digit. You can also leave a value empty.

    • You can specify the following effects for a taint: NoSchedule, NoExecute, and PreferNoSchedule.

      • NoSchedule: If a node has a taint whose effect is NoSchedule, the system does not schedule pods to the node.

      • NoExecute: Pods that do not tolerate this taint are evicted after this taint is added to a node. Pods that tolerate this taint are not evicted after this taint is added to a node.

      • PreferNoSchedule: The system attempts to avoid scheduling pods to nodes with taints that are not tolerated by the pods.
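
    For example, if a node pool adds a hypothetical taint dedicated=gpu:NoSchedule to its nodes, only pods that declare a matching toleration can be scheduled to those nodes. A minimal pod spec might look like the following (the taint key, value, and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
  tolerations:
    - key: dedicated        # must match the taint key on the node
      operator: Equal
      value: gpu            # must match the taint value
      effect: NoSchedule    # must match the taint effect
```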

    Node Label

    Add labels to nodes. A label is a key-value pair. A label key can be prefixed. If you want to specify a prefixed label key, add a forward slash (/) between the prefix and the remaining content of the key. The following limits apply to labels:

    • The key of a label must be 1 to 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). It must start and end with a letter or a digit.

      If you want to specify a prefixed key, the prefix must be a subdomain name. A subdomain name consists of DNS labels that are separated by periods (.), and cannot exceed 253 characters in length. It must end with a forward slash (/). For more information about subdomain names, see Subdomain names.

      The following prefixes are used by key Kubernetes components and cannot be used in node labels:

      • kubernetes.io/

      • k8s.io/

      • Prefixes that end with kubernetes.io/ or k8s.io/. Example: test.kubernetes.io/.

        However, you can still use the following prefixes:

        • kubelet.kubernetes.io/

        • node.kubernetes.io

        • Prefixes that end with kubelet.kubernetes.io/.

        • Prefixes that end with node.kubernetes.io.

    • A value cannot exceed 63 characters in length, and can contain letters, digits, hyphens (-), underscores (_), and periods (.). A value must start and end with a letter or digit. You can also leave a value empty.

    • If you select Set New Nodes to Unschedulable, nodes are unschedulable when they are added to the cluster. You can set an existing node to schedulable on the Nodes page in the ACK console.
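
    Node labels configured on a node pool can be used to schedule workloads to that pool. For example, assuming a hypothetical node label workload=batch is added by the node pool, a pod can target the pool through a nodeSelector (the image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  nodeSelector:
    workload: batch        # matches the node label added by the node pool
  containers:
    - name: app
      image: registry.example.com/batch:latest   # placeholder image
```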

    CPU Policy

    The CPU management policy for kubelet nodes.

    • None: The default CPU management policy.

    • Static: This policy allows pods with specific resource characteristics on the node to be granted enhanced CPU affinity and exclusivity.

    For more information, see CPU management policies.
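
    The Static policy grants exclusive CPU cores only to pods in the Guaranteed QoS class that request an integer number of CPUs. A sketch of such a pod spec follows (the image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-pinned
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest   # placeholder image
      resources:
        requests:
          cpu: "2"          # integer CPU count is required for exclusive cores
          memory: 4Gi
        limits:
          cpu: "2"          # limits must equal requests (Guaranteed QoS)
          memory: 4Gi
```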

    Custom Node Name

    Specify whether to use a custom node name. If you choose to use a custom node name, the name of the node, name of the ECS instance, and hostname of the ECS instance are changed.

    Note

    If a Windows instance uses a custom node name, the hostname of the instance is fixed to the node IP address, with the periods (.) in the IP address replaced by hyphens (-). No prefix or suffix is applied to the hostname.

    A custom node name consists of a prefix, an IP substring, and a suffix.

    • A custom node name must be 2 to 64 characters in length. The name must start and end with a lowercase letter or digit.

    • The prefix and suffix can contain letters, digits, hyphens (-), and periods (.). The prefix and suffix must start with a letter and cannot end with a hyphen (-) or period (.). The prefix and suffix cannot contain consecutive hyphens (-) or periods (.).

    • The prefix is required due to ECS limits and the suffix is optional.

    For example, the node IP address is 192.XX.YY.55, the prefix is aliyun.com, and the suffix is test.

    • If the node is a Linux node, the node name, ECS instance name, and ECS instance hostname are aliyun.com192.XX.YY.55test.

    • If the node is a Windows node, the ECS instance hostname is 192-XX-YY-55 and the node name and ECS instance name are aliyun.com192.XX.YY.55test.
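
    The naming rules above can be sketched in a short script. The IP address, prefix, and suffix below are hypothetical values used only to illustrate how the parts are concatenated:

```shell
#!/bin/bash
# Sketch of how a custom node name is assembled (all values are hypothetical).
ip="192.0.2.55"        # example node IP address
prefix="aliyun.com"    # required prefix
suffix="test"          # optional suffix

# Linux node name, ECS instance name, and ECS hostname: prefix + IP + suffix
node_name="${prefix}${ip}${suffix}"

# Windows ECS hostname: the IP address only, with periods replaced by hyphens
win_hostname="${ip//./-}"

echo "$node_name"
echo "$win_hostname"
```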

    Pre-defined Custom Data

    To use this feature, submit an application in the Quota Center console.

    Nodes automatically run predefined scripts before they are added to the cluster. For more information about user-data scripts, see User-data scripts.

    For example, if you enter echo "hello world", a node runs the following script:

    #!/bin/bash
    echo "hello world"
    [Node initialization script]

    User Data

    Nodes automatically run user-data scripts after they are added to the cluster. For more information about user-data scripts, see User-data scripts.

    For example, if you enter echo "hello world", a node runs the following script:

    #!/bin/bash
    [Node initialization script]
    echo "hello world"
    Note

    After you create a cluster or add nodes, the execution of the user-data script on a node may fail. We recommend that you log on to the node and run the grep cloud-init /var/log/messages command to view the execution log and check whether the script succeeded.

    CloudMonitor Agent

    Specify whether to install the CloudMonitor agent. After you install the CloudMonitor agent on ECS nodes, you can view the monitoring information about the nodes in the CloudMonitor console.

    Note

    This parameter takes effect only on newly added nodes and does not take effect on existing nodes. If you want to install the CloudMonitor agent on an existing ECS node, go to the CloudMonitor console.

    Public IP

    Specify whether to assign a public IPv4 address to each node. If you clear the check box, no public IP address is allocated. If you select the check box, you must also set the Bandwidth Billing Method and Peak Bandwidth parameters.

    Note

    This parameter takes effect only on newly added nodes and does not take effect on existing nodes. If you want to enable an existing node to access the Internet, you must create an EIP and associate the EIP with the node. For more information, see Associate an EIP with an ECS instance.

    Custom Security Group

    You can select Basic Security Group or Advanced Security Group, but only one security group type. After the node pool is created, you cannot modify its security groups or change the security group type. For more information about security groups, see Overview.

    Important
    • To use custom security groups, apply to be added to the whitelist in Quota Center.

    • Each ECS instance supports up to five security groups. Make sure that the quota of security groups for your ECS instance is sufficient. For more information about security group limits and how to increase the quota limit of security groups for your ECS instance, see Security group limits.

    • If you select an existing security group, the system does not automatically configure security group rules. This may cause errors when you access the nodes in the cluster. You must manually configure security group rules. For more information about how to manage security group rules, see Configure security group rules to enforce access control on ACK clusters.

    RDS Whitelist

    Click Select RDS Instance to add node IP addresses to the whitelist of an ApsaraDB RDS instance.

    Deployment Set

    Important
    • To use the deployment set feature, apply to be added to the whitelist in the Quota Center console.

    • You cannot change the deployment set used by control planes after the control planes are created.

    • After you select a deployment set, the maximum number of nodes that can be created in the node pool is limited. By default, the maximum number of nodes supported by a deployment set is 20 × Number of zones. The number of zones depends on the number of vSwitches. Exercise caution when you select the deployment set. To avoid node creation failures, make sure that the ECS quota of the deployment set that you select is sufficient.

    You need to first create a deployment set in the ECS console and then specify the deployment set when creating a node pool in the ACK console. For more information about how to create a deployment set, see Create a deployment set.

    You can use a deployment set to distribute your ECS instances to different physical servers to ensure high service availability and implement underlying disaster recovery. If you specify a deployment set when you create ECS instances, the instances are created and distributed based on the deployment strategy that you preset for the deployment set within the specified region. For more information, see Best practices for associating deployment sets with node pools.

    Worker RAM Role

    You can assign a worker RAM role to a node pool to reduce the potential risk of sharing a worker RAM role among all nodes in the cluster.

    • Default Role: The node pool uses the default worker RAM role created by the cluster.

    • Custom: The node pool uses the specified role as the worker RAM role. The default role is used when this parameter is left empty. For more information, see Use custom worker RAM roles.

    Important

    ACK managed clusters that run Kubernetes 1.22 or later are supported.

    Private Pool Type

    Note

    This parameter is in canary release. To use this feature, submit a ticket.

    Valid values: Open, Do Not Use, and Specified.

    • Open: The system automatically matches an open private pool. If no match is found, resources in the public pool are used.

    • Do Not Use: No private pool is used. Only resources in the public pool are used.

    • Specified: Specify a private pool by ID. If the specified private pool is unavailable, ECS instances fail to start up.

    For more information, see Private pools.

  4. Click Confirm Order.

    If the Status column of the node pool in the node pool list displays Initializing, the node pool is being created. After the node pool is created, the Status column of the node pool displays Active.

What to do next

After the node pool is created, in the node pool list, you can perform one of the operations that are described in the following table.

UI

Description

References

Sync Node Pool

If the node information is abnormal, you can synchronize the node pool.

N/A

Details

View the details of the node pool.

N/A

Edit

Modify the configurations of the node pool. For example, you can modify the vSwitch, managed node pool settings, billing method, instance type, or auto scaling settings of the node pool.

Modify a node pool

Monitor

View basic monitoring information about Elastic Compute Service (ECS) instances collected by Managed Service for Prometheus.

Monitor nodes

Scale

Adjust the expected number of nodes to scale the node pool. This helps reduce costs.

Scale a node pool

Logon Mode

Modify the logon mode of the node pool. You can choose between the Key Pair and Password logon modes.

The Basic settings section of this topic

Configure Managed Node Pool

Configure managed node pool settings, such as the auto recovery rule, auto update rule, and auto CVE vulnerability patching.

The Basic settings section of this topic

Add Existing Node

Automatically or manually add existing ECS instances to the cluster.

Add existing ECS instances to an ACK cluster

Clone

Clone a node pool that contains the expected number of nodes based on the current node pool configurations.

N/A

Node Repair

If an exception occurs on the nodes in the managed node pool, ACK automatically repairs the nodes.

Auto repair

CVE Patching (OS)

Patch high-risk CVE vulnerabilities in nodes with a few clicks.

CVE patching

Kubelet Configuration

Modify the kubelet configurations.

Customize the kubelet parameters of a node pool

OS Configuration

If the default OS parameters of a Linux system do not meet your business requirements, you can customize the OS parameters of nodes within the node pool.

Customize the OS parameters of a node pool

Change Operating System

Change the operating system type of nodes or upgrade the operating system version.

N/A

Kubelet Update

Update nodes at the node pool level, including updates to the kubelet and container runtime.

Node pool updates

Delete

Delete the current node pool to save costs.

Delete a node pool

References

  • When a node is no longer required, remove it by following the steps in Remove nodes.

  • ACK reserves a certain amount of node resources to run Kubernetes components and system processes. For more information, see Resource reservation policy.

  • You can use the node scaling feature to enable ACK to automatically scale nodes when resources in the current cluster cannot fulfill pod scheduling. For more information, see Overview of node scaling.