If the default parameter settings of the node OS, such as Linux, do not meet your business requirements, you can customize the OS parameters of your node pools to improve OS performance. After you customize the OS parameters of a node pool, Container Service for Kubernetes (ACK) updates the nodes in the node pool in batches. The new OS parameters immediately take effect on existing nodes in the node pool. Newly added nodes also use the new OS parameters.
Limits
This feature is supported only by ACK clusters that run Kubernetes 1.28 or later. For more information, see Create an ACK managed cluster, Create an ACK dedicated cluster, and Create an ACK edge cluster. To update an ACK cluster, see Manually update ACK clusters.
Usage notes
Dynamically modifying node OS configurations may change the configurations of existing pods on nodes and cause pods to be recreated. Before you modify node OS configurations, we recommend that you ensure the high availability of your applications.
Modifications to OS parameters may affect the Linux kernel and cause node performance degradation or even node unavailability, which in turn may affect your applications. Before you modify an OS parameter in the production environment, we recommend that you understand the purpose of the parameter and test the impact of the change.
Do not use methods other than the ACK console to modify OS parameters, especially parameters that cannot be customized in the console. Doing so may lead to node unavailability or cause your modifications to be overwritten. For example, if you manually modify the /etc/sysctl.d/99-k8s.conf file from the CLI, your changes may be overwritten when the system performs cluster O&M operations, such as cluster updates or custom parameter changes.
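Before you change a parameter in the console, you may want to check its current value on a node. The following Python sketch is read-only and relies only on the standard Linux sysctl layout under /proc/sys; it does not modify anything:

```python
# Read-only sketch: inspect the current values of kernel parameters on a node
# before customizing them in the ACK console. Nothing is modified.

from pathlib import Path

def sysctl_path(name: str) -> Path:
    """Map a sysctl name such as 'net.ipv4.tcp_mem' to its /proc/sys file."""
    return Path("/proc/sys") / name.replace(".", "/")

def read_sysctl(name: str):
    """Return the current value of a sysctl parameter, or None if unavailable."""
    try:
        return sysctl_path(name).read_text().strip()
    except OSError:
        return None

if __name__ == "__main__":
    for name in ("fs.file-max", "fs.nr_open", "kernel.pid_max"):
        print(f"{name} = {read_sysctl(name)}")
```

Running the script on a node prints the effective values, which you can compare with the defaults listed in the table below.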
Customizable sysctl parameters in the ACK console
| Parameter | Description | Default | Suggested value range |
| --- | --- | --- | --- |
| fs.aio-max-nr | The maximum number of asynchronous I/O operations supported by the system. | 65536 | [65536, 6553500] |
| fs.file-max | The maximum number of file handles that can be allocated by the system. | 2097152 | [8192, 12000500] |
| fs.inotify.max_user_watches | The maximum number of inotify watches that can be created by a user. | 524288 | [524288, 2097152] |
| fs.nr_open | The maximum number of file descriptors that can be allocated by a process. | 1048576 | [1000000, 20000500]. The value of this parameter must be less than the value of fs.file-max. |
| kernel.pid_max | The maximum number of process IDs (PIDs) that can be allocated by the system. | 4194303 | > 1048575 |
| kernel.threads-max | The maximum number of threads that can be created by the system. | 504581 | > 500000 |
| net.core.netdev_max_backlog | The maximum number of packets that can be queued on the input side when an interface receives packets faster than the kernel can process them. | 16384 | [1000, 3240000] |
| net.core.optmem_max | The maximum ancillary buffer size allowed per socket. Unit: bytes. | 20480 | [20480, 4194304] |
| net.core.rmem_max | The maximum receive buffer size allowed per socket. Unit: bytes. | 16777216 | [212992, 134217728] |
| net.core.wmem_max | The maximum send buffer size allowed per socket. Unit: bytes. | 16777216 | [212992, 134217728] |
| net.core.wmem_default | The default send buffer size of a socket. Unit: bytes. | 212992 | ≥ 212992 |
| net.ipv4.tcp_mem | The amount of memory that can be used by the TCP stack. Unit: pages (typically 4 KB per page). The value consists of three integers that specify the minimum, pressure, and maximum memory watermarks for the TCP stack. | Dynamically calculated based on the total memory of the system. | The three values must increase in sequence. Minimum value: 80000. |
| net.ipv4.neigh.default.gc_thresh1 | Garbage collection (GC) settings for the Address Resolution Protocol (ARP) cache. gc_thresh1 is the minimum number of cached entries; GC is not triggered if fewer entries are cached. | 128 | [128, 80000] |
| net.ipv4.neigh.default.gc_thresh2 | The soft maximum number of cached ARP entries. | 1024 | [512, 90000] |
| net.ipv4.neigh.default.gc_thresh3 | The hard maximum number of cached ARP entries. | 8192 | [1024, 100000] |
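As a quick sanity check before submitting custom values, the suggested ranges above can be validated programmatically. The following Python sketch is illustrative only; it encodes a subset of the table and the cross-parameter rule for fs.nr_open stated in the table:

```python
# Sketch: validate proposed sysctl values against the suggested ranges
# from the table above before entering them in the ACK console.

SUGGESTED_RANGES = {
    "fs.aio-max-nr": (65536, 6553500),
    "fs.file-max": (8192, 12000500),
    "fs.inotify.max_user_watches": (524288, 2097152),
    "fs.nr_open": (1000000, 20000500),
    "net.core.netdev_max_backlog": (1000, 3240000),
}

def validate(params: dict) -> list:
    """Return a list of human-readable problems; an empty list means all checks pass."""
    problems = []
    for name, value in params.items():
        lo, hi = SUGGESTED_RANGES.get(name, (None, None))
        if lo is not None and not (lo <= value <= hi):
            problems.append(f"{name}={value} outside suggested range [{lo}, {hi}]")
    # Cross-parameter rule from the table: fs.nr_open must stay below fs.file-max.
    if "fs.nr_open" in params and "fs.file-max" in params:
        if params["fs.nr_open"] >= params["fs.file-max"]:
            problems.append("fs.nr_open must be less than fs.file-max")
    return problems

print(validate({"fs.file-max": 2097152, "fs.nr_open": 1048576}))  # []
print(validate({"fs.file-max": 1000000, "fs.nr_open": 1048576}))  # one problem
```

Note that these are suggested ranges, not hard limits enforced by the kernel; the console applies its own validation when you submit the values.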
Customizable THP parameters in the ACK console
The Transparent Huge Pages (THP) feature is a common feature in the Linux kernel. THP can merge small pages (typically 4 KB in size) into huge pages (typically 2 MB or larger in size) to reduce the number of page table entries (PTEs) and the number of memory accesses. This reduces pressure on the translation lookaside buffer (TLB) and improves application performance.
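The PTE reduction follows from simple arithmetic: one 2 MB huge page covers the same address range as 512 standard 4 KB pages, so it replaces 512 entries with one. A small illustrative sketch:

```python
# Arithmetic sketch: how many 4 KB page-table entries a single 2 MB huge page
# replaces, and the resulting PTE counts for a given mapping size.

SMALL_PAGE = 4 * 1024          # 4 KB standard page
HUGE_PAGE = 2 * 1024 * 1024    # 2 MB transparent huge page

def pte_count(mapping_bytes: int, page_size: int) -> int:
    """Number of page-table entries needed to map a region (rounded up)."""
    return -(-mapping_bytes // page_size)  # ceiling division

ratio = HUGE_PAGE // SMALL_PAGE
print(f"One 2 MB huge page replaces {ratio} PTEs")  # 512

one_gib = 1024 * 1024 * 1024
print(pte_count(one_gib, SMALL_PAGE))  # 262144 entries with 4 KB pages
print(pte_count(one_gib, HUGE_PAGE))   # 512 entries with 2 MB pages
```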
This feature is in canary release. To use it, submit a ticket.
The default values in the following table are the default settings used by systems that run Alibaba Cloud Linux 2 with kernel version 4.19.91-18 or later.
| Parameter | Description | Default | Valid values |
| --- | --- | --- | --- |
| transparent_enabled | Specifies whether to globally enable the THP feature. | always | always, madvise, never |
| transparent_defrag | Specifies whether to enable the THP defragmentation feature. After you enable THP defragmentation, small pages can be merged into huge pages, which reduces the page table size and improves system performance. | madvise | always, defer, defer+madvise, madvise, never |
| khugepaged_defrag | Specifies whether the khugepaged daemon defragments memory in the background to merge small pages into huge pages. This operation locks the memory that is being defragmented and therefore may cause memory access latency. | 1 | 0 (disabled), 1 (enabled) |
| khugepaged_alloc_sleep_millisecs | If THP allocation fails, the khugepaged daemon waits the specified period of time before the next allocation attempt. Unit: milliseconds. | 60000, which is equivalent to 60 seconds. | A non-negative integer |
| khugepaged_scan_sleep_millisecs | The interval at which the khugepaged daemon wakes up to scan memory. Unit: milliseconds. | 10000, which is equivalent to 10 seconds. | A non-negative integer |
| khugepaged_pages_to_scan | The number of pages that the khugepaged daemon scans each time it wakes up. | 4096 | A positive integer |
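The defaults above determine how aggressively khugepaged scans memory: pages_to_scan pages per wakeup, one wakeup every scan_sleep_millisecs. A rough, illustrative estimate (assuming 4 KB pages):

```python
# Illustrative estimate of khugepaged scan throughput from the defaults above.

PAGE_SIZE = 4 * 1024  # assume 4 KB pages

def scan_rate_bytes_per_sec(pages_to_scan: int, scan_sleep_ms: int) -> float:
    """Approximate amount of memory scanned per second by khugepaged."""
    wakeups_per_sec = 1000 / scan_sleep_ms
    return pages_to_scan * PAGE_SIZE * wakeups_per_sec

rate = scan_rate_bytes_per_sec(pages_to_scan=4096, scan_sleep_ms=10000)
print(f"{rate / (1024 * 1024):.1f} MiB/s")  # ~1.6 MiB/s at the defaults
```

Decreasing scan_sleep_millisecs or increasing pages_to_scan makes khugepaged more aggressive at the cost of more background CPU usage and memory locking.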
Customize the OS parameters of a node pool in the ACK console
After you customize the OS parameters of a node pool, ACK updates the nodes in the node pool in batches, and the new parameters take effect immediately on existing nodes and on newly added nodes. Because the changes may affect the applications running on existing nodes, we recommend that you perform this operation during off-peak hours.
1. Log on to the ACK console. In the left-side navigation pane, click Clusters.
2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Nodes > Node Pools.
3. On the Node Pools page, find the node pool that you want to manage and choose More > OS Configuration in the Actions column.
4. Read the configuration notes. Click + Custom Parameters and select the parameters that you want to modify. Specify the Maximum Number of Nodes to Repair per Batch parameter. Then, click Submit and follow the instructions to complete the subsequent operations.
After you specify the Maximum Number of Nodes to Repair per Batch parameter, ACK applies the new OS configurations to the nodes in the node pool in batches of at most that size. You can view the progress of the update in the Event Rotation section, and you can pause, resume, or cancel the update. Pausing the update lets you verify the nodes that have already been updated. After you pause the update, the nodes in the current batch are still updated; the remaining batches are not updated until you resume the update.
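For planning purposes, the batch schedule follows directly from the node count and the Maximum Number of Nodes to Repair per Batch value. A minimal sketch of the resulting batch sizes (illustrative only; the console performs the actual scheduling):

```python
# Sketch: how many batches an OS configuration rollout takes for a node pool,
# given the "Maximum Number of Nodes to Repair per Batch" value set in the console.

def batch_plan(total_nodes: int, max_per_batch: int) -> list:
    """Split total_nodes into consecutive batch sizes, each at most max_per_batch."""
    batches = []
    remaining = total_nodes
    while remaining > 0:
        size = min(max_per_batch, remaining)
        batches.append(size)
        remaining -= size
    return batches

print(batch_plan(10, 3))  # [3, 3, 3, 1]
```

A smaller batch size gives you more checkpoints at which to pause and verify nodes, at the cost of a longer overall rollout.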
Important: We recommend that you complete the update at the earliest opportunity. If the update remains paused for seven days, the system automatically cancels the update and deletes the related events and logs.