Container Service for Kubernetes: kube-scheduler

Last Updated: Feb 03, 2026

kube-scheduler is a control plane component that schedules pods to suitable nodes in a cluster. It considers both node resource usage and the scheduling requirements of the pods.

Component introduction

Introduction to kube-scheduler

The kube-scheduler determines which nodes can run each pod in the scheduling queue by comparing the pod's declared Request with each node's Allocatable capacity. It then sorts the valid nodes and binds the pod to a suitable one. By default, the kube-scheduler distributes pods evenly across nodes based on their Request values. For more information, see the official Kubernetes documentation for kube-scheduler.
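A pod declares its Request in the container spec. In a minimal sketch like the following (the pod name, image, and resource values are placeholders), the scheduler considers only nodes whose Allocatable CPU and memory can accommodate the sum of the declared requests:

```yaml
# Illustrative pod spec; name, image, and resource values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: request-demo
spec:
  containers:
    - name: app
      image: nginx:latest
      resources:
        requests:
          cpu: "500m"      # compared against node Allocatable CPU
          memory: "512Mi"  # compared against node Allocatable memory
```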

Introduction to Filter and Score plugins

The Kubernetes Scheduling Framework organizes complex scheduling logic into plugins, which provide flexible extension points. Filter plugins remove the nodes that cannot run a given pod. Score plugins then score the remaining nodes; the score indicates how suitable each node is for running the pod.

The following sections list the Filter and Score scheduling plugins that are enabled by default, and their default weights, for each kube-scheduler version.

Component version: v1.30.1-aliyun.6.5.4.fcac2bdf

Filter plugins:

  • Default open source plugins: same as in the open source community. For more information, see v1.30.1 default Filter plugins.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins: same as in the open source community. For more information, see v1.30.1 default Score plugins.

  • Default ACK plugins and their default weights:

    • NodeNUMAResource: 1

    • ipawarescheduling: 1

    • gpuNUMAJointAllocation: 1

    • PreferredNode: 10000

    • gpushare: 20000

    • gputopology: 1

    • numa: 1

    • EciScheduling: 2

    • NodeAffinity: 2

    • elasticresource: 1000000

    • resourcepolicy: 1000000

    • NodeBEResourceLeastAllocated: 1

    • loadawarescheduling: 10

Component version: v1.28.3-aliyun-6.5.2.7ff57682

Filter plugins:

  • Default open source plugins: same as in the open source community. For more information, see v1.28.3 default Filter plugins.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins: same as in the open source community. For more information, see v1.28.3 default Score plugins.

  • Default ACK plugins and their default weights:

    • NodeNUMAResource: 1

    • ipawarescheduling: 1

    • gpuNUMAJointAllocation: 1

    • PreferredNode: 10000

    • gpushare: 20000

    • gputopology: 1

    • numa: 1

    • EciScheduling: 2

    • NodeAffinity: 2

    • elasticresource: 1000000

    • resourcepolicy: 1000000

    • NodeBEResourceLeastAllocated: 1

    • loadawarescheduling: 10

Component version: v1.26.3-aliyun-6.6.1.605b8a4f

Filter plugins:

  • Default open source plugins: same as in the open source community. For more information, see v1.26.3 default Filter plugins.

  • Default ACK plugins:

    • NodeNUMAResource

    • topologymanager

    • EciPodTopologySpread

    • ipawarescheduling

    • BatchResourceFit

    • PreferredNode

    • gpushare

    • NetworkTopology

    • CapacityScheduling

    • elasticresource

    • resourcepolicy

    • gputopology

    • ECIBinderV1

    • loadawarescheduling

    • EciScheduling

Score plugins:

  • Default open source plugins: same as in the open source community. For more information, see v1.26.3 default Score plugins.

  • Default ACK plugins and their default weights:

    • NodeNUMAResource: 1

    • ipawarescheduling: 1

    • gpuNUMAJointAllocation: 1

    • PreferredNode: 10000

    • gpushare: 20000

    • gputopology: 1

    • numa: 1

    • EciScheduling: 2

    • NodeAffinity: 2

    • elasticresource: 1000000

    • resourcepolicy: 1000000

    • NodeBEResourceLeastAllocated: 1

    • loadawarescheduling: 10
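The relative influence of Score plugins is determined by their weights: a node's final score is the weighted sum of the individual plugin scores. As a rough illustration of how such weights are expressed, the upstream KubeSchedulerConfiguration format declares them as follows. This is a sketch only; ACK manages the kube-scheduler configuration for you, and the plugin names and weights here are taken from the table above:

```yaml
# Sketch of the upstream KubeSchedulerConfiguration format for Score weights.
# ACK manages kube-scheduler centrally; this is for illustration only.
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        enabled:
          - name: loadawarescheduling
            weight: 10
          - name: gpushare
            weight: 20000
```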

Plugin functions

The following list describes each ACK plugin and links to its related documentation.

  • NodeNUMAResource: Manages CPU topology-aware scheduling. See Enable CPU topology-aware scheduling.

  • topologymanager: Manages node NUMA resource allocation. See Enable NUMA topology-aware scheduling.

  • EciPodTopologySpread: Enhances topology spread constraints in virtual node scheduling scenarios. See Enable virtual node scheduling policies for a cluster.

  • ipawarescheduling: Schedules pods based on the remaining IP addresses on nodes. See Scheduling FAQ.

  • BatchResourceFit: Enables and manages the colocation of multi-type workloads. See Best practices for colocation of multi-type workloads.

  • PreferredNode: Reserves nodes for node pools with auto scaling enabled. See Node auto scaling.

  • gpushare: Manages shared GPU scheduling. See Shared GPU scheduling.

  • NetworkTopology: Manages network topology-aware scheduling. See Topology-aware scheduling.

  • CapacityScheduling: Manages capacity scheduling. See Use Capacity Scheduling.

  • elasticresource: Manages ECI elastic scheduling. See Use ElasticResource to implement ECI elastic scheduling (discontinued).

  • resourcepolicy: Manages the scheduling of custom elastic resources. See Priority-based scheduling of custom elastic resources.

  • gputopology: Manages GPU topology-aware scheduling. See GPU topology-aware scheduling.

  • ECIBinderV1: Binds virtual nodes in ECI elastic scheduling scenarios. See Schedule pods to ECI.

  • loadawarescheduling: Manages load-aware scheduling. See Use load-aware scheduling.

  • EciScheduling: Manages virtual node scheduling. See Enable virtual node scheduling policies for a cluster.

Instructions

The kube-scheduler component is installed by default and requires no configuration. To benefit from the latest feature optimizations and bug fixes, upgrade the component to the latest version. To upgrade the component, log on to the Container Service Management Console, click the target cluster, and in the left-side navigation pane, choose Operations Management > Component Management.

Change history

Version 1.34 change history

Version number

Change date

Changes

v1.34.0-apsara.6.11.8.a32868e8

January 5, 2026

  • New features:

    • Optimized the scheduling efficiency of shared GPUs.

    • Added multiple metrics for serverless resource scheduling, such as processing latency, timestamp tracking, and concurrent configurations, to enhance the observability of serverless workloads.

  • Bug fixes:

    • Fixed the logic for updating scheduling allocation information annotations for GPUShare pods in the Reserve phase to ensure the correct persistence of scheduling results.

    • Fixed an issue where NUMA IDs were not correctly removed.

    • Fixed an issue where NUMA allocation results could not be reconstructed from pod annotations after a scheduler restart, which caused uneven resource allocation.

    • Fixed an issue where the NominatedNodeName of a pod was not correctly cleared under specific conditions, such as insufficient resource inventory or concurrent preemption.

    • Fixed an issue where resources for pods with a NominatedNodeName were not reserved by the quota when Reservation was disabled.

    • Fixed an issue where a gang did not fail as a whole when preemption failed. This ensures consistent behavior for task groups and prevents multiple invalid scheduling attempts for multi-replica jobs.

    • Fixed a preemption failure issue in NetworkTopology preemption scenarios caused by incorrect object calls for StateData and Filter.

    • Fixed a scheduling issue with self-built Virtual Kubelets that have GPUs.

    • Fixed a scheduling issue that occurred when multiple containers requested a full GPU card.

    • Optimized the Min/Max Guarantee logic for ElasticQuota to ensure that a quota exceeding its Min value can only preempt itself after scheduling.

v1.34.0-apsara.6.11.7.43cab345

December 8, 2025

  • New features:

    • Network topology-aware scheduling now supports EP size scheduling. For PyTorchJob, it automatically places pods contiguously based on their index during scheduling.

  • Bug fixes:

    • Optimized auto scaling efficiency.

    • The scheduler no longer updates the PodScheduled condition when scheduling of an ACS pod is triggered. This prevents node pool auto scaling from being triggered.

    • Fixed an issue where the scheduler could not read the ACS GPU partitions of scheduled pods in the cluster after a restart.

v1.34.0-apsara.6.11.6.3c0b732b

November 10, 2025

  • Bug fixes:

    • Fixed a memory leak issue in remaining IP-aware scheduling.

    • Fixed a statistics error that might occur if the CapacityScheduling quota was updated before the pod was bound.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang were used together.

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU capabilities.

    • Optimized scheduling capabilities related to PersistentVolumeClaims (PVCs) to increase the scheduling speed when creating pods with disks.

v1.34.0-apsara.6.11.5.3c117f21

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using ACS with the `alibabacloud.com/acs: "true"` label or using ECI with the `alibabacloud.com/eci: "true"` label did not take effect.

    • Fixed a scheduling issue that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests were made for ACS computing power.
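The ACS and ECI labels mentioned above are set in a pod's metadata. A minimal sketch (pod name and image are placeholders):

```yaml
# Illustrative only: request ACS computing power via the
# alibabacloud.com/acs label described above.
apiVersion: v1
kind: Pod
metadata:
  name: acs-demo
  labels:
    alibabacloud.com/acs: "true"  # use alibabacloud.com/eci: "true" for ECI
spec:
  containers:
    - name: app
      image: nginx:latest
```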

v1.34.0-apsara.6.11.3.ff6b62d8

September 17, 2025

Supported all previous features in ACK clusters of version 1.34.

Version 1.33 change history

Version number

Change date

Changes

v1.33.0-apsara.6.11.8.709bb6e6

January 5, 2026

  • New features:

    • Optimized the scheduling efficiency of shared GPUs.

    • Added multiple metrics for serverless resource scheduling, such as processing latency, timestamp tracking, and concurrent configurations, to enhance the observability of serverless workloads.

  • Bug fixes:

    • Fixed the logic for updating scheduling allocation information annotations for GPUShare pods in the Reserve phase to ensure the correct persistence of scheduling results.

    • Fixed an issue where NUMA IDs were not correctly removed.

    • Fixed an issue where NUMA allocation results could not be reconstructed from pod annotations after a scheduler restart, which caused uneven resource allocation.

    • Fixed an issue where the NominatedNodeName of a pod was not correctly cleared under specific conditions, such as insufficient resource inventory or concurrent preemption.

    • Fixed an issue where resources for pods with a NominatedNodeName were not reserved by the quota when Reservation was disabled.

    • Fixed an issue where a gang did not fail as a whole when preemption failed. This ensures consistent behavior for task groups and prevents multiple invalid scheduling attempts for multi-replica jobs.

    • Fixed a preemption failure issue in NetworkTopology preemption scenarios caused by incorrect object calls for StateData and Filter.

    • Fixed a scheduling issue with self-built Virtual Kubelets that have GPUs.

    • Fixed a scheduling issue that occurred when multiple containers requested a full GPU card.

    • Optimized the Min/Max Guarantee logic for ElasticQuota to ensure that a quota exceeding its Min value can only preempt itself after scheduling.

v1.33.0-apsara.6.11.7.4a6779f8

December 5, 2025

  • New features:

    • Network topology-aware scheduling now supports EP size scheduling. For PyTorchJob, it automatically places pods contiguously based on their index during scheduling.

  • Bug fixes:

    • Optimized auto scaling efficiency.

    • The scheduler no longer updates the Pod Scheduled condition when an ACS instance is created. This prevents the triggering of node pool auto scaling.

    • Fixed an issue where the scheduler could not read the ACS GPU partitions of scheduled pods in the cluster after a restart.

    • Fixed an issue where a pod could not be scheduled if its PVC had a SelectedNode.

v1.33.0-apsara.6.11.6.2fce98cb

November 10, 2025

  • Bug fixes:

    • Fixed a memory leak issue in remaining IP-aware scheduling.

    • Fixed a statistics error that might occur if the CapacityScheduling quota was updated before the pod was bound.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang were used together.

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU capabilities.

    • Optimized scheduling capabilities related to PVCs to increase the scheduling speed when creating pods with disks.

v1.33.0-apsara.6.11.5.8dd6f5f4

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using ACS with the `alibabacloud.com/acs: "true"` label or using ECI with the `alibabacloud.com/eci: "true"` label did not take effect.

v1.33.0-apsara.6.11.4.77470105

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling issue that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests were made for ACS computing power.

v1.33.0-apsara.6.11.3.ed953a31

September 8, 2025

  • New features:

    • ElasticQuotaTree now supports using the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • NetworkTopology now supports using a constraint in JobNetworkTopology to declare discretized distribution.

  • Bug fixes:

    • Fixed an issue where the scheduler component might crash when PodTopologySpread was used.

v1.33.0-aliyun.6.11.2.330dcea7

August 19, 2025

  • Optimized the scheduling determinism of GOAT to prevent nodes from being considered not ready when `node.cloudprovider.kubernetes.io/uninitialized` and `node.kubernetes.io/unschedulable` taints exist on the nodes.

  • Fixed an issue where the fairness check of ElasticQuotaTree considered quotas with an empty Min value or an empty Request within the quota as not met.

  • Fixed an issue where the scheduler component might crash when creating ACS instances.

  • Fixed an issue where the scheduler reported an error when the resources of an init container were empty. (29d1951)

v1.33.0-aliyun.6.11.1.382cd0a6

July 25, 2025

v1.33.0-aliyun.6.11.0.87e9673b

July 18, 2025

  • Optimized the scheduling determinism of GOAT to prevent determinism from failing due to concurrent NodeReady states during pod scheduling.

  • Fixed an issue where the pod count for a gang was incorrect when the PodGroup custom resource (CR) was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in remaining IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to a node when the node had insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. This ensures that when a quota with unmet resource requirements has pending pods, no new pods are scheduled for quotas that have already met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the following three types of labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node scheduling-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. If a unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the unit is skipped.
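The ScheduleAdmission feature described above is driven by a pod annotation. A minimal sketch follows; the annotation value shown is an assumption, because the source states only that pods carrying the alibabacloud.com/schedule-admission annotation are not scheduled:

```yaml
# Sketch of the ScheduleAdmission feature. The annotation value is an
# assumption; pod name and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hold-demo
  annotations:
    alibabacloud.com/schedule-admission: "false"  # assumed value; marks the pod as temporarily unschedulable
spec:
  containers:
    - name: app
      image: nginx:latest
```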

v1.33.0-aliyun.6.9.4.8b58e6b4

June 10, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might become invalid during continuous pod scheduling.

  • Fixed an issue where scheduling occasionally became abnormal when ResourcePolicy was used.

  • Optimized the behavior of the scheduler when interacting with node pools that have auto scaling enabled.

  • Fixed an issue where the pod count was incorrect in ResourcePolicy for priority-based scheduling of custom elastic resources.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless computing power.

v1.33.0-aliyun.6.9.2.09bce458

April 28, 2025

Supported all previous features in ACK clusters of version 1.33.

Version 1.32 change history

Version number

Change date

Changes

v1.32.0-apsara.6.11.8.df9f2fa6

January 5, 2026

  • New features:

    • Optimized the scheduling efficiency of shared GPUs.

    • Added multiple metrics for serverless resource scheduling, such as processing latency, timestamp tracking, and concurrent configurations, to enhance the observability of serverless workloads.

  • Bug fixes:

    • Fixed the logic for updating scheduling allocation information annotations for GPUShare pods in the Reserve phase to ensure the correct persistence of scheduling results.

    • Fixed an issue where NUMA IDs were not correctly removed.

    • Fixed an issue where NUMA allocation results could not be reconstructed from pod annotations after a scheduler restart, which caused uneven resource allocation.

    • Fixed an issue where the NominatedNodeName of a pod was not correctly cleared under specific conditions, such as insufficient resource inventory or concurrent preemption.

    • Fixed an issue where resources for pods with a NominatedNodeName were not reserved by the quota when Reservation was disabled.

    • Fixed an issue where a gang did not fail as a whole when preemption failed. This ensures consistent behavior for task groups and prevents multiple invalid scheduling attempts for multi-replica jobs.

    • Fixed a preemption failure issue in NetworkTopology preemption scenarios caused by incorrect object calls for StateData and Filter.

    • Fixed a scheduling issue with self-built Virtual Kubelets that have GPUs.

    • Fixed a scheduling issue that occurred when multiple containers requested a full GPU card.

    • Optimized the Min/Max Guarantee logic for ElasticQuota to ensure that a quota exceeding its Min value can only preempt itself after scheduling.

v1.32.0-apsara.6.11.7.4489ebf4

December 10, 2025

  • Bug fixes:

    • Optimized auto scaling efficiency.

    • The scheduler no longer updates the Pod Scheduled condition when an ACS instance is created. This prevents the triggering of node pool auto scaling.

    • Fixed an issue where the scheduler could not read the ACS GPU partitions of scheduled pods in the cluster after a restart.

v1.32.0-apsara.6.11.6.03248691

November 10, 2025

  • Bug fixes:

    • Fixed a memory leak issue in remaining IP-aware scheduling.

    • Fixed a statistics error that might occur if the CapacityScheduling quota was updated before the pod was bound.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang were used together.

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU capabilities.

    • Optimized scheduling capabilities related to PVCs to increase the scheduling speed when creating pods with disks.

v1.32.0-apsara.6.11.5.c774d3c3

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using ACS with the `alibabacloud.com/acs: "true"` label or using ECI with the `alibabacloud.com/eci: "true"` label did not take effect.

v1.32.0-apsara.6.11.4.4a4f4843

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling issue that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests were made for ACS computing power.

v1.32.0-apsara.6.11.3.b651c575

September 12, 2025

  • New features:

    • ElasticQuotaTree now supports using the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • NetworkTopology now supports using a constraint in JobNetworkTopology to declare discretized distribution.

v1.32.0-aliyun.6.11.2.58302423

August 21, 2025

  • Optimized the scheduling determinism of GOAT to prevent nodes from being considered not ready when `node.cloudprovider.kubernetes.io/uninitialized` and `node.kubernetes.io/unschedulable` taints exist on the nodes.

  • Fixed an issue where the fairness check of ElasticQuotaTree considered quotas with an empty Min value or an empty Request within the quota as not met.

  • Fixed an issue where the scheduler component might crash when creating ACS instances.

v1.32.0-aliyun.6.11.1.ab632d8c

July 25, 2025

v1.32.0-aliyun.6.11.0.0350a0e7

July 18, 2025

  • Optimized the scheduling determinism of GOAT to prevent determinism from failing due to concurrent NodeReady states during pod scheduling.

  • Fixed an issue where the pod count for a gang was incorrect when the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in remaining IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to a node when the node had insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. This ensures that when a quota with unmet resource requirements has pending pods, no new pods are scheduled for quotas that have already met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the following three types of labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node scheduling-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. If a unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the unit is skipped.

v1.32.0-aliyun.6.9.4.d5a8a355

June 4, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might become invalid during continuous pod scheduling.

  • Fixed an issue where scheduling occasionally became abnormal when ResourcePolicy was used.

  • Fixed an issue where ElasticQuota preemption was abnormal.

v1.32.0-aliyun.6.9.3.515ac311

May 14, 2025

  • Optimized the behavior of the scheduler when interacting with node pools that have auto scaling enabled.

  • Fixed an issue where the pod count was incorrect in ResourcePolicy for priority-based scheduling of custom elastic resources.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless computing power.

v1.32.0-aliyun.6.9.2.09bce458

April 16, 2025

  • Fixed an issue where the ElasticQuota preemption feature was abnormal.

  • Added support for scheduling pods to ACS GPU-HPN nodes in ACK clusters.

v1.32.0-aliyun.6.8.6.bd13955d

April 2, 2025

  • Fixed an issue in ACK serverless clusters where cloud disks of the WaitForFirstConsumer type were not created using the Container Storage Interface (CSI) plugin.

v1.32.0-aliyun.6.9.0.a1c7461b

February 28, 2025

  • Added support for node remaining IP-aware scheduling.

  • Added a plugin to support resource checks before Kube Queue tasks are dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.32.0-aliyun.6.8.5.28a2aed7

February 19, 2025

  • Fixed an issue where cloud disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max value was invalid after declaring PodLabels in priority-based scheduling of custom elastic resources.

v1.32.0-aliyun.6.8.4.2b585931

January 17, 2025

Supported all previous features in ACK clusters of version 1.32.

Version 1.31 change history

Version number

Change date

Changes

v1.31.0-apsara.6.11.5.28c6b51a

October 20, 2025

  • Bug fixes:

    • Fixed an issue where using ACS with the `alibabacloud.com/acs: "true"` label or using ECI with the `alibabacloud.com/eci: "true"` label did not take effect.

v1.31.0-apsara.6.11.4.69d7e1fa

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling issue that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests were made for ACS computing power.

v1.31.0-apsara.6.11.3.9b41ad4a

September 12, 2025

  • New features:

    • ElasticQuotaTree now supports using the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • NetworkTopology now supports using a constraint in JobNetworkTopology to declare discretized distribution.

    • Optimized the scheduling determinism of GOAT to prevent nodes from being considered not ready when `node.cloudprovider.kubernetes.io/uninitialized` and `node.kubernetes.io/unschedulable` taints exist on the nodes.

  • Bug fixes:

    • Fixed an issue where the fairness check of ElasticQuotaTree considered quotas with an empty Min value or an empty Request within the quota as not met.

    • Fixed an issue where the scheduler component might crash when creating ACS instances.

v1.31.0-aliyun.6.11.1.c9ed2f40

July 25, 2025

v1.31.0-aliyun.6.11.0.ea1f0f94

July 18, 2025

  • Optimized the scheduling determinism of GOAT to prevent determinism from failing due to concurrent NodeReady states during pod scheduling.

  • Fixed an issue where the pod count for a gang was incorrect when the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be incorrectly preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in remaining IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to a node when the node had insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. This ensures that when a quota with unmet resource requirements has pending pods, no new pods are scheduled for quotas that have already met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the following three types of labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node scheduling-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. If a unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the unit is skipped.

v1.31.0-aliyun.6.9.4.c8e540e8

June 4, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might become invalid during continuous pod scheduling.

  • Fixed an issue where scheduling occasionally became abnormal when ResourcePolicy was used.

  • Fixed an issue where ElasticQuota preemption was abnormal.

v1.31.0-aliyun.6.9.3.051bb0e8

May 14, 2025

  • Optimized the behavior of the scheduler when interacting with node pools that have auto scaling enabled.

  • Fixed an issue where the pod count was incorrect in ResourcePolicy for priority-based scheduling of custom elastic resources.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless computing power.

v1.31.0-aliyun.6.8.6.520f223d

April 2, 2025

  • Fixed an issue in ACK serverless clusters where cloud disks of the WaitForFirstConsumer type were not created using the CSI plugin.

v1.31.0-aliyun.6.9.0.8287816e

February 28, 2025

  • Added support for node remaining IP-aware scheduling.

  • Added a plugin to support resource checks before Kube Queue tasks are dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.31.0-aliyun.6.8.5.2c6ea085

February 19, 2025

  • Fixed an issue where cloud disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max value was invalid after declaring PodLabels in priority-based scheduling of custom elastic resources.

v1.31.0-aliyun.6.8.4.8f585f26

January 2, 2025

  • Priority-based scheduling of custom elastic resources:

    • Added support for ACS GPU.

    • Fixed an issue where ECI instances might leak when PVCs were used in an ACK serverless cluster.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.31.0-aliyun.6.8.3.eeb86afc

December 16, 2024

Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.

v1.31.0-aliyun.6.8.2.eeb86afc

December 5, 2024

Priority-based scheduling of custom elastic resources: Added support for defining PodAnnotations in a unit.

v1.31.0-aliyun.6.8.1.116b8e1f

December 2, 2024

  • Optimized the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.31.0-aliyun.6.7.1.1943173f

November 6, 2024

  • Priority-based scheduling of custom elastic resources

    • Added support for trigger-based autoscaling of pods.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed an issue that might occur when the ECS instance type changes.
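The deprecation note above replaces the `resource: elastic` unit type with a pod label. The following is a hedged sketch of the replacement, assuming the commonly documented ResourcePolicy schema (`scheduling.alibabacloud.com/v1alpha1`); only the label key comes from this changelog, and all other field names should be verified against the ResourcePolicy reference for your cluster version.

```yaml
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: example-policy                # hypothetical name
  namespace: default
spec:
  selector:
    app: demo                         # hypothetical pod label selector
  strategy: prefer
  units:
  - resource: ecs                     # instead of the deprecated `resource: elastic`
    podLabels:
      k8s.aliyun.com/resource-policy-wait-for-ecs-scaling: "true"
  - resource: eci
```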

v1.31.0-aliyun.6.7.0.740ba623

November 4, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even when ElasticQuotaTree was not present.

  • Priority-based scheduling of custom elastic resources

    • Added support for ACS-type units.

v1.31.0-aliyun.6.6.1.5bd14ab0

October 22, 2024

  • Fixed an issue where PodTopologySpread occasionally caused an invalid score.

  • Optimized the event messages for Coscheduling. The number of Coscheduling failures is now included in the event messages.

  • Optimized the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.31.0-aliyun.6.6.0.ba473715

September 13, 2024

Supported all previous features in ACK clusters of version 1.31.

Version 1.30 change history

Version number

Change date

Changes

v1.30.3-apsara.6.11.7.3cfed0f9

December 10, 2025

  • Bug fixes:

    • Optimized auto scaling efficiency.

    • The scheduler no longer updates the PodScheduled condition when an ACS instance is created. This prevents node pool auto scaling from being triggered.

    • Fixed an issue where the scheduler could not read the ACS GPU partitions of scheduled pods in the cluster after a restart.

v1.30.3-apsara.6.11.6.a298df6b

November 10, 2025

  • New features:

    • Added support for __IGNORE__RESOURCE__.

    • Added support for declaring a pod as temporarily unschedulable using the alibabacloud.com/schedule-admission annotation.

    • Added support for ACS shared GPU capabilities.

    • Optimized scheduling capabilities related to PVCs to increase the scheduling speed when creating pods with disks.

    • Fixed an issue where the ScheduleCycle was updated incorrectly when ResourcePolicy and Gang were used together.

    • ElasticQuotaTree now supports using the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

    • Optimized the scheduling determinism of GOAT to prevent nodes from being considered not ready when `node.cloudprovider.kubernetes.io/uninitialized` and `node.kubernetes.io/unschedulable` taints exist on the nodes.

  • Bug fixes:

    • Fixed a memory leak issue in remaining IP-aware scheduling.

    • Fixed a statistics error that might occur if the CapacityScheduling quota was updated before the pod was bound.

    • Fixed an issue where the fairness check of ElasticQuotaTree considered quotas with an empty Min value or an empty Request within the quota as not met.
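The alibabacloud.com/ignore-empty-resource annotation added in this release might be declared as in the following sketch. Only the annotation key is taken from this changelog; the ElasticQuotaTree layout follows the commonly documented structure, and the annotation placement should be verified against the quota scheduling documentation for your cluster version.

```yaml
apiVersion: scheduling.sigs.k8s.io/v1beta1
kind: ElasticQuotaTree
metadata:
  name: elasticquotatree              # commonly a single tree per cluster
  namespace: kube-system
  annotations:
    alibabacloud.com/ignore-empty-resource: "true"   # undeclared resource limits are ignored
spec:
  root:
    name: root
    max:
      cpu: "40"   # memory is intentionally not declared; with the annotation it is
    min:          # ignored rather than treated as a limit of 0
      cpu: "20"
```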

v1.30.3-apsara.6.11.3.bc707580

October 23, 2025

  • Bug fixes:

    • Fixed an issue where using ACS with the `alibabacloud.com/acs: "true"` label or using ECI with the `alibabacloud.com/eci: "true"` label did not take effect.

v1.30.3-apsara.6.11.2.463d59c9

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling issue that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests were made for ACS computing power.

v1.30.3-aliyun.6.11.1.c005a0b0

July 25, 2025

v1.30.3-aliyun.6.11.0.84cdcafb

July 18, 2025

  • Optimized the scheduling determinism of GOAT to prevent determinism from failing due to concurrent NodeReady states during pod scheduling.

  • Fixed an issue where the pod count for a gang was incorrect when the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in remaining IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to a node when the node had insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. This ensures that when a quota with unmet resource requirements has pending pods, no new pods are scheduled for quotas that have already met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the following three types of labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node scheduling-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. If a unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the unit is skipped.

v1.30.3-aliyun.6.9.4.818b6506

June 4, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might become invalid during continuous pod scheduling.

  • Fixed an issue where scheduling occasionally became abnormal when ResourcePolicy was used.

  • Fixed an issue where ElasticQuota preemption was abnormal.

v1.30.3-aliyun.6.9.3.ce7e2faf

May 14, 2025

  • Optimized the behavior of the scheduler when interacting with node pools that have auto scaling enabled.

  • Fixed an issue where the pod count was incorrect in ResourcePolicy for priority-based scheduling of custom elastic resources.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless computing power.

v1.30.3-aliyun.6.8.6.40d5fdf4

April 2, 2025

  • Fixed an issue in ACK serverless clusters where cloud disks of the WaitForFirstConsumer type were not created using the CSI plugin.

v1.30.3-aliyun.6.9.0.f08e56a7

February 28, 2025

  • Added support for node remaining IP-aware scheduling.

  • Added a plugin to support resource checks before Kube Queue tasks are dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.30.3-aliyun.6.8.5.af20249c

February 19, 2025

  • Fixed an issue where cloud disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max value was invalid after declaring PodLabels in priority-based scheduling of custom elastic resources.

v1.30.3-aliyun.6.8.4.946f90e8

January 2, 2025

  • Priority-based scheduling of custom elastic resources:

    • Added support for ACS GPU.

    • Fixed an issue where ECI instances might leak when PVCs were used in an ACK serverless cluster.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.30.3-aliyun.6.8.3.697ce9b5

December 16, 2024

Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.

v1.30.3-aliyun.6.8.2.a5fa5dbd

December 5, 2024

Priority-based scheduling of custom elastic resources

  • Added support for defining PodAnnotations in a unit.

v1.30.3-aliyun.6.8.1.6dc0fd75

December 2, 2024

  • Optimized the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.30.3-aliyun.6.7.1.d992180a

November 6, 2024

  • Priority-based scheduling of custom elastic resources

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed an issue that might occur when the ECS instance type changes.

v1.30.3-aliyun.6.7.0.da474ec5

November 4, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even when ElasticQuotaTree was not present.

  • Priority-based scheduling of custom elastic resources

    • Added support for ACS-type units.

v1.30.3-aliyun.6.6.4.b8940a30

October 22, 2024

  • Fixed an issue where PodTopologySpread occasionally caused an invalid score.

v1.30.3-aliyun.6.6.3.994ade8a

October 18, 2024

  • Optimized the event messages for Coscheduling. The number of Coscheduling failures is now included in the event messages.

  • Optimized the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.30.3-aliyun.6.6.2.0be67202

September 23, 2024

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.30.3-aliyun.6.6.1.d98352c6

September 11, 2024

  • Added support for preemption in network topology-aware scheduling.

  • SlurmOperator

    • Added support for hybrid scheduling in Kubernetes & Slurm clusters.

  • Coscheduling

    • Added support for the latest CRD version from the community.

v1.30.3-aliyun.6.5.6.fe7bc1d5

August 20, 2024

Fixed the abnormal PodAffinity/PodAntiAffinity scheduling issue introduced in v1.30.1-aliyun.6.5.1.5dad3be8.

v1.30.3-aliyun.6.5.5.8b10ee7c

August 1, 2024

  • Rebased to community version v1.30.3.

v1.30.1-aliyun.6.5.5.fcac2bdf

August 1, 2024

  • CapacityScheduling

    • Fixed a quota calculation error that might occur when Coscheduling and CapacityScheduling were used together.

  • GPUShare

    • Fixed an error in calculating the remaining resources of a computing power scheduling node.

  • Priority-based scheduling of custom elastic resources

    • Optimized the node scale-out behavior when ResourcePolicy and ClusterAutoscaler were used together. Nodes are no longer scaled out when the pods in all units have reached their Max value.

v1.30.1-aliyun.6.5.4.fcac2bdf

July 22, 2024

  • Coscheduling

    • Fixed a quota statistics error when using ECI.

  • Fixed the occasional "xxx is in cache, so can't be assumed" issue.

v1.30.1-aliyun.6.5.3.9adaeb31

July 10, 2024

Fixed the issue where pods were in the Pending state for a long time, which was introduced in v1.30.1-aliyun.6.5.1.5dad3be8.

v1.30.1-aliyun.6.5.1.5dad3be8

June 27, 2024

  • Coscheduling

    • Optimized the scheduling performance of Coscheduling.

  • Added support for sequential pod scheduling.

  • Added support for declaring equivalence classes to improve scheduling performance.

  • Optimized the performance of existing scheduler plugins using PreEnqueue.

v1.30.1-aliyun.6.4.7.6643d15f

May 31, 2024

  • Supported all previous features in ACK clusters of version 1.30.

Version 1.28 change history

Version number

Change date

Changes

v1.28.12-apsara-6.11.5.db9be0f5

October 20, 2025

  • Bug fixes:

    • Fixed an issue where using ACS with the `alibabacloud.com/acs: "true"` label or using ECI with the `alibabacloud.com/eci: "true"` label did not take effect.

v1.28.12-apsara-6.11.4.a48c5b6c

September 15, 2025

  • Bug fixes:

    • Fixed a scheduling issue that occurred when multiple containers in a single pod requested nvidia.com/gpu.

    • Fixed an issue where the scheduler might crash when many concurrent requests were made for ACS computing power.

v1.28.12-apsara-6.11.3.1a06b13e

September 9, 2025

  • New features:

    • ElasticQuotaTree now supports using the alibabacloud.com/ignore-empty-resource annotation to ignore resource limits that are not declared in a quota.

v1.28.12-aliyun-6.11.1.f23c663c

July 25, 2025

v1.28.12-aliyun-6.11.0.4003ef92

July 18, 2025

  • Optimized the scheduling determinism of GOAT to prevent determinism from failing due to concurrent NodeReady states during pod scheduling.

  • Fixed an issue where the pod count for a gang was incorrect when the PodGroup CR was deleted and recreated while scheduled pods existed.

  • Fixed an issue in the ElasticQuota preemption policy where pods with the same policy might be incorrectly preempted, and preemption within the same quota might occur when resource usage did not reach the Min value.

  • Fixed an issue in remaining IP-aware scheduling where the scheduler did not correctly prevent pods from being scheduled to a node when the node had insufficient IP addresses.

  • Fixed an issue where the TimeoutOrExceedMax and ExceedMax policies in ResourcePolicy were invalid (introduced in version 6.9.x).

  • Fixed an issue where the MaxPod count was occasionally calculated incorrectly after elastic scaling was triggered in ResourcePolicy.

  • Added a scheduling fairness check to ElasticQuotaTree. This ensures that when a quota with unmet resource requirements has pending pods, no new pods are scheduled for quotas that have already met their resource guarantees. This feature must be enabled using the StrictFairness parameter of the plugin and is enabled by default when the preemption algorithm is set to None.

  • Added the ScheduleAdmission feature. The scheduler does not schedule pods that have the alibabacloud.com/schedule-admission annotation.

  • The scheduler now supports pods with the following three types of labels: alibabacloud.com/eci=true, alibabacloud.com/acs=true, and eci=true. For these pods, the scheduler checks only the VolumeBind, VolumeRestrictions, VolumeZone, and virtual node scheduling-related plugins (ServerlessGateway, ServerlessScheduling, and ServerlessBinder). If a pod does not have a PVC-type mount, the scheduler skips all checks and passes the pod directly to the virtual node for processing.

  • Added a security check for ResourcePolicy scheduling. If a unit might patch pod labels and the label might affect the MatchLabels of a ReplicaSet or StatefulSet, the unit is skipped.

v1.28.12-aliyun-6.9.4.206fc5f8

June 4, 2025

  • Fixed an issue where InterPodAffinity and PodTopologySpread might become invalid during continuous pod scheduling.

  • Fixed an issue where scheduling occasionally became abnormal when ResourcePolicy was used.

  • Fixed an issue where ElasticQuota preemption was abnormal.

v1.28.12-aliyun-6.9.3.cd73f3fe

May 14, 2025

  • Optimized the behavior of the scheduler when interacting with node pools that have auto scaling enabled.

  • Fixed an issue where the pod count was incorrect in ResourcePolicy for priority-based scheduling of custom elastic resources.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless computing power.

v1.28.12-aliyun-6.8.6.5f05e0ac

April 2, 2025

  • Fixed an issue in ACK serverless clusters where cloud disks of the WaitForFirstConsumer type were not created using the CSI plugin.

v1.28.12-aliyun-6.9.0.6a13fa65

February 28, 2025

  • Added support for node remaining IP-aware scheduling.

  • Added a plugin to support resource checks before Kube Queue tasks are dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.28.12-aliyun-6.8.5.b6aef0d1

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max value was invalid after declaring PodLabels in priority-based scheduling of custom elastic resources.

v1.28.12-aliyun-6.8.4.b27c0009

January 2, 2025

  • Priority-based scheduling of custom elastic resources:

    • Added support for ACS GPU.

    • Fixed an issue where ECI instances might leak when PVCs were used in an ACK serverless cluster.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.28.12-aliyun-6.8.3.70c756e1

December 16, 2024

Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.

v1.28.12-aliyun-6.8.2.9a307479

December 5, 2024

Priority-based scheduling of custom elastic resources

  • Added support for defining PodAnnotations in a unit.

v1.28.12-aliyun-6.8.1.db6cdeb8

December 2, 2024

  • Optimized the performance of network topology-aware scheduling.

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.28.12-aliyun-6.7.1.44345748

November 6, 2024

  • Priority-based scheduling of custom elastic resources

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed an issue that might occur when the ECS instance type changes.

v1.28.12-aliyun-6.7.0.b97fca02

November 4, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even when ElasticQuotaTree was not present.

  • Priority-based scheduling of custom elastic resources

    • Added support for ACS-type units.

v1.28.12-aliyun-6.6.4.e535a698

October 22, 2024

  • Fixed an issue where PodTopologySpread occasionally caused an invalid score.

v1.28.12-aliyun-6.6.3.188f750b

October 11, 2024

  • Optimized the event messages for Coscheduling. The number of Coscheduling failures is now included in the event messages.

  • Optimized the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.28.12-aliyun-6.6.2.054ec1f5

September 23, 2024

  • Network topology-aware scheduling

    • Fixed an issue where pods could not be scheduled after preemption in network topology-aware scheduling.

  • NUMA topology-aware scheduling

    • Fixed an issue where NUMA topology-aware scheduling did not take effect.

v1.28.12-aliyun-6.6.1.348b251d

September 11, 2024

  • Added support for preemption in network topology-aware scheduling.

  • SlurmOperator

    • Added support for hybrid scheduling in Kubernetes & Slurm clusters.

v1.28.12-aliyun-6.5.4.79e08301

August 20, 2024

Fixed the abnormal PodAffinity/PodAntiAffinity scheduling issue introduced in v1.28.3-aliyun-6.5.1.364d020b.

v1.28.12-aliyun-6.5.3.aefde017

August 1, 2024

  • Rebased to community version v1.28.12.

v1.28.3-aliyun-6.5.3.79e08301

August 1, 2024

  • CapacityScheduling

    • Fixed a quota calculation error that might occur when Coscheduling and CapacityScheduling were used together.

  • GPUShare

    • Fixed an error in calculating the remaining resources of a computing power scheduling node.

  • Priority-based scheduling of custom elastic resources

    • Optimized the node scale-out behavior when ResourcePolicy and ClusterAutoscaler were used together. Nodes are no longer scaled out when the pods in all units have reached their Max value.

v1.28.3-aliyun-6.5.2.7ff57682

July 22, 2024

  • Coscheduling

    • Fixed a quota statistics error when using ECI.

  • Fixed the occasional "xxx is in cache, so can't be assumed" issue.

  • Fixed the issue where pods were in the Pending state for a long time, which was introduced in v1.28.3-aliyun-6.5.1.364d020b.

v1.28.3-aliyun-6.5.1.364d020b

June 27, 2024

  • Coscheduling

    • Optimized the scheduling speed of Coscheduling.

  • Added support for sequential pod scheduling.

  • Added support for declaring equivalence classes to improve scheduling performance.

  • Optimized the performance of existing scheduler plugins using PreEnqueue.

v1.28.3-aliyun-6.4.7.0f47500a

May 24, 2024

  • Network topology-aware scheduling

    • Fixed an issue where network topology-aware scheduling occasionally failed.

v1.28.3-aliyun-6.4.6.f32dc398

May 16, 2024

  • Shared GPU scheduling

    • Fixed an issue in LINGJUN clusters where GPU scheduling became abnormal after the ack.node.gpu.schedule label of a node was changed from egpu to default.

  • CapacityScheduling

    • Fixed the occasional error message: running AddPod on PreFilter plugin.

  • Elastic scheduling

    • Added a `wait for eci provisioning` event that is generated when an ECI instance is created using alibabacloud.com/burst-resource.

v1.28.3-aliyun-6.4.5.a8b4a599

May 9, 2024

v1.28.3-aliyun-6.4.3.f57771d7

March 18, 2024

  • Shared GPU scheduling

    • Added support for submitting a ConfigMap to specify card isolation.

  • Priority-based scheduling of custom elastic resources

    • Added support for the elastic resource type.

v1.28.3-aliyun-6.4.2.25bc61fb

March 1, 2024

Disabled the SchedulerQueueingHints feature by default. For more information, see Pull Request #122291.

v1.28.3-aliyun-6.4.1.c7db7450

February 21, 2024

  • Added support for NUMA joint allocation.

  • Priority-based scheduling of custom elastic resources

    • Added support for waiting between units.

  • Fixed an issue in remaining IP-aware scheduling where the number of schedulable pods was reduced due to an incorrect count of remaining IP addresses.

v1.28.3-aliyun-6.3.1ab2185e

January 10, 2024

  • Priority-based scheduling of custom elastic resources

    • Fixed an issue where ECI zone affinity and discretization did not take effect when custom elastic resource priority scheduling was used.

  • CPU topology-aware scheduling

    • Prevented the same CPU core from being repeatedly allocated to a single pod, which caused the pod to fail to start on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when the value of the alibabacloud.com/burst-resource label was not `eci` or `eci_only`.
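The alibabacloud.com/burst-resource label checked in the fix above is set on the pod itself. The following sketch shows where it goes; only the label key and the `eci`/`eci_only` values come from this changelog, and the pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-example                         # hypothetical name
  labels:
    alibabacloud.com/burst-resource: "eci"    # or "eci_only"; other values no longer route to ECI
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest    # placeholder image
```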

v1.28.3-aliyun-6.2.84d57ad9

December 21, 2023

Added support for MatchLabelKeys in priority-based scheduling of custom elastic resources to automatically group different versions during application releases.

v1.28.3-aliyun-6.1.ac950aa0

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. This feature supports counting only the resources of pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Priority-based scheduling of custom elastic resources

    • Added support for updating the unit and node labels of a ResourcePolicy. After an update, the Deletion-Cost of the pod is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This feature supports attempting pod preemption between units.

  • GPUShare

    • Optimized the GPUShare scheduling speed by reducing the P99 scheduling latency of the Filter plugin from milliseconds to microseconds.
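The quota annotation described under CapacityScheduling above is attached to the pod. A sketch follows; only the annotation key is taken from this changelog, and the quota name is hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-example                                      # hypothetical name
  annotations:
    quota.scheduling.alibabacloud.com/name: "team-a-quota" # hypothetical quota in the ElasticQuotaTree
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest                 # placeholder image
```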

v1.28.3-aliyun-5.8-89c55520

October 28, 2023

Supported all previous features in ACK clusters of version 1.28.

Version 1.26 change history

Version number

Change date

Changes

v1.26.3-aliyun-6.8.7.5a563072

November 27, 2025

Fixed a scheduling issue caused by NUMAAwareResource returning a score greater than 100.

v1.26.3-aliyun-6.8.7.fec3f2bc

May 14, 2025

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless computing power.

v1.26.3-aliyun-6.9.0.293e663c

February 28, 2025

  • Added support for node remaining IP-aware scheduling.

  • Added a plugin to support resource checks before Kube Queue tasks are dequeued.

  • Added support for switching the preemption algorithm implementation through component configuration.

v1.26.3-aliyun-6.8.5.7838feba

February 19, 2025

  • Fixed an issue where disks might be repeatedly created when using ECI or ACS.

  • Fixed an issue where the Max value was invalid after declaring PodLabels in priority-based scheduling of custom elastic resources.

v1.26.3-aliyun-6.8.4.4b180111

January 2, 2025

  • Priority-based scheduling of custom elastic resources:

    • Added support for ACS GPU.

    • Fixed an issue where ECI instances might leak when PVCs were used in an ACK serverless cluster.

  • CapacityScheduling:

    • Fixed an issue where ElasticQuotaTree usage was incorrect in ACS resource normalization scenarios.

v1.26.3-aliyun-6.8.3.95c73e0b

December 16, 2024

Priority-based scheduling of custom elastic resources: Added support for multiple ACS-type units.

v1.26.3-aliyun-6.8.2.9c9fa19f

December 5, 2024

Priority-based scheduling of custom elastic resources

  • Added support for defining PodAnnotations in a unit.

v1.26.3-aliyun-6.8.1.a12db674

December 2, 2024

  • Fixed an issue where ECI pods might be scheduled back to ECS nodes for execution.

  • Load-aware scheduling no longer restricts DaemonSet pods during scheduling.

v1.26.3-aliyun-6.7.1.d466c692

November 6, 2024

  • Priority-based scheduling of custom elastic resources

    • Added support for detecting the number of pods that trigger elastic scaling.

    • The `resource: elastic` field in a unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • CPU topology-aware scheduling

    • Fixed an issue that might occur when the ECS instance type changes.

v1.26.3-aliyun-6.7.0.9c293fb7

November 4, 2024

  • CapacityScheduling

    • Fixed an issue where elastic quota preemption was performed even when ElasticQuotaTree was not present.

  • Priority-based scheduling of custom elastic resources

    • Added support for ACS-type units.

v1.26.3-aliyun-6.6.4.7a8f3f9d

October 22, 2024

Optimized the messages related to virtual node scheduling. Warning events are no longer sent during the virtual node scheduling process.

v1.26.3-aliyun-6.6.3.67f250fe

September 4, 2024

  • SlurmOperator

    • Optimized the scheduling performance of the plugin.

v1.26.3-aliyun-6.6.2.9ea0a6f5

August 30, 2024

  • InterPodAffinity

    • Fixed an issue where removing taints from a new node did not trigger pod rescheduling.

v1.26.3-aliyun-6.6.1.605b8a4f

July 31, 2024

  • SlurmOperator

    • Added support for hybrid scheduling in Kubernetes & Slurm clusters.

  • Priority-based scheduling of custom elastic resources

    • Optimized the product feature to avoid unnecessary node scale-out when used with node pools that have auto scaling enabled.

v1.26.3-aliyun-6.4.7.2a77d106

June 27, 2024

  • Coscheduling

    • Optimized Coscheduling speed.

v1.26.3-aliyun-6.4.6.78cacfb4

May 16, 2024

  • CapacityScheduling

    • Fixed the occasional error message: running AddPod on PreFilter plugin.

  • Elastic scheduling

    • Added a `wait for eci provisioning` event that is generated when an ECI instance is created using alibabacloud.com/burst-resource.

v1.26.3-aliyun-6.4.5.7f36e9b3

May 9, 2024

v1.26.3-aliyun-6.4.3.e7de0a1e

March 18, 2024

  • Shared GPU scheduling

    • Added support for submitting a ConfigMap to specify card isolation.

  • Priority-based scheduling of custom elastic resources

    • Added support for the elastic resource type.

v1.26.3-aliyun-6.4.1.d24bc3c3

February 21, 2024

  • Optimized the scoring of virtual nodes by the NodeResourceFit plugin to prevent a virtual node from always receiving a score of 0. This allows Preferred-type NodeAffinity to correctly prioritize scheduling on ECS nodes.

  • Added support for NUMA joint allocation.

  • Priority-based scheduling of custom elastic resources

    • Added support for waiting between units.

  • Fixed an issue in remaining IP-aware scheduling where the number of schedulable pods was reduced due to an incorrect count of remaining IP addresses.

v1.26.3-aliyun-6.3.33fdc082

January 10, 2024

  • Priority-based scheduling of custom elastic resources

    • Fixed an issue where ECI zone affinity and discretization did not take effect when custom elastic resource priority scheduling was used.

  • CPU topology-aware scheduling

    • Prevented the same CPU core from being repeatedly allocated to a single pod, which caused the pod to fail to start on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when the value of the alibabacloud.com/burst-resource label was not `eci` or `eci_only`.

  • CapacityScheduling

    • Automatically enabled the job preemption feature in ACK LINGJUN clusters.

v1.26.3-aliyun-6.2.d9c15270

December 21, 2023

Added support for MatchLabelKeys in priority-based scheduling of custom elastic resources to automatically group different versions during application releases.

v1.26.3-aliyun-6.1.a40b0eef

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. This feature supports counting only the resources of pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Priority-based scheduling of custom elastic resources

    • Added an update feature. This feature supports updating the unit of a ResourcePolicy and the label of a node. After an update, the Deletion-Cost of the pod is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This feature supports attempting pod preemption between units.

  • GPUShare

    • Optimized the GPUShare scheduling speed by reducing the P99 scheduling latency of the Filter plugin from milliseconds to microseconds.
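
    The quota annotation described above can be attached to a pod as follows. The annotation key is from the release note; the pod name, quota name, and image are illustrative.

    ```yaml
    # Hypothetical pod whose resource usage is counted against the
    # elastic quota named "team-a".
    apiVersion: v1
    kind: Pod
    metadata:
      name: quota-demo
      annotations:
        quota.scheduling.alibabacloud.com/name: "team-a"
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
    ```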

v1.26.3-aliyun-5.9-cd4f2cc3

November 16, 2023

  • Optimized the display of reasons for scheduling failures due to unsatisfied cloud disk types.

v1.26.3-aliyun-5.8-a1482f93

October 16, 2023

  • Added support for Windows node scheduling.

  • Optimized the Coscheduling speed when handling simultaneous scheduling of multiple tasks to reduce task blocking.

v1.26.3-aliyun-5.7-2f57d3ff

September 20, 2023

  • Fixed an issue where GPUShare occasionally failed to admit pods during scheduling.

  • Added a plugin to the scheduler that is aware of remaining IP addresses on nodes. Pods are no longer scheduled to nodes that have no remaining IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. This plugin supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of ElasticQuotaTree at a frequency of one second.

v1.26.3-aliyun-5.5-8b98a1cc

July 5, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Optimized the user experience when using Coscheduling with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out when some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.26.3-aliyun-5.4-21b4da4c

July 3, 2023

  • Fixed an issue where the Max property of ResourcePolicy was invalid.

  • Optimized the impact of many pending pods on scheduler performance. The scheduler throughput is now similar to when there are no pending pods, even with many pending pods in the cluster.

v1.26.3-aliyun-5.1-58a821bf

May 26, 2023

Added support for updating fields such as min-available and MatchPolicy for a PodGroup.

v1.26.3-aliyun-5.0-7b1ccc9d

May 22, 2023

  • The priority-based scheduling of custom elastic resources feature now supports declaring the maximum number of replicas in the Unit field.

  • Added support for GPU topology-aware scheduling.

v1.26.3-aliyun-4.1-a520c096

April 27, 2023

Nodes are no longer scaled out by the autoscaler when the ElasticQuota limit is exceeded or the minimum number of pods required by a gang is not met.

Version 1.24 change history

Version number

Change date

Changes

v1.24.6-aliyun-6.4.7.e7ffcda5

May 6, 2025

  • Fixed an issue where the Max count in ResourcePolicy was occasionally incorrect.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless compute.

v1.24.6-aliyun-6.5.0.37a567db (Available on whitelist)

November 4, 2024

Priority-based scheduling of custom elastic resources

  • Added support for ACS-type units.

v1.24.6-aliyun-6.4.6.c4d551a0

May 16, 2024

  • CapacityScheduling

    • Fixed an occasional error message: `running AddPod on PreFilter plugin`.

v1.24.6-aliyun-6.4.5.aab44b4a

May 9, 2024

v1.24.6-aliyun-6.4.3.742bd819

March 18, 2024

  • Shared GPU scheduling

    • Added support for submitting a ConfigMap to specify card isolation.

  • Priority-based scheduling of custom elastic resources

    • Added support for the elastic resource type.

v1.24.6-aliyun-6.4.1.14ebc575

February 21, 2024

  • The scoring of Virtual Nodes by the NodeResourceFit plugin is optimized to prevent a Virtual Node from always receiving a score of 0. This ensures that a `preferred` `NodeAffinity` rule can correctly prioritize scheduling to ECS nodes.

  • Added support for NUMA joint allocation.

  • Priority-based scheduling of custom elastic resources

    • Added support for waiting between units.

  • Fixed an issue in remaining IP-aware scheduling where the number of schedulable pods was reduced due to an incorrect count of remaining IP addresses.

v1.24.6-aliyun-6.3.548a9e59

January 10, 2024

  • Priority-based scheduling of custom elastic resources

    • Fixed an issue where ECI zone affinity and discretization did not take effect when custom elastic resource priority scheduling was used.

  • CPU topology-aware scheduling

    • Prevented the same CPU core from being repeatedly allocated to a single pod, which caused the pod to fail to start on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when the value of the alibabacloud.com/burst-resource label was not `eci` or `eci_only`.

  • CapacityScheduling

    • Automatically enabled the job preemption feature in ACK LINGJUN clusters.

v1.24.6-aliyun-6.2.0196baec

December 21, 2023

Added support for MatchLabelKeys in priority-based scheduling of custom elastic resources to automatically group different versions during application releases.

v1.24.6-aliyun-6.1.1900da95

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. This feature supports counting only the resources of pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Priority-based scheduling of custom elastic resources

    • Added an update feature. This feature supports updating the unit of a ResourcePolicy and the label of a node. After an update, the Deletion-Cost of the pod is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This feature supports attempting pod preemption between units.

  • GPUShare

    • Optimized the GPUShare scheduling speed by reducing the P99 scheduling latency of the Filter plugin from milliseconds to microseconds.

v1.24.6-aliyun-5.9-e777ab5b

November 16, 2023

  • Optimized the display of reasons for scheduling failures due to unsatisfied cloud disk types.

v1.24.6-aliyun-5.8-49fd8652

October 16, 2023

  • Added support for Windows node scheduling.

  • Optimized the Coscheduling speed when handling simultaneous scheduling of multiple tasks to reduce task blocking.

v1.24.6-aliyun-5.7-62c7302c

September 20, 2023

  • Fixed an issue where GPUShare occasionally failed to admit pods during scheduling.

v1.24.6-aliyun-5.6-2bb99440

August 31, 2023

  • Added a plugin to the scheduler that is aware of remaining IP addresses on nodes. Pods are no longer scheduled to nodes that have no remaining IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. This plugin supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of ElasticQuotaTree at a frequency of one second.

v1.24.6-aliyun-5.5-5e8aac79

July 5, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Optimized the user experience when using Coscheduling with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out when some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.24.6-aliyun-5.4-d81e785e

July 3, 2023

  • Fixed an issue where the Max property of ResourcePolicy was invalid.

  • Optimized the impact of many pending pods on scheduler performance. The scheduler throughput is now similar to when there are no pending pods, even with many pending pods in the cluster.

v1.24.6-aliyun-5.1-95d8a601

May 26, 2023

Added support for updating fields such as min-available and MatchPolicy for Coscheduling.

v1.24.6-aliyun-5.0-66224258

May 22, 2023

  • The priority-based scheduling of custom elastic resources feature now supports declaring the maximum number of replicas in the Unit field.

  • Added support for GPU topology-aware scheduling.

v1.24.6-aliyun-4.1-18d8d243

March 31, 2023

ElasticResource now supports scheduling pods to Arm VK nodes.

v1.24.6-4.0-330eb8b4-aliyun

March 1, 2023

  • GPUShare:

    • Fixed an issue where the scheduler status was incorrect when a GPU node was downgraded.

    • Fixed an issue where the GPU memory of a GPU node could not be fully allocated.

    • Added support for preempting GPU pods.

  • Coscheduling:

    • Added support for declaring a gang using the PodGroup and Koordinator APIs.

    • Added support for controlling the retry policy of a gang using MatchPolicy.

    • Added support for Gang Group.

    • Gang names must comply with DNS subdomain naming rules.

  • Custom parameters: Added support for Loadaware-related configuration parameters.

v1.24.6-3.2-4f45222b-aliyun

January 13, 2023

Fixed an issue where inaccurate GPUShare memory calculation prevented pods from using GPU memory properly.

v1.24.6-ack-3.1

November 14, 2022

  • The score feature for shared GPU scheduling is enabled by default. In previous versions, this feature was disabled by default.

  • Added support for load-aware scheduling.

v1.24.6-ack-3.0

September 27, 2022

Added support for Capacity Scheduling.

v1.24.3-ack-2.0

September 21, 2022

  • Added support for shared GPU scheduling.

  • Added support for Coscheduling.

  • Added support for ECI elastic scheduling.

  • Added support for CPU-aware scheduling.

Version 1.22 change history

Version number

Change date

Changes

v1.22.15-aliyun-6.4.5.e54fd757

May 6, 2025

  • Fixed an issue where the Max count in ResourcePolicy was occasionally incorrect.

  • Fixed an issue where a cloud disk leak might occur when a WaitForFirstConsumer-type cloud disk was used with serverless compute.

v1.22.15-aliyun-6.4.4.7fc564f8

May 16, 2024

  • CapacityScheduling

    • Fixed an occasional error message: `running AddPod on PreFilter plugin`.

v1.22.15-aliyun-6.4.3.e858447b

April 22, 2024

  • Priority-based scheduling of custom elastic resources

    • Fixed an issue where deleting a ResourcePolicy occasionally caused an abnormal status.

v1.22.15-aliyun-6.4.2.4e00a021

March 18, 2024

  • CapacityScheduling

    • Fixed an issue where preemption occasionally failed in ACK LINGJUN clusters.

  • Added support for manually blacklisting specific GPU cards in a cluster using a ConfigMap.

v1.22.15-aliyun-6.4.1.1205db85

February 29, 2024

  • Priority-based scheduling of custom elastic resources

    • Fixed an occasional concurrency conflict issue.

v1.22.15-aliyun-6.4.0.145bb899

February 28, 2024

  • CapacityScheduling

    • Fixed an issue where specifying a quota caused incorrect quota statistics.

v1.22.15-aliyun-6.3.a669ec6f

January 10, 2024

  • Priority-based scheduling of custom elastic resources

    • Fixed an issue where ECI zone affinity and discretization did not take effect when custom elastic resource priority scheduling was used.

    • Added support for MatchLabelKeys.

  • CPU topology-aware scheduling

    • Fixed an issue where the same CPU core might be repeatedly allocated to a single pod, causing the pod to fail to start on the node.

  • ECI elastic scheduling

    • Fixed an issue where pods were still scheduled to ECI when the value of the alibabacloud.com/burst-resource label was not `eci` or `eci_only`.

  • CapacityScheduling

    • Automatically enabled the job preemption feature in ACK LINGJUN clusters.

v1.22.15-aliyun-6.1.e5bf8b06

December 13, 2023

  • CapacityScheduling

    • Added a feature to specify a quota. You can specify the quota to which a pod belongs using the quota.scheduling.alibabacloud.com/name annotation on the pod.

    • Added a queue association feature. You can configure a quota to count only the resources of pods managed by Kube Queue.

    • Optimized the preemption logic. In the new version, CapacityScheduling preemption does not cause the resource usage of the preempted quota's pods to fall below the Min value, nor does it cause the resource usage of the preempting quota's pods to exceed the Min value.

  • Priority-based scheduling of custom elastic resources

    • Added an update feature. This feature supports updating the unit of a ResourcePolicy and the label of a node. After an update, the Deletion-Cost of the pod is synchronized.

    • Added IgnoreTerminatingPod. This feature supports ignoring terminating pods when counting the number of pods in a unit.

    • Added the IgnorePreviousPod option. This feature supports ignoring pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy when counting the number of pods in a unit.

    • Added the PreemptPolicy option. This feature supports attempting pod preemption between units.

  • GPUShare

    • Optimized the GPUShare scheduling speed by reducing the P99 scheduling latency of the Filter plugin from milliseconds to microseconds.

v1.22.15-aliyun-5.9-04a5e6eb

November 16, 2023

  • Optimized the display of reasons for scheduling failures due to unsatisfied cloud disk types.

v1.22.15-aliyun-5.8-29a640ae

October 16, 2023

  • Added support for Windows node scheduling.

  • Optimized the Coscheduling speed when handling simultaneous scheduling of multiple tasks to reduce task blocking.

v1.22.15-aliyun-5.7-bfcffe21

September 20, 2023

  • Fixed an issue where GPUShare occasionally failed to admit pods during scheduling.

v1.22.15-aliyun-5.6-6682b487

August 14, 2023

  • Added a plugin to the scheduler that is aware of remaining IP addresses on nodes. Pods are no longer scheduled to nodes that have no remaining IP addresses.

  • Added a topology-aware scheduling plugin to the scheduler. This plugin supports scheduling pods to the same topology domain and automatically retries across multiple topology domains.

  • The scheduler now updates the Usage and Request information of ElasticQuotaTree at a frequency of one second.

v1.22.15-aliyun-5.5-82f32f68

July 5, 2023

  • Fixed an issue where pods occasionally remained in the Pending state for a long time during Coscheduling.

  • Optimized the user experience when using PodGroup with elastic node pools. Other pods in a PodGroup no longer trigger node pool scale-out when some pods cannot be scheduled or scaled out due to incorrect node selector configurations.

v1.22.15-aliyun-5.4-3b914a05

July 3, 2023

  • Fixed an issue where the Max property of ResourcePolicy was invalid.

  • Optimized the impact of many pending pods on scheduler performance. The scheduler throughput is now similar to when there are no pending pods, even with many pending pods in the cluster.

v1.22.15-aliyun-5.1-8a479926

May 26, 2023

Added support for updating fields such as min-available and MatchPolicy for a PodGroup.

v1.22.15-aliyun-5.0-d1ab67d9

May 22, 2023

  • The priority-based scheduling of custom elastic resources feature now supports declaring the maximum number of replicas in the Unit field.

  • Added support for GPU topology-aware scheduling.

v1.22.15-aliyun-4.1-aec17f35

March 31, 2023

ElasticResource now supports scheduling pods to Arm VK nodes.

v1.22.15-aliyun-4.0-384ca5d5

March 3, 2023

  • GPUShare:

    • Fixed an issue where the scheduler status was incorrect when a GPU node was downgraded.

    • Fixed an issue where the GPU memory of a GPU node could not be fully allocated.

    • Added support for preempting GPU pods.

  • Coscheduling:

    • Added support for declaring a gang using the PodGroup and Koordinator APIs.

    • Added support for controlling the retry policy of a gang using MatchPolicy.

    • Added support for Gang Group.

    • The name of a gang must comply with DNS subdomain rules.

  • Custom parameters: Added support for Loadaware-related configuration parameters.

v1.22.15-2.1-a0512525-aliyun

January 10, 2023

Fixed an issue where inaccurate GPUShare memory calculation prevented pods from using GPU memory properly.

v1.22.15-ack-2.0

November 30, 2022

  • The scheduler now supports custom parameters.

  • Added support for load-aware scheduling.

  • Added support for elastic scheduling based on node pool priority.

  • Added support for shared GPU computing power scheduling.

v1.22.3-ack-1.1

February 27, 2022

Fixed an issue where shared GPU scheduling failed when the cluster had only one node.

v1.22.3-ack-1.0

January 4, 2022

  • Added support for CPU-aware scheduling.

  • Added support for Coscheduling.

  • Added support for Capacity Scheduling.

  • Added support for ECI elastic scheduling.

  • Added support for shared GPU scheduling.

Version 1.20 change history

Version number

Change date

Changes

v1.20.11-aliyun-10.6-f95f7336

September 22, 2023

  • Fixed an issue where quota usage was occasionally calculated incorrectly in ElasticQuotaTree.

v1.20.11-aliyun-10.3-416caa03

May 26, 2023

  • Fixed an issue where GPUShare occasionally caused cache errors in earlier versions of Kubernetes.

v1.20.11-aliyun-10.2-f4a371d3

April 27, 2023

  • ElasticResource now supports scheduling pods to Arm VK nodes.

  • Fixed a scheduling failure issue in load-aware scheduling caused by CPU usage exceeding the requested amount.

v1.20.11-aliyun-10.0-ae867721

April 3, 2023

Added support for MatchPolicy in Coscheduling.

v1.20.11-aliyun-9.2-a8f8c908

March 8, 2023

  • CapacityScheduling: Fixed an issue where the scheduler status was incorrect due to duplicate quota names.

  • Added support for cloud disk scheduling.

  • Shared GPU scheduling:

    • Fixed an issue where the scheduler status was incorrect when a GPU node was downgraded.

    • Fixed an issue where the GPU memory of a GPU node occasionally could not be fully allocated.

    • Added support for preempting GPU pods.

  • CPU topology-aware scheduling: Pods with CPU scheduling enabled are not scheduled to nodes without NUMA enabled.

  • Added support for custom parameters.

v1.20.4-ack-8.0

August 29, 2022

Fixed known bugs.

v1.20.4-ack-7.0

February 22, 2022

Added support for elastic scheduling based on node pool priority.

v1.20.4-ack-4.0

September 2, 2021

  • Added support for load-aware scheduling.

  • Added support for ECI elastic scheduling.

v1.20.4-ack-3.0

May 26, 2021

Added support for CPU-aware scheduling based on Socket and L3 cache.

v1.20.4-ack-2.0

May 14, 2021

Added support for Capacity Scheduling.

v1.20.4-ack-1.0

April 7, 2021

  • Added support for CPU-aware scheduling.

  • Added support for Coscheduling.

  • Added support for GPU topology-aware scheduling.

  • Added support for shared GPU scheduling.

Version 1.18 change history

Version number

Change date

Changes

v1.18-ack-4.0

September 2, 2021

Added support for load-aware scheduling.

v1.18-ack-3.1

June 5, 2021

Made ECI scheduling compatible with node pools.

v1.18-ack-3.0

March 12, 2021

Added support for unified scheduling of ECI and ECS.

v1.18-ack-2.0

November 30, 2020

Added support for GPU topology-aware scheduling and shared GPU scheduling.

v1.18-ack-1.0

September 24, 2020

Added support for CPU-aware scheduling and Coscheduling.

Version 1.16 change history

Version number

Change date

Changes

v1.16-ack-1.0

July 21, 2020

  • Added support for CPU-aware scheduling in Kubernetes v1.16 clusters.

  • Added support for Coscheduling in Kubernetes v1.16 clusters.