Container Service for Kubernetes:kube-scheduler

Last Updated: Nov 14, 2024

kube-scheduler is a control plane component that assigns pods to nodes that satisfy the pods' resource requests and scheduling constraints.

Introduction

kube-scheduler

kube-scheduler selects a valid node for each pod in the scheduling queue based on the resource requests of the pod and the allocatable resources on each node. It then ranks the valid nodes and binds the pod to the most suitable one. By default, kube-scheduler spreads pods across nodes based on pod requests. For more information, see kube-scheduler.
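For example, the following minimal pod manifest illustrates the inputs that kube-scheduler evaluates: it filters and scores nodes against the resources.requests fields, not the limits. The pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: request-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      requests:
        cpu: 500m      # the pod fits only on nodes with at least 0.5 vCPU allocatable
        memory: 512Mi  # and at least 512 MiB of allocatable memory
      limits:
        cpu: "1"
        memory: 1Gi
```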

Filter and score plug-ins

The Kubernetes scheduling framework splits the scheduling logic into plug-ins, which makes the scheduler extensible. Filter plug-ins rule out nodes that cannot run a specific pod. Score plug-ins then run scoring algorithms to assign a score to each node that passes the filtering phase. The score indicates how suitable a node is for running the pod.
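In open source Kubernetes, the enabled plug-ins and their score weights are declared in a KubeSchedulerConfiguration. ACK provisions kube-scheduler with the plug-in sets listed below, so you normally do not write this file yourself; the following upstream-style snippet is only a sketch of how filter and score plug-ins are wired together, using standard upstream plug-in names.

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  plugins:
    filter:
      enabled:
      - name: NodeUnschedulable            # filter plug-ins veto unsuitable nodes
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 2                          # weights scale each plug-in's score
      - name: NodeAffinity
        weight: 1
```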

The following list shows the filter and score plug-ins that are enabled by default, and their default weights, in each version of kube-scheduler.

v1.30.1-aliyun.6.5.4.fcac2bdf

  • Filter plug-ins enabled by default in Kubernetes: same as those listed in Filter plug-ins enabled by default in Kubernetes v1.30.1.

  • Filter plug-ins enabled by default in Container Service for Kubernetes (ACK): NodeNUMAResource, topologymanager, EciPodTopologySpread, ipawarescheduling, BatchResourceFit, PreferredNode, gpushare, NetworkTopology, CapacityScheduling, elasticresource, resourcepolicy, gputopology, ECIBinderV1, loadawarescheduling, EciScheduling.

  • Score plug-ins enabled by default in Kubernetes: same as those listed in Score plug-ins enabled by default in Kubernetes v1.30.1.

  • Score plug-ins enabled by default in ACK, with default weights in parentheses: NodeNUMAResource (1), ipawarescheduling (1), gpuNUMAJointAllocation (1), PreferredNode (10000), gpushare (20000), gputopology (1), numa (1), EciScheduling (2), NodeAffinity (2), elasticresource (1000000), resourcepolicy (1000000), NodeBEResourceLeastAllocated (1), loadawarescheduling (10).

v1.28.3-aliyun-6.5.2.7ff57682

  • Filter plug-ins enabled by default in Kubernetes: same as those listed in Filter plug-ins enabled by default in Kubernetes v1.28.3.

  • Filter plug-ins enabled by default in ACK: NodeNUMAResource, topologymanager, EciPodTopologySpread, ipawarescheduling, BatchResourceFit, PreferredNode, gpushare, NetworkTopology, CapacityScheduling, elasticresource, resourcepolicy, gputopology, ECIBinderV1, loadawarescheduling, EciScheduling.

  • Score plug-ins enabled by default in Kubernetes: same as those listed in Score plug-ins enabled by default in Kubernetes v1.28.3.

  • Score plug-ins enabled by default in ACK, with default weights in parentheses: NodeNUMAResource (1), ipawarescheduling (1), gpuNUMAJointAllocation (1), PreferredNode (10000), gpushare (20000), gputopology (1), numa (1), EciScheduling (2), NodeAffinity (2), elasticresource (1000000), resourcepolicy (1000000), NodeBEResourceLeastAllocated (1), loadawarescheduling (10).

v1.26.3-aliyun-6.6.1.605b8a4f

  • Filter plug-ins enabled by default in Kubernetes: same as those listed in Filter plug-ins enabled by default in Kubernetes v1.26.3.

  • Filter plug-ins enabled by default in ACK: NodeNUMAResource, topologymanager, EciPodTopologySpread, ipawarescheduling, BatchResourceFit, PreferredNode, gpushare, NetworkTopology, CapacityScheduling, elasticresource, resourcepolicy, gputopology, ECIBinderV1, loadawarescheduling, EciScheduling.

  • Score plug-ins enabled by default in Kubernetes: same as those listed in Score plug-ins enabled by default in Kubernetes v1.26.3.

  • Score plug-ins enabled by default in ACK, with default weights in parentheses: NodeNUMAResource (1), ipawarescheduling (1), gpuNUMAJointAllocation (1), PreferredNode (10000), gpushare (20000), gputopology (1), numa (1), EciScheduling (2), NodeAffinity (2), elasticresource (1000000), resourcepolicy (1000000), NodeBEResourceLeastAllocated (1), loadawarescheduling (10).

Features of the filter and score plug-ins

The following list describes each ACK plug-in and its reference topic.

  • NodeNUMAResource: Manages topology-aware CPU scheduling. See Enable topology-aware CPU scheduling.

  • topologymanager: Manages the allocation of non-uniform memory access (NUMA) resources on nodes. See Enable topology-aware NUMA scheduling.

  • EciPodTopologySpread: Enhances topology spread constraints in virtual node-based pod scheduling scenarios. See Enable the virtual node-based pod scheduling policy for an ACK cluster.

  • ipawarescheduling: Schedules pods based on the idle IP addresses of vSwitches. See FAQ about scheduling.

  • BatchResourceFit: Enables and manages colocation. See Overview of colocation.

  • PreferredNode: Reserves nodes for node pools that have node scaling enabled. See Overview of node scaling.

  • gpushare: Manages GPU sharing. See GPU sharing.

  • NetworkTopology: Manages network topology-aware scheduling. See Topology-aware scheduling.

  • CapacityScheduling: Manages the capacity scheduling feature. See Work with capacity scheduling.

  • elasticresource: Manages Elastic Container Instance-based scheduling. See Use Elastic Container Instance-based scheduling.

  • resourcepolicy: Manages custom elastic resource scheduling. See Configure priority-based resource scheduling.

  • gputopology: Manages topology-aware GPU scheduling. See Overview of topology-aware GPU scheduling.

  • ECIBinderV1: Binds pods to virtual nodes in Elastic Container Instance-based pod scheduling scenarios. See Schedule pods to elastic container instances that are deployed as virtual nodes.

  • loadawarescheduling: Manages load-aware scheduling. See Use load-aware scheduling.

  • EciScheduling: Manages virtual node-based pod scheduling. See Enable the virtual node-based pod scheduling policy for an ACK cluster.
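As a concrete example of one of these plug-ins, gpushare schedules pods that request a slice of GPU memory instead of a whole GPU. The following sketch assumes the aliyun.com/gpu-mem extended resource described in the GPU sharing documentation; the pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpushare-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9   # placeholder image
    resources:
      limits:
        aliyun.com/gpu-mem: 3          # request 3 GiB of memory on a shared GPU
```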

Usage notes

kube-scheduler is automatically installed in a Kubernetes cluster and works without additional configuration. We recommend that you update kube-scheduler to the latest version at the earliest opportunity to get the latest features and bug fixes. To update kube-scheduler, log on to the ACK console, click the cluster that you want to manage, and then choose Operations > Add-ons.

Release notes

Release notes for v1.31

v1.31.0-aliyun.6.7.1.1943173f

2024-11-06

  • Custom priority-based resource scheduling

    • The maximum number of pods that can trigger scaling can be configured. This prevents excessive node scaling by limiting the number of pods that initiate scale-out activities.

    • resource: elastic in Unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead. See the sketch after this entry.

  • Topology-aware CPU scheduling

    • The following issue is fixed: Exceptions may occur when the Elastic Compute Service (ECS) instance type changes.
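For context, these options live in the ResourcePolicy CRD that backs custom priority-based resource scheduling. The following is a minimal sketch that assumes the field layout from the priority-based resource scheduling documentation; the selector, node pool ID, and max value are placeholders, and the unit that carries the k8s.aliyun.com/resource-policy-wait-for-ecs-scaling pod label may differ in your configuration.

```yaml
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: resourcepolicy-demo
  namespace: default
spec:
  selector:
    app: nginx                # applies to pods that carry this label
  strategy: prefer            # try units in the order listed below
  units:
  - resource: ecs
    max: 10                   # placeholder cap on pods in this unit
    nodeSelector:
      alibabacloud.com/nodepool-id: np-example   # placeholder node pool ID
  - resource: eci
    podLabels:
      k8s.aliyun.com/resource-policy-wait-for-ecs-scaling: "true"  # replaces the deprecated resource: elastic unit
```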

v1.31.0-aliyun.6.7.0.740ba623

2024-11-04

  • Capacity scheduling

    • The issue that elastic quota preemption is executed even when no ElasticQuotaTree has been created is fixed. A sample ElasticQuotaTree follows this entry.

  • Custom priority-based resource scheduling

    • Support for the Alibaba Cloud Container Compute Service (ACS) type is added.
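For reference, an ElasticQuotaTree declares the min (guaranteed) and max (ceiling) quotas that capacity scheduling and preemption operate on. The following is a minimal sketch based on the capacity scheduling documentation; the singleton name and kube-system placement follow that documentation, and the child name, namespace, and quota values are placeholders.

```yaml
apiVersion: scheduling.sigs.k8s.io/v1beta1
kind: ElasticQuotaTree
metadata:
  name: elasticquotatree      # assumed singleton name from the capacity scheduling docs
  namespace: kube-system
spec:
  root:
    name: root
    max:                      # ceiling for the whole tree
      cpu: 40
      memory: 160Gi
    min:                      # guaranteed share; borrowed min can be preempted back
      cpu: 40
      memory: 160Gi
    children:
    - name: team-a
      max:
        cpu: 20
        memory: 80Gi
      min:
        cpu: 10
        memory: 40Gi
      namespaces:
      - team-a                # pods in this namespace consume team-a's quota
```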

v1.31.0-aliyun.6.6.1.5bd14ab0

2024-10-22

  • The Invalid Score error that occasionally occurs in PodTopologySpread is fixed.

  • Event notifications for Coscheduling are optimized. The number of Coscheduling failures is included.

  • Notifications for virtual node scheduling are optimized. Warning events are no longer sent during the virtual node scheduling process.

  • Network topology-aware scheduling

    • The following issue is fixed: Pods cannot be scheduled after preemption.

  • NUMA topology-aware scheduling

    • The following issue is fixed: NUMA topology-aware scheduling does not take effect.

v1.31.0-aliyun.6.6.0.ba473715

2024-09-13

All features provided by earlier versions are supported in kube-scheduler V1.31.

Release notes for v1.30

v1.30.3-aliyun.6.7.1.d992180a

2024-11-06

  • Custom priority-based resource scheduling

    • The maximum number of pods that can trigger scaling can be configured. This prevents excessive node scaling by limiting the number of pods that initiate scale-out activities.

    • resource: elastic in Unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • Topology-aware CPU scheduling

    • The following issue is fixed: Exceptions may occur when the Elastic Compute Service (ECS) instance type changes.

v1.30.3-aliyun.6.7.0.da474ec5

2024-11-04

  • Capacity scheduling

    • The issue that elastic quota preemption is executed without creating an ElasticQuotaTree is fixed.

  • Custom priority-based resource scheduling

    • Support for the Alibaba Cloud Container Compute Service (ACS) type is added.

v1.30.3-aliyun.6.6.4.b8940a30

2024-10-22

  • The Invalid Score error that occasionally occurs in PodTopologySpread is fixed.

v1.30.3-aliyun.6.6.3.994ade8a

2024-10-18

  • Event notifications for Coscheduling are optimized. The number of Coscheduling failures is included.

  • Notifications for virtual node scheduling are optimized. Warning events are no longer sent during the virtual node scheduling process.

v1.30.3-aliyun.6.6.2.0be67202

2024-09-23

  • Network topology-aware scheduling

    • The following issue is fixed: Pods cannot be scheduled after preemption.

  • NUMA topology-aware scheduling

    • The following issue is fixed: NUMA topology-aware scheduling does not take effect.

v1.30.3-aliyun.6.6.1.d98352c6

2024-09-11

  • Preemptible instances can be scheduled in network topology-aware scheduling.

  • SlurmOperator

    • Hybrid scheduling of ACK clusters with Slurm clusters is supported.

  • Coscheduling

    • The latest community edition of the custom resource definition (CRD) is supported.

v1.30.3-aliyun.6.5.6.fe7bc1d5

2024-08-20

The PodAffinity/PodAntiAffinity scheduling errors introduced in v1.30.1-aliyun.6.5.1.5dad3be8 are fixed.

v1.30.3-aliyun.6.5.5.8b10ee7c

2024-08-01

  • The scheduler is rebased onto open source Kubernetes 1.30.3.

v1.30.1-aliyun.6.5.5.fcac2bdf

2024-08-01

  • CapacityScheduling

    • The following issue is fixed: The quota may be incorrectly calculated when Coscheduling and CapacityScheduling are used at the same time.

  • GPUShare

    • The following issue is fixed: Incorrect calculation of remaining resources during computing power scheduling on nodes.

  • Custom priority-based resource scheduling

    • The scale-out activity is optimized when ResourcePolicy and ClusterAutoscaler are used at the same time. Nodes are not added if all units reach their maximum number of pods.

v1.30.1-aliyun.6.5.4.fcac2bdf

2024-07-22

  • Coscheduling

    • The following issue is fixed: Incorrect quota statistics when you use elastic container instances.

  • The "xxx is in cache, so can't be assumed" error that occasionally occurs is fixed.

v1.30.1-aliyun.6.5.3.9adaeb31

2024-07-10

The following issue induced by v1.30.1-aliyun.6.5.1.5dad3be8 is fixed: Pods remain pending for a long period of time.

v1.30.1-aliyun.6.5.1.5dad3be8

2024-06-27

  • Coscheduling

    • Coscheduling is optimized to accelerate scheduling.

  • Pod scheduling in sequence is supported.

  • Scheduling performance is enhanced when a scheduling group is specified.

  • Scheduler plug-in performance is optimized by using PreEnqueue.

v1.30.1-aliyun.6.4.7.6643d15f

2024-05-31

  • All features provided by earlier versions are supported in kube-scheduler V1.30.

Release notes for v1.28

v1.28.12-aliyun-6.7.1.44345748

2024-11-06

  • Custom priority-based resource scheduling

    • The maximum number of pods that can trigger scaling can be configured. This prevents excessive node scaling by limiting the number of pods that initiate scale-out activities.

    • resource: elastic in Unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • Topology-aware CPU scheduling

    • The following issue is fixed: Exceptions may occur when the Elastic Compute Service (ECS) instance type changes.

v1.28.12-aliyun-6.7.0.b97fca02

2024-11-04

  • Capacity scheduling

    • The issue that elastic quota preemption is executed without creating an ElasticQuotaTree is fixed.

  • Custom priority-based resource scheduling

    • Support for the Alibaba Cloud Container Compute Service (ACS) type is added.

v1.28.12-aliyun-6.6.4.e535a698

2024-10-22

The Invalid Score error that occasionally occurs in PodTopologySpread is fixed.

v1.28.12-aliyun-6.6.3.188f750b

2024-10-11

  • Event notifications for Coscheduling are optimized. The number of Coscheduling failures is included in the events.

  • Notifications for virtual node-based scheduling are optimized. Warning events are no longer sent during the scheduling process.

v1.28.12-aliyun-6.6.2.054ec1f5

2024-09-23

  • Network topology-aware scheduling

    • The following issue is fixed: Pods cannot be scheduled after preemption.

  • NUMA topology-aware scheduling

    • The following issue is fixed: NUMA topology-aware scheduling does not take effect.

v1.28.12-aliyun-6.6.1.348b251d

2024-09-11

  • Preemptible instances can be scheduled in network topology-aware scheduling.

  • SlurmOperator

    • Hybrid scheduling of ACK clusters with Slurm clusters is supported.

v1.28.12-aliyun-6.5.4.79e08301

2024-08-20

The PodAffinity/PodAntiAffinity scheduling errors introduced in the 6.5.1 release are fixed.

v1.28.12-aliyun-6.5.3.aefde017

2024-08-01

  • The scheduler is rebased onto open source Kubernetes 1.28.12.

v1.28.3-aliyun-6.5.3.79e08301

2024-08-01

  • CapacityScheduling

    • The following issue is fixed: The quota may be incorrectly calculated when Coscheduling and CapacityScheduling are used at the same time.

  • GPUShare

    • The following issue is fixed: Incorrect calculation of remaining resources during computing power scheduling on nodes.

  • Custom priority-based resource scheduling

    • The scale-out activity is optimized when ResourcePolicy and ClusterAutoscaler are used at the same time. Nodes are not added if all units reach their maximum number of pods.

v1.28.3-aliyun-6.5.2.7ff57682

2024-07-22

  • Coscheduling

    • The following issue is fixed: Incorrect quota statistics when you use elastic container instances.

  • The "xxx is in cache, so can't be assumed" error that occasionally occurs is fixed.

  • The following issue introduced in the 6.5.1 release is fixed: Pods remain pending for a long period of time.

v1.28.3-aliyun-6.5.1.364d020b

2024-06-27

  • Coscheduling

    • Coscheduling is optimized to accelerate scheduling.

  • Pod scheduling in sequence is supported.

  • Scheduling performance is enhanced when a scheduling group is specified.

  • Scheduler plug-in performance is optimized by using PreEnqueue.

v1.28.3-aliyun-6.4.7.0f47500a

2024-05-24

  • Network topology-aware scheduling

    • Occasional network topology-aware scheduling failures are fixed.

v1.28.3-aliyun-6.4.6.f32dc398

2024-05-16

  • GPU sharing

    • The following issue is fixed: GPU scheduling exceptions occur after the value of the ack.node.gpu.schedule label on a node in an ACK Lingjun cluster is changed from egpu to default.

  • CapacityScheduling

    • The "running AddPod on PreFilter plugin" error that occasionally occurs is fixed.

  • Elastic scheduling

    • The "wait for eci provisioning" event is supported. This event is generated when you use the alibabacloud.com/burst-resource label to create elastic container instances. See the sketch after this entry.
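A minimal sketch of how the label is attached. The values eci and eci_only are the ones this document mentions; the pod name and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-demo
  labels:
    alibabacloud.com/burst-resource: eci   # allow fallback to elastic container instances
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9       # placeholder image
```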

v1.28.3-aliyun-6.4.5.a8b4a599

2024-05-09

v1.28.3-aliyun-6.4.3.f57771d7

2024-03-18

  • GPU sharing

    • A ConfigMap can be submitted to isolate a specific GPU.

  • Custom priority-based resource scheduling

    • Elastic resources are supported.

v1.28.3-aliyun-6.4.2.25bc61fb

2024-03-01

The SchedulerQueueingHints feature is disabled by default. For more information, see Automated cherry pick of #122289: fix: disable SchedulerQueueingHints feature flag by default #122291.

v1.28.3-aliyun-6.4.1.c7db7450

2024-02-21

  • NUMA co-scheduling is supported.

  • Custom priority-based resource scheduling

    • Attempt wait between units is supported.

  • The following issue is fixed: Fewer pods can be scheduled due to an incorrect number of remaining IP addresses in IP-aware pod scheduling.

v1.28.3-aliyun-6.3.1ab2185e

2024-01-10

  • Custom priority-based resource scheduling

    • The following issue is fixed: When you use custom priority-based resource scheduling, the affinity and spread of Elastic Container Instance-based pods across zones do not take effect.

  • Topology-aware CPU scheduling

    • The following issue is fixed: A pod fails to start on a node because the same vCPU is assigned to the pod multiple times.

  • Elastic Container Instance-based scheduling

    • The following issue is fixed: When the alibabacloud.com/burst-resource label is used to specify a policy, resources are still scheduled to an elastic container instance even if the value of the label is not eci or eci_only.

v1.28.3-aliyun-6.2.84d57ad9

2023-12-21

MatchLabelKeys is supported for custom priority-based resource scheduling. This way, versions are automatically grouped when an application is released.
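A sketch of the idea, assuming matchLabelKeys is a top-level field of the ResourcePolicy spec (named after the upstream matchLabelKeys convention); grouping by pod-template-hash separates the pods of each ReplicaSet revision during a rollout. The selector value is a placeholder.

```yaml
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: matchlabelkeys-demo
spec:
  selector:
    app: web                  # placeholder selector
  strategy: prefer
  matchLabelKeys:
  - pod-template-hash         # count each ReplicaSet revision's pods separately
  units:
  - resource: ecs
  - resource: eci
```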

v1.28.3-aliyun-6.1.ac950aa0

2023-12-13

  • CapacityScheduling

    • The quota can be specified. You can use quota.scheduling.alibabacloud.com/name to specify the quota of a pod. See the sketch after this entry.

    • Queue association is supported. This feature allows you to collect statistics on resource usage of only Kube Queue-managed pods.

    • The preemption logic is optimized. In the new version, CapacityScheduling preemption does not cause the usage of preempted pods to be lower than the minimum value or higher than the maximum value.

  • Custom elastic resource priorities

    • Labels for updating the unit and node of ResourcePolicy are updated. After the unit and node are updated, the Deletion-Cost of the pod is updated as well.

    • IgnoreTerminatingPod is added. Pods that are being deleted can be ignored when the number of pods in a unit is counted.

    • IgnorePreviousPod is added. Pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy can be ignored when the number of pods in a unit is counted.

    • PreemptPolicy is supported to attempt pod preemption between units.

  • GPUShare

    • The GPU sharing-based scheduling is accelerated. The 99th percentile scheduling latency of the Filter plug-in is reduced from milliseconds to microseconds.
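A minimal sketch of the quota binding. It assumes the identifier is attached as a pod label; team-a is a placeholder quota name matching the ElasticQuotaTree example earlier in this topic.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-demo
  labels:
    quota.scheduling.alibabacloud.com/name: team-a   # bind the pod to the team-a elastic quota
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9                 # placeholder image
```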

v1.28.3-aliyun-5.8-89c55520

2023-10-28

All features provided by earlier versions are supported in kube-scheduler V1.28.

Release notes for v1.26

v1.26.3-aliyun-6.7.1.d466c692

2024-11-06

  • Custom priority-based resource scheduling

    • The maximum number of pods that can trigger scaling can be configured. This prevents excessive node scaling by limiting the number of pods that initiate scale-out activities.

    • resource: elastic in Unit is deprecated. Use k8s.aliyun.com/resource-policy-wait-for-ecs-scaling in PodLabels instead.

  • Topology-aware CPU scheduling

    • The following issue is fixed: Exceptions may occur when the Elastic Compute Service (ECS) instance type changes.

v1.26.3-aliyun-6.7.0.9c293fb7

2024-11-04

  • Capacity scheduling

    • The issue that elastic quota preemption is executed without creating an ElasticQuotaTree is fixed.

  • Custom priority-based resource scheduling

    • Support for the Alibaba Cloud Container Compute Service (ACS) type is added.

v1.26.3-aliyun-6.6.4.7a8f3f9d

2024-10-22

Notifications for virtual node scheduling are optimized. Warning events are no longer sent during the virtual node scheduling process.

v1.26.3-aliyun-6.6.3.67f250fe

2024-09-04

  • SlurmOperator

    • The scheduling performance of the plug-ins is optimized.

v1.26.3-aliyun-6.6.2.9ea0a6f5

2024-08-30

  • InterPodAffinity

    • The following issue is fixed: Pod rescheduling is not triggered when you remove taints from newly created nodes.

v1.26.3-aliyun-6.6.1.605b8a4f

2024-07-31

  • SlurmOperator

    • Hybrid scheduling with Kubernetes clusters and Slurm clusters is supported.

  • Custom priority-based resource scheduling

    • The feature is optimized to avoid unnecessary node scale-outs when it is used with a node pool that has auto scaling enabled.

v1.26.3-aliyun-6.4.7.2a77d106

2024-06-27

  • Coscheduling

    • Coscheduling is optimized to accelerate scheduling.

v1.26.3-aliyun-6.4.6.78cacfb4

2024-05-16

  • CapacityScheduling

    • The "running AddPod on PreFilter plugin" error that occasionally occurs is fixed.

  • Elastic scheduling

    • The "wait for eci provisioning" event is supported. This event is generated when you use the alibabacloud.com/burst-resource label to create elastic container instances.

v1.26.3-aliyun-6.4.5.7f36e9b3

2024-05-09

v1.26.3-aliyun-6.4.3.e7de0a1e

2024-03-18

  • GPU sharing

    • A ConfigMap can be submitted to isolate a specific GPU.

  • Custom priority-based resource scheduling

    • Elastic resources are supported.

v1.26.3-aliyun-6.4.1.d24bc3c3

2024-02-21

  • The scoring of the NodeResourceFit plug-in for virtual nodes is optimized. The NodeResourceFit plug-in now always gives a virtual node a score of 0, so that preferred NodeAffinity rules can preferentially schedule pods to ECS nodes.

  • NUMA co-scheduling is supported.

  • Custom priority-based resource scheduling

    • Attempt wait between units is supported.

  • The following issue is fixed: Fewer pods can be scheduled due to an incorrect number of remaining IP addresses in IP-aware pod scheduling.

v1.26.3-aliyun-6.3.33fdc082

2024-01-10

  • Custom elastic resource priorities

    • The following issue is fixed: When you use custom priority-based resource scheduling, the affinity and spread of Elastic Container Instance-based pods across zones do not take effect.

  • Topology-aware CPU scheduling

    • The following issue is fixed: A pod fails to start on a node because the same vCPU is assigned to the pod multiple times.

  • Elastic Container Instance-based scheduling

    • The following issue is fixed: When the alibabacloud.com/burst-resource label is used to specify a policy, resources are still scheduled to an elastic container instance even if the value of the label is not eci or eci_only.

  • CapacityScheduling

    • Automatic task preemption is supported in ACK Lingjun clusters.

v1.26.3-aliyun-6.2.d9c15270

2023-12-21

MatchLabelKeys is supported for custom priority-based resource scheduling. This way, versions are automatically grouped when an application is released.

v1.26.3-aliyun-6.1.a40b0eef

2023-12-13

  • CapacityScheduling

    • The quota can be specified. You can use quota.scheduling.alibabacloud.com/name to specify the quota of a pod.

    • Queue association is supported. This feature allows you to collect statistics on resource usage of only Kube Queue-managed pods.

    • The preemption logic is optimized. In the new version, CapacityScheduling preemption does not cause the usage of preempted pods to be lower than the minimum value or higher than the maximum value.

  • Custom elastic resource priorities

    • Labels for updating the unit and node of ResourcePolicy are updated. After the unit and node are updated, the Deletion-Cost of the pod is updated as well.

    • IgnoreTerminatingPod is added. Pods that are being deleted can be ignored when the number of pods in a unit is counted.

    • IgnorePreviousPod is added. Pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy can be ignored when the number of pods in a unit is counted.

    • PreemptPolicy is supported to attempt pod preemption between units.

  • GPUShare

    • The GPU sharing-based scheduling is accelerated. The 99th percentile scheduling latency of the Filter plug-in is reduced from milliseconds to microseconds.

v1.26.3-aliyun-5.9-cd4f2cc3

2023-11-16

  • The error message that is displayed when scheduling fails due to an invalid disk type is optimized.

v1.26.3-aliyun-5.8-a1482f93

2023-10-16

  • Pods can be scheduled to Windows nodes.

  • Coscheduling is optimized to accelerate the scheduling of multiple concurrent tasks and reduce blocked tasks.

v1.26.3-aliyun-5.7-2f57d3ff

2023-09-20

  • The following issue is fixed: When GPU sharing is used to schedule pods, kube-scheduler occasionally fails to admit pods.

  • A plug-in is added to kube-scheduler to detect the available IP addresses on a node. If no IP addresses are available on the node, pods are no longer scheduled to the node.

  • A topology-aware scheduling plug-in is added to kube-scheduler. This plug-in can schedule pods to the same topological domain and automatically retries scheduling on multiple topological domains.

  • kube-scheduler updates the usage and request information about the ElasticQuotaTree every second.

v1.26.3-aliyun-5.5-8b98a1cc

2023-07-05

  • The following issue is fixed: Pods occasionally remain in the Pending state for a long time when Coscheduling is used.

  • The user experience of using Coscheduling together with elastic node pools is optimized. If some pods in a PodGroup cannot be scheduled or created due to incorrect node selector configurations, the other pods in the PodGroup no longer trigger scale-out activities.

v1.26.3-aliyun-5.4-21b4da4c

2023-07-03

  • The issue that the max parameter in a ResourcePolicy does not take effect is fixed.

  • The impact of a large number of pending pods on the kube-scheduler performance is mitigated. This update brings the throughput of kube-scheduler up to a level close to that when the cluster does not contain pending pods.

v1.26.3-aliyun-5.1-58a821bf

2023-05-26

Fields such as min-available and Matchpolicy can be updated for PodGroups.
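For context, pods declare their gang through PodGroup labels, and min-available is the threshold below which no pod in the gang is scheduled. A minimal sketch using the pod-group label names from the Coscheduling documentation; the group name, threshold, and image are placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gang-member
  labels:
    pod-group.scheduling.sigs.k8s.io/name: training-job   # gang identifier (placeholder)
    pod-group.scheduling.sigs.k8s.io/min-available: "3"   # schedule only when 3 pods can run together
spec:
  containers:
  - name: worker
    image: registry.k8s.io/pause:3.9                      # placeholder image
```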

v1.26.3-aliyun-5.0-7b1ccc9d

2023-05-22

  • The maximum number of replicated pods can be specified in the Unit field when you configure priority-based resource scheduling.

  • Topology-aware GPU scheduling is supported.

v1.26.3-aliyun-4.1-a520c096

2023-04-27

Nodes are not added by cluster-autoscaler when the elastic quota is exhausted or when pods are insufficient for gang scheduling.

Release notes for v1.24

v1.24.6-aliyun-6.5.0.37a567db (Whitelist enabled)

2024-11-04

Custom priority-based resource scheduling

  • Support for the Alibaba Cloud Container Compute Service (ACS) type is added.

v1.24.6-aliyun-6.4.6.c4d551a0

2024-05-16

  • CapacityScheduling

    • The "running AddPod on PreFilter plugin" error that occasionally occurs is fixed.

v1.24.6-aliyun-6.4.5.aab44b4a

2024-05-09

v1.24.6-aliyun-6.4.3.742bd819

2024-03-18

  • GPU sharing

    • A ConfigMap can be submitted to isolate a specific GPU.

  • Custom priority-based resource scheduling

    • Elastic resources are supported.

v1.24.6-aliyun-6.4.1.14ebc575

2024-02-21

  • The scoring of the NodeResourceFit plug-in for virtual nodes is optimized. The NodeResourceFit plug-in now always gives a virtual node a score of 0, so that preferred NodeAffinity rules can preferentially schedule pods to ECS nodes.

  • NUMA co-scheduling is supported.

  • Custom priority-based resource scheduling

    • Attempt wait between units is supported.

  • The following issue is fixed: Fewer pods can be scheduled due to an incorrect number of remaining IP addresses in IP-aware pod scheduling.

v1.24.6-aliyun-6.3.548a9e59

2024-01-10

  • Custom priority-based resource scheduling

    • The following issue is fixed: When you use custom priority-based resource scheduling, the affinity and spread of Elastic Container Instance-based pods across zones do not take effect.

  • Topology-aware CPU scheduling

    • The following issue is fixed: A pod fails to start on a node because the same vCPU is assigned to the pod multiple times.

  • Elastic Container Instance-based scheduling

    • The following issue is fixed: When the alibabacloud.com/burst-resource label is used to specify a policy, resources are still scheduled to an elastic container instance even if the value of the label is not eci or eci_only.

  • CapacityScheduling

    • Automatic task preemption is supported in ACK Lingjun clusters.

v1.24.6-aliyun-6.2.0196baec

2023-12-21

MatchLabelKeys is supported for custom priority-based resource scheduling. This way, versions are automatically grouped when an application is released.

v1.24.6-aliyun-6.1.1900da95

2023-12-13

  • CapacityScheduling

    • The quota can be specified. You can use quota.scheduling.alibabacloud.com/name to specify the quota of a pod.

    • Queue association is supported. This feature allows you to collect statistics on resource usage of only Kube Queue-managed pods.

    • The preemption logic is optimized. In the new version, CapacityScheduling preemption does not cause the usage of preempted pods to be lower than the minimum value or higher than the maximum value.

  • Custom elastic resource priorities

    • Labels for updating the unit and node of ResourcePolicy are updated. After the unit and node are updated, the Deletion-Cost of the pod is updated as well.

    • IgnoreTerminatingPod is added. Pods that are being deleted can be ignored when the number of pods in a unit is counted.

    • IgnorePreviousPod is added. Pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy can be ignored when the number of pods in a unit is counted.

    • PreemptPolicy is supported to attempt pod preemption between units.

  • GPUShare

    • The GPU sharing-based scheduling is accelerated. The 99th percentile scheduling latency of the Filter plug-in is reduced from milliseconds to microseconds.

v1.24.6-aliyun-5.9-e777ab5b

2023-11-16

  • The error message that is displayed when scheduling fails due to an invalid disk type is optimized.

v1.24.6-aliyun-5.8-49fd8652

2023-10-16

  • Pods can be scheduled to Windows nodes.

  • Coscheduling is optimized to accelerate the scheduling of multiple concurrent tasks and reduce blocked tasks.

v1.24.6-aliyun-5.7-62c7302c

2023-09-20

  • The following issue is fixed: When GPU sharing is used to schedule pods, kube-scheduler occasionally fails to admit pods.

v1.24.6-aliyun-5.6-2bb99440

2023-08-31

  • A plug-in is added to kube-scheduler to detect the available IP addresses on a node. If no IP addresses are available on the node, pods are no longer scheduled to the node.

  • A topology-aware scheduling plug-in is added to kube-scheduler. This plug-in can schedule pods to the same topological domain and automatically retries scheduling on multiple topological domains.

  • kube-scheduler updates the usage and request information about the ElasticQuotaTree every second.

v1.24.6-aliyun-5.5-5e8aac79

2023-07-05

  • The following issue is fixed: Pods occasionally remain in the Pending state for a long time when Coscheduling is used.

  • The user experience of using Coscheduling together with elastic node pools is optimized. If some pods in a PodGroup cannot be scheduled or created due to incorrect node selector configurations, the other pods in the PodGroup no longer trigger scale-out activities.

v1.24.6-aliyun-5.4-d81e785e

2023-07-03

  • The issue that the max parameter in a ResourcePolicy does not take effect is fixed.

  • The impact of a large number of pending pods on the kube-scheduler performance is mitigated. This update brings the throughput of kube-scheduler up to a level close to that when the cluster does not contain pending pods.

v1.24.6-aliyun-5.1-95d8a601

2023-05-26

Fields such as min-available and Matchpolicy can be updated for PodGroups.

v1.24.6-aliyun-5.0-66224258

2023-05-22

  • The maximum number of replicated pods can be specified in the Unit field when you configure priority-based resource scheduling.

  • Topology-aware GPU scheduling is supported.

v1.24.6-aliyun-4.1-18d8d243

2023-03-31

Elastic resources can be used to schedule pods to ARM-based virtual nodes.

v1.24.6-4.0-330eb8b4-aliyun

2023-03-01

  • GPU sharing:

    • The kube-scheduler status error during GPU-accelerated node downgrades is fixed.

    • The issue that you cannot allocate all GPU memory of a GPU-accelerated node is fixed.

    • Pods on GPU-accelerated nodes can be preempted.

  • Coscheduling:

    • Gangs can be claimed by using PodGroups or calling the Koordinator API.

    • Gang scheduling retries can be controlled by claiming a matchpolicy.

    • Gang groups are supported.

    • The naming of gangs must meet the rules for DNS subdomains.

  • Custom parameters are added to support load-aware scheduling configurations.

v1.24.6-3.2-4f45222b-aliyun

2023-01-13

The issue that the GPU memory usage of pods is displayed incorrectly due to incorrect shared GPU memory information is fixed.

v1.24.6-ack-3.1

2022-11-14

  • The score feature is enabled for GPU sharing by default. The score feature is disabled by default in earlier versions.

  • Load-aware scheduling is supported.

v1.24.6-ack-3.0

2022-09-27

Capacity scheduling is supported.

v1.24.3-ack-2.0

2022-09-21

  • GPU sharing is supported.

  • Coscheduling is supported.

  • Elastic Container Instance-based scheduling is supported.

  • Intelligent CPU scheduling is supported.

Release notes for v1.22

v1.22.15-aliyun-6.4.5.08196303

2024-05-23

  • Network topology-aware scheduling

    • Occasional network topology-aware scheduling failures are fixed.

v1.22.15-aliyun-6.4.4.7fc564f8

2024-05-16

  • CapacityScheduling

    • The "running AddPod on PreFilter plugin" error that occasionally occurs is fixed.

v1.22.15-aliyun-6.4.3.e858447b

2024-04-22

  • Custom priority-based resource scheduling

    • The following issue is fixed: When you delete a ResourcePolicy, the status is occasionally abnormal.

v1.22.15-aliyun-6.4.2.4e00a021

2024-03-18

  • CapacityScheduling

    • Occasional preemption failures in ACK Lingjun clusters are fixed.

  • GPU cards in a cluster can be added to the blacklist by using ConfigMaps.

v1.22.15-aliyun-6.4.1.1205db85

2024-02-29

  • Custom priority-based resource scheduling

    • Occasional concurrency conflicts are fixed.

v1.22.15-aliyun-6.4.0.145bb899

2024-02-28

  • CapacityScheduling

    • Incorrect quota statistics due to specific quota features are fixed.

v1.22.15-aliyun-6.3.a669ec6f

2024-01-10

  • Custom priority-based resource scheduling

    • The following issue is fixed: When you use custom priority-based resource scheduling, the affinity and spread of Elastic Container Instance-based pods across zones do not take effect.

    • MatchLabelKeys is supported.

  • Topology-aware CPU scheduling

    • The following issue is fixed: A pod fails to start on a node because the same vCPU is assigned to the pod multiple times.

  • Elastic Container Instance-based scheduling

    • The following issue is fixed: When the alibabacloud.com/burst-resource label is used to specify a policy, resources are still scheduled to an elastic container instance even if the value of the label is not eci or eci_only.

  • CapacityScheduling

    • Automatic task preemption is supported in ACK Lingjun clusters.

v1.22.15-aliyun-6.1.e5bf8b06

2023-12-13

  • CapacityScheduling

    • The quota can be specified. You can use quota.scheduling.alibabacloud.com/name to specify the quota of a pod.

    • Queue association is supported. This feature allows you to collect statistics on resource usage of only Kube Queue-managed pods.

    • The preemption logic is optimized. In the new version, CapacityScheduling preemption does not cause the usage of preempted pods to be lower than the minimum value or higher than the maximum value.

  • Custom elastic resource priorities

    • Labels for updating the unit and node of ResourcePolicy are updated. After the unit and node are updated, the Deletion-Cost of the pod is updated as well.

    • IgnoreTerminatingPod is added. Pods that are being deleted can be ignored when the number of pods in a unit is counted.

    • IgnorePreviousPod is added. Pods whose CreationTimestamp is earlier than that of the associated ResourcePolicy can be ignored when the number of pods in a unit is counted.

    • PreemptPolicy is supported to attempt pod preemption between units.

  • GPUShare

    • The GPU sharing-based scheduling is accelerated. The 99th percentile scheduling latency of the Filter plug-in is reduced from milliseconds to microseconds.

v1.22.15-aliyun-5.9-04a5e6eb

2023-11-16

  • The error message that is displayed when scheduling fails due to an invalid disk type is optimized.

v1.22.15-aliyun-5.8-29a640ae

2023-10-16

  • Pods can be scheduled to Windows nodes.

  • Coscheduling is optimized to accelerate the scheduling of multiple concurrent tasks and reduce blocked tasks.

v1.22.15-aliyun-5.7-bfcffe21

2023-09-20

  • The following issue is fixed: When GPU sharing is used to schedule pods, kube-scheduler occasionally fails to admit pods.

v1.22.15-aliyun-5.6-6682b487

2023-08-14

  • A plug-in is added to kube-scheduler to detect the available IP addresses on a node. If no IP addresses are available on the node, pods are no longer scheduled to the node.

  • A topology-aware scheduling plug-in is added to kube-scheduler. This plug-in can schedule pods to the same topological domain and automatically retries scheduling on multiple topological domains.

  • kube-scheduler updates the usage and request information about the ElasticQuotaTree every second.

v1.22.15-aliyun-5.5-82f32f68

2023-07-05

  • The following issue is fixed: Pods occasionally remain in the Pending state for a long time when Coscheduling is used.

  • The user experience of PodGroups in elastic node pools is optimized. If some pods in a PodGroup cannot be scheduled or created due to incorrect node selector configurations, the other pods in the PodGroup no longer trigger scale-out activities.

v1.22.15-aliyun-5.4-3b914a05

2023-07-03

  • The issue that the max parameter in a ResourcePolicy does not take effect is fixed.

  • The impact of a large number of pending pods on the kube-scheduler performance is mitigated. This update brings the throughput of kube-scheduler up to a level close to that when the cluster does not contain pending pods.

v1.22.15-aliyun-5.1-8a479926

2023-05-26

Fields such as min-available and Matchpolicy can be updated for PodGroups.

v1.22.15-aliyun-5.0-d1ab67d9

2023-05-22

  • The maximum number of replicated pods can be specified in the Unit field when you configure priority-based resource scheduling.

  • Topology-aware GPU scheduling is supported.

v1.22.15-aliyun-4.1-aec17f35

2023-03-31

Elastic resources can be used to schedule pods to ARM-based virtual nodes.

v1.22.15-aliyun-4.0-384ca5d5

2023-03-03

  • GPU sharing:

    • The kube-scheduler status error during GPU-accelerated node downgrades is fixed.

    • The issue that you cannot allocate all GPU memory of a GPU-accelerated node is fixed.

    • Pods on GPU-accelerated nodes can be preempted.

  • Coscheduling:

    • Gangs can be claimed by using PodGroups or calling the Koordinator API.

    • Gang scheduling retries can be controlled by claiming a matchpolicy.

    • Gang groups are supported.

    • The naming of gangs must meet the rules for DNS subdomains.

  • Custom parameters are added to support load-aware scheduling configurations.

v1.22.15-2.1-a0512525-aliyun

2023-01-10

The issue that the GPU memory usage of pods is displayed incorrectly due to incorrect shared GPU memory information is fixed.

v1.22.15-ack-2.0

2022-11-30

  • Custom parameter settings are supported.

  • Load-aware scheduling is supported.

  • Priority-based scheduling is supported. You can use this feature to schedule pods to node pools based on priorities.

  • The computing power of GPUs can be shared.

v1.22.3-ack-1.1

2022-02-27

The issue that GPU sharing and scheduling do not work when the cluster contains only one node is fixed.

v1.22.3-ack-1.0

2022-01-04

  • Intelligent CPU scheduling is supported.

  • Coscheduling is supported.

  • Capacity scheduling is supported.

  • Elastic Container Instance-based scheduling is supported.

  • GPU sharing is supported.

Release notes for v1.20

v1.20.11-aliyun-10.6-f95f7336

2023-09-22

  • Occasional incorrect statistics on quota usage in ElasticQuotaTree are fixed.

v1.20.11-aliyun-10.3-416caa03

2023-05-26

  • The cache error that occasionally occurs during GPU sharing in earlier Kubernetes versions is fixed.

v1.20.11-aliyun-10.2-f4a371d3

2023-04-27

  • Elastic resources can be used to schedule pods to ARM-based virtual nodes.

  • The issue that load-aware scheduling does not work as expected when the CPU usage of pods on a node exceeds the CPU request of the pods is fixed.

v1.20.11-aliyun-10.0-ae867721

2023-04-03

The Matchpolicy field is supported by Coscheduling.

v1.20.11-aliyun-9.2-a8f8c908

2023-03-08

  • CapacityScheduling: The kube-scheduler status error caused by quotas with the same name is fixed.

  • Cloud disk scheduling is supported.

  • GPU sharing and scheduling:

    • The kube-scheduler status error during GPU-accelerated node downgrades is fixed.

    • The occasional issue that you cannot allocate all GPU memory of a GPU-accelerated node is fixed.

    • Pods on GPU-accelerated nodes can be preempted.

  • Topology-aware CPU scheduling: Pods that have CPU scheduling enabled are not scheduled to nodes that have NUMA disabled.

  • Custom parameters are added.

v1.20.4-ack-8.0

2022-08-29

Bugs are fixed.

v1.20.4-ack-7.0

2022-02-22

Priority-based scheduling is supported. You can use this feature to schedule pods to node pools based on priorities.

v1.20.4-ack-4.0

2021-09-02

  • Load-aware scheduling is supported.

  • Elastic Container Instance-based scheduling is supported.

v1.20.4-ack-3.0

2021-05-26

Intelligent CPU scheduling based on sockets and L3 cache (last level cache) is supported.

v1.20.4-ack-2.0

2021-05-14

Capacity scheduling is supported.

v1.20.4-ack-1.0

2021-04-07

  • Intelligent CPU scheduling is supported.

  • Coscheduling is supported.

  • Topology-aware GPU scheduling is supported.

  • GPU sharing is supported.

Release notes for v1.18

v1.18-ack-4.0

2021-09-02

Load-aware scheduling is supported.

v1.18-ack-3.1

2021-06-05

Node pools are supported by Elastic Container Instance-based scheduling.

v1.18-ack-3.0

2021-03-12

Scheduling based on both Elastic Container Instance and ECS is supported.

v1.18-ack-2.0

2020-11-30

Topology-aware GPU scheduling and GPU sharing are supported.

v1.18-ack-1.0

2020-09-24

Intelligent CPU scheduling and Coscheduling are supported.

Release notes for v1.16

v1.16-ack-1.0

2020-07-21

  • Intelligent CPU scheduling is supported by clusters that run Kubernetes 1.16.

  • Coscheduling is supported by clusters that run Kubernetes 1.16.