Container Service for Kubernetes:Schedule ARM or multi-architecture workloads to ARM-based virtual nodes

Last Updated: Sep 14, 2024

By default, an ACK cluster schedules all workloads to virtual nodes that use the x86 architecture. If your cluster contains both ARM-based virtual nodes and other virtual nodes, such as x86-based virtual nodes, you can configure Kubernetes scheduling to schedule ARM workloads only to ARM-based virtual nodes, or to preferably schedule multi-architecture workloads to ARM-based virtual nodes.

Prerequisites

  • Cluster:

    An ACK cluster that runs Kubernetes 1.20 or later is created. For more information, see Create an ACK managed cluster and Manually update ACK clusters.

    Note

    ARM-based Elastic Compute Service (ECS) instances are available only in certain regions and zones. Make sure that your cluster is created in one of these regions. For more information about the regions and zones in which ARM-based ECS instances are available, see Instance Types Available for Each Region.

  • Component: The ack-virtual-node component is installed and the component version is 2.9.0 or later. For more information, see ack-virtual-node.

Usage notes

If your cluster runs a Kubernetes version earlier than 1.24 and you define a nodeSelector or nodeAffinity to schedule workloads to ARM-based virtual nodes, you must also add a toleration for the kubernetes.io/arch=arm64:NoSchedule taint to the tolerations field. If your cluster runs Kubernetes 1.24 or later, the scheduler automatically recognizes the kubernetes.io/arch=arm64:NoSchedule taint of ARM-based virtual nodes, and you do not need to add the toleration.
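
For clusters that run a Kubernetes version earlier than 1.24, the required toleration looks like the following minimal sketch. Add it to the pod spec alongside your nodeSelector or nodeAffinity:

tolerations:
- key: kubernetes.io/arch
  operator: Equal
  value: arm64
  effect: NoSchedule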

Billing

Pods that run on ARM-based elastic container instances are billed based on the ECS instance type used to create the instances, not based on vCPU and memory usage.

Important

After an Elastic Container Instance-based pod is created, you can run the kubectl describe pod command to view the details of the pod. The k8s.aliyun.com/eci-instance-spec annotation indicates the ECS instance type used by the pod. The pod is billed based on this instance type.
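
For example, assuming a pod named my-arm-pod (a hypothetical name), you can check the instance type as follows:

kubectl describe pod my-arm-pod | grep eci-instance-spec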

For more information about the ECS instance types that use the ARM architecture, see the ECS instance family documentation.

Step 1: Add ARM-based virtual nodes

Before you deploy an ARM workload in your cluster, you must create an ARM-based virtual node by configuring the eci-profile ConfigMap. You can use one of the following methods to modify the ConfigMap. For more information, see Configure an eci-profile.

Use the ACK console

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Configurations > ConfigMaps.

  3. Select kube-system from the Namespace drop-down list. Find eci-profile and click Edit in the Actions column. Then, set the enableLinuxArm64Node parameter to true. Click OK.

    Note

    If all vSwitches of the cluster reside in zones that do not support ARM-based instances, create a vSwitch in a zone that supports ARM-based instances. Then, specify the ID of the vSwitch for the vSwitchIds parameter. For more information about how to create a vSwitch in a zone, see Create and manage a vSwitch.

    After you complete the configurations, wait about 30 seconds. Then, you can find the virtual node named virtual-kubelet-<zoneId>-linux-arm64 on the Nodes page.
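
    If kubectl is configured for the cluster, you can also verify the node from the command line. ARM-based virtual nodes carry the kubernetes.io/arch=arm64 label:

    kubectl get nodes -l kubernetes.io/arch=arm64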

Run the kubectl edit command

Prerequisites

The kubeconfig file of the cluster is obtained and kubectl is used to connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

Procedure

Run the following command to modify the eci-profile ConfigMap:

kubectl edit configmap eci-profile -n kube-system

  1. Set the enableLinuxArm64Node parameter to true. If the parameter does not exist, add the parameter and set it to true.

  2. Specify the vSwitchIds parameter. Make sure that at least one vSwitch specified in the vSwitchIds parameter resides in a zone that supports ARM-based instances, as shown in the sample ConfigMap after this procedure.

    Note

    If all vSwitches of the cluster reside in zones that do not support ARM-based instances, create a vSwitch in a zone that supports ARM-based instances. Then, specify the ID of the vSwitch for the vSwitchIds parameter. For more information about how to create a vSwitch in a zone, see Create and manage a vSwitch.

    After you complete the configurations, wait about 30 seconds. Then, you can find the virtual node named virtual-kubelet-<zoneId>-linux-arm64 on the Nodes page.
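
    The relevant part of the edited ConfigMap looks like the following minimal sketch. The vSwitch IDs are hypothetical placeholders; use the IDs of vSwitches in your own VPC:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: eci-profile
      namespace: kube-system
    data:
      enableLinuxArm64Node: "true"
      vSwitchIds: "vsw-aaaaaaaa,vsw-bbbbbbbb" # At least one vSwitch must reside in a zone that supports ARM-based instances.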

Step 2: Schedule workloads to ARM-based virtual nodes

Schedule ARM workloads to ARM-based virtual nodes

If your cluster contains ARM-based virtual nodes and other nodes but all of your workloads use the ARM architecture, you must schedule the workloads only to ARM-based virtual nodes. Pods that are scheduled to other nodes cannot start. By default, all ARM-based virtual nodes have the kubernetes.io/arch=arm64 label. You can configure a nodeSelector or nodeAffinity to schedule workloads to ARM-based virtual nodes.

Use nodeSelector

You can add the following nodeSelector constraint to a pod to force the pod to be scheduled to an ARM-based virtual node. With this constraint, the pod is scheduled only to nodes that have the arm64 label. All ARM-based virtual nodes in an ACK cluster have this label.

nodeSelector:
  kubernetes.io/arch: arm64 # Specify the label that is used to select an ARM-based node.

The following sample code shows how to schedule pods created by a Deployment to an ARM-based virtual node.

YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: only-arm
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64 # Specify the label that is used to select an ARM-based node. 
      containers:
      - name: nginx
        image: nginx
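
To try the example, save the manifest to a file (the file name below is arbitrary), apply it, and check which node the pod is scheduled to:

kubectl apply -f only-arm.yaml
kubectl get pods -l app=nginx -o wide

The NODE column should show a virtual node named virtual-kubelet-<zoneId>-linux-arm64.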

Use nodeAffinity

Prerequisites

Virtual node scheduling is enabled for the cluster, and the Kubernetes version and component version of the cluster meet the requirements.

Procedure

You can add the following constraint to a pod to schedule the pod to an ARM-based node based on node affinity. After this constraint is added, the pod can be scheduled only to nodes that have the kubernetes.io/arch=arm64 label.

When the pod spec contains this constraint, the scheduler automatically tolerates the kubernetes.io/arch=arm64:NoSchedule taint.

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - arm64

The following sample code shows how to schedule pods created by a Deployment to an ARM-based virtual node by using node affinity.

YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: only-arm
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
      containers:
      - name: nginx
        image: nginx

Schedule multi-architecture workloads to ARM-based virtual nodes

Prerequisites

Virtual node scheduling is enabled for the cluster, and the Kubernetes version and component version of the cluster meet the requirements.

Procedure

By default, workloads in an ACK cluster are scheduled to x86-based virtual nodes, and pods remain in the Pending state when x86-based virtual nodes are insufficient. If you use a multi-architecture image, such as an image that supports both the x86 and ARM architectures, you can schedule pods across both x86-based and ARM-based nodes.

For example, you can configure node affinity to preferably schedule workloads to ARM-based or x86-based virtual nodes. If the preferred type of virtual node is insufficient, the scheduler attempts to schedule workloads to the other type of virtual node.

affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - arm64

Preferably schedule workloads to ARM-based virtual nodes

The following sample code shows how to preferably schedule workloads to ARM-based virtual nodes.

YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-prefer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      tolerations:
      # Tolerate the taint of ARM-based virtual nodes.
      - key: kubernetes.io/arch
        operator: Equal
        value: arm64
        effect: NoSchedule
      # Preferably schedule the workload to an ARM-based node.
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
      containers:
      - name: my-container
        image: nginx

Preferably schedule workloads to x86-based virtual nodes

The following sample code shows how to preferably schedule workloads to x86-based virtual nodes.

YAML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: amd-prefer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      tolerations:
      # Tolerate the taint of ARM-based virtual nodes.
      - key: kubernetes.io/arch
        operator: Equal
        value: arm64
        effect: NoSchedule
      # Preferably schedule the workload to an x86-based virtual node.
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - name: my-container
        image: nginx

FAQ

Why are pods scheduled to x86-based ECS instances after I configure nodeAffinity to preferably schedule pods to ARM-based nodes?

By default, the cluster scheduler preferably schedules workloads to ECS instances and schedules workloads to virtual nodes only when the resources of ECS instances are insufficient. If you do not modify the weights of the scheduler scoring plug-ins, pods may be scheduled to x86-based ECS instances even if the cluster has sufficient resources and nodeAffinity is configured to preferably schedule the pods to ARM-based nodes. The nodeAffinity configurations in this topic control only the priority between ARM-based and x86-based virtual nodes; the priority between virtual nodes and ECS instances cannot be guaranteed.

Can I use ARM-based preemptible instances?

Yes, ARM-based preemptible instances are available. For more information, see Use preemptible instances.

How do I configure a network that supports ARM-based virtual nodes after I create a cluster?

After you create an ACK cluster in a zone that supports ARM-based instances, you can modify the vSwitchIds parameter in the eci-profile ConfigMap to select vSwitches that reside in the zone. This ensures that the virtual nodes you create support the ARM architecture.
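
For example, the following command updates the vSwitchIds parameter in place. This is a minimal sketch; the vSwitch IDs are hypothetical placeholders:

kubectl patch configmap eci-profile -n kube-system \
  --type merge -p '{"data":{"vSwitchIds":"vsw-aaaaaaaa,vsw-bbbbbbbb"}}'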

What are the limits of using ARM-based nodes in an ACK cluster?

Components that are displayed on the Marketplace page of the ACK console do not support the ARM architecture. Only the following components support the ARM architecture:

  • Key components

  • Logging and monitoring components

  • Volume components

  • Network components
