You can use a YurtAppSet to distribute an application to multiple node pools in an ACK Edge cluster. A YurtAppSet responds quickly to node pool label changes and lets you centrally manage the configurations of a workload that is distributed to multiple node pools, such as the number of pods and the application version. This topic describes how to use YurtAppSets to manage applications in ACK Edge clusters.
Background information
Traditional application deployment
In edge computing scenarios, computing nodes may be deployed across regions, and an application may need to run on nodes in different regions. Assume that you want to use Deployments to deploy an application in different regions. Traditionally, you add the same labels to nodes in the same region and create duplicate Deployments that are configured with different node selectors. This way, the system schedules the Deployments to nodes in different regions by matching node labels against the node selectors.
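For example, the following minimal sketch shows one of the near-identical Deployments that you would maintain in this mode, one per region. The region label (region: beijing) and the application name are hypothetical and are not part of the examples later in this topic.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-beijing   # one near-identical Deployment per region
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      region: beijing
  template:
    metadata:
      labels:
        app: nginx
        region: beijing
    spec:
      nodeSelector:
        region: beijing   # hypothetical label added to every node in this region
      containers:
        - name: nginx
          image: nginx:1.19.1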
As the number of regions and differentiated requirements for applications in different regions increase, application management and maintenance become more complex. The following list describes the main challenges:
Cumbersome update procedure: When you update an application, you need to manually update all Deployments in different regions. This reduces update efficiency.
Complex application maintenance: You need to manually distinguish and maintain Deployments in different regions. This leads to burdensome application O&M as the number of regions increases.
Redundant application configurations: The configurations of Deployments in different regions are highly similar, resulting in complex and error-prone configuration management.
Application deployment based on YurtAppSets
YurtAppSets are provided by ACK Edge to reduce the complexity of distributed deployment in edge computing scenarios. YurtAppSets are upper-layer abstractions that allow you to centrally manage multiple workloads. For example, you can use YurtAppSets to create, update, and delete multiple Deployments in a centralized manner.
YurtAppSets support the following features to address the disadvantages of the traditional deployment mode, including inefficient application updates, complex application maintenance, and redundant application configurations.
Unified template definition (workloadTemplate)
You can use the workloadTemplate parameter in the configurations of YurtAppSets to specify a template that is used to deploy the same application in multiple regions. This prevents duplicate application configurations and deployments and ensures efficient and consistent batch operations such as creations, updates, and deletions.
Automated deployment (nodepoolSelector)
You can use the nodepoolSelector parameter in the configurations of YurtAppSets to specify the labels that are used to select node pools. This keeps the application in sync with node pools. When you create or delete node pools, the system automatically selects matching node pools based on the nodepoolSelector parameter to deploy workloads. This simplifies O&M work.
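For example, assuming the label yurtappset.openyurt.io/type=nginx that is used in the example manifest later in this topic and a node pool named np1xxxxxx (a placeholder name), you can add the label to the NodePool resource with commands similar to the following:
# Label the node pool so that the YurtAppSet selects it through its nodepoolSelector.
kubectl label nodepool np1xxxxxx yurtappset.openyurt.io/type=nginx
# Verify the labels of your node pools.
kubectl get nodepool --show-labels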
Regionally differentiated deployment (workloadTweaks)
You can use the workloadTweaks parameter in the configurations of YurtAppSets to customize the workloads in specific regions. You do not need to manage or update each workload in the regions.
Create a YurtAppSet
If the Kubernetes version of your ACK Edge cluster is 1.26 or later, create a YurtAppSet.
If the Kubernetes version of your ACK Edge cluster is earlier than 1.26, create a UnitedDeployment.
Kubernetes 1.26 or later
Create a YurtAppSet that defines a Deployment template.
The following YAML template is an example:
apiVersion: apps.openyurt.io/v1beta1
kind: YurtAppSet
metadata:
  name: example
  namespace: default
spec:
  revisionHistoryLimit: 5
  pools:
    - np1xxxxxx
    - np2xxxxxx
  nodepoolSelector:
    matchLabels:
      yurtappset.openyurt.io/type: "nginx"
  workload:
    workloadTemplate:
      deploymentTemplate:
        metadata:
          labels:
            app: example
        spec:
          replicas: 2
          selector:
            matchLabels:
              app: example
          template:
            metadata:
              labels:
                app: example
            spec:
              containers:
                - image: nginx:1.19.1
                  imagePullPolicy: Always
                  name: nginx
    workloadTweaks:
      - pools:
          - np2xxxxxx
        tweaks:
          replicas: 3
          containerImages:
            - name: nginx
              targetImage: nginx:1.20.1
          patches:
            - path: /metadata/labels/test
              operation: add
              value: test
The following table describes the parameters in the YAML template.
Parameter | Description | Required |
spec.pools | The node pools in which the application is deployed. The value is a list of node pool names. We recommend that you use the nodepoolSelector parameter to select node pools. | No |
spec.nodepoolSelector | The label selector that is used to select node pools to deploy the application. If you specify this parameter and the spec.pools parameter at the same time, the node pools that match both parameters are selected. | No |
spec.workload.workloadTemplate | The workload template. Set the value to a Deployment template (deploymentTemplate) or a StatefulSet template (statefulSetTemplate). | Yes |
spec.workload.workloadTweaks | The custom modifications to specific workloads. | No |
spec.workload.workloadTweaks[*].pools | The node pools that host the application to which the custom modifications apply. The value is a list of node pool names. | No |
spec.workload.workloadTweaks[*].nodepoolSelector | The label selector that is used to select the node pools that host the application to which the custom modifications apply. | No |
spec.workload.workloadTweaks[*].tweaks.replicas | The number of replicated pods created for the workload after modification. | No |
spec.workload.workloadTweaks[*].tweaks.containerImages | The container images used by the workload after modification. Each entry specifies a container name and the target image. | No |
spec.workload.workloadTweaks[*].tweaks.patches | You can use this parameter to modify the fields in the workloadTemplate parameter. | No |
spec.workload.workloadTweaks[*].tweaks.patches[*].path | The path of the field in the workloadTemplate parameter. | No |
spec.workload.workloadTweaks[*].tweaks.patches[*].operation | The operation to perform on the path. Valid values: add, remove, and replace. | No |
spec.workload.workloadTweaks[*].tweaks.patches[*].value | The value after modification. This parameter takes effect only for the add and replace operations. | No |
status.conditions | The status information of the YurtAppSet, including whether a node pool is selected and the workload status. | |
status.readyWorkloads | The number of workloads whose replicated pods are all in the Ready state. | |
status.updatedWorkloads | The number of workloads whose replicated pods are all updated to the latest version. | |
status.totalWorkloads | The number of workloads managed by the YurtAppSet. |
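For example, assuming that the preceding manifest is saved as yurtappset.yaml (a hypothetical file name), you can create the YurtAppSet and check the workloads that it generates with commands similar to the following. The names of the generated Deployments are assigned by the YurtAppSet controller and typically contain the node pool name.
kubectl apply -f yurtappset.yaml
# Check the status fields described in the preceding table, such as readyWorkloads and totalWorkloads.
kubectl get yurtappset example -n default -o yaml
# One Deployment is created for each selected node pool.
kubectl get deployments -n default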
Kubernetes versions earlier than 1.26
Create a UnitedDeployment that defines a Deployment template.
The following YAML template is an example:
apiVersion: apps.openyurt.io/v1alpha1
kind: UnitedDeployment
metadata:
  name: example
  namespace: default
spec:
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: example
  workloadTemplate:
    deploymentTemplate:
      metadata:
        creationTimestamp: null
        labels:
          app: example
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: example
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: example
          spec:
            containers:
              - image: nginx:1.19.3
                imagePullPolicy: Always
                name: nginx
            dnsPolicy: ClusterFirst
            restartPolicy: Always
  topology:
    pools:
      - name: cloud
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - np4b9781c40f0e46c581b2cf2b6160****
        replicas: 2
      - name: edge
        nodeSelectorTerm:
          matchExpressions:
            - key: apps.openyurt.io/nodepool
              operator: In
              values:
                - np47832359db2e4843aa13e8b76f83****
        replicas: 2
        tolerations:
          - effect: NoSchedule
            key: apps.openyurt.io/taints
            operator: Exists
The following table describes the parameters in the YAML template.
Parameter | Description |
spec.workloadTemplate | The workload template. Valid values: deploymentTemplate and statefulSetTemplate. |
spec.topology.pools | Configurations of multiple node pools. |
spec.topology.pools[*].name | The name of the node pool. |
spec.topology.pools[*].nodeSelectorTerm | The node affinity rule used to select node pools. Set the key to apps.openyurt.io/nodepool and set the values to the IDs of the node pools. Note: You can view the ID of a node pool below the name of the node pool on the Node Pools page in the ACK console. |
spec.topology.pools[*].tolerations | Tolerations for node pools. |
spec.topology.pools[*].replicas | The number of pods to be created in each node pool. |
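Similarly, assuming that the preceding manifest is saved as uniteddeployment.yaml (a hypothetical file name), you can create the UnitedDeployment and check the Deployments that it creates:
kubectl apply -f uniteddeployment.yaml
kubectl get uniteddeployment example -n default
# One Deployment is created for each node pool in spec.topology.pools.
kubectl get deployments -n default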
Use YurtAppSets to manage applications at the edge
Update the application version: Modify the fields in the spec.workload.workloadTemplate parameter to trigger application updates. The YurtAppSet updates the templates of all workloads in all node pools accordingly. Then, the controllers of the workloads update their pods. For an example, see the sketch after this list.
Implement a canary release in a specific region: Modify the spec.workload.workloadTweaks[*].tweaks.containerImages parameter to trigger image updates for the pods in a specific region.
Scale pods in a specific region: Modify the spec.workload.workloadTweaks[*].tweaks.replicas parameter to trigger pod scaling in a specific region.
Deploy an application in a new region: Create a node pool and add a label that matches the spec.nodepoolSelector parameter in the application configurations to the node pool. The YurtAppSet detects the change and automatically deploys the workload in the new node pool. Then, add the nodes in the region to the node pool.
Bring an application offline in a specific region: Delete the node pools where the application is deployed in the region. The YurtAppSet detects the change and automatically deletes the workloads in the region.
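The following is a minimal sketch of the first operation, assuming the example YurtAppSet in this topic and a hypothetical new image tag (nginx:1.21.0). A JSON merge patch replaces the entire containers list, so include every container of the template in the patch:
# Roll out a new image version to the workloads in all selected node pools.
kubectl patch yurtappset example -n default --type merge -p '
{
  "spec": {
    "workload": {
      "workloadTemplate": {
        "deploymentTemplate": {
          "spec": {
            "template": {
              "spec": {
                "containers": [
                  {"name": "nginx", "image": "nginx:1.21.0", "imagePullPolicy": "Always"}
                ]
              }
            }
          }
        }
      }
    }
  }
}'
Alternatively, you can run kubectl edit yurtappset example -n default and modify the image field of the template directly.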