If you currently use only ACK dedicated clusters but want to use the features of ACK Pro clusters, such as control plane hosting and control plane high availability, you can perform a hot migration to migrate your ACK dedicated clusters to ACK Pro clusters without business interruptions.
Prerequisites
Prerequisite | Description |
Cluster | An ACK dedicated cluster (to be migrated) that runs Kubernetes 1.18 or later is created. For more information about how to update a cluster, see Manually upgrade ACK clusters. |
OSS bucket | An Object Storage Service (OSS) bucket is created in the region of the ACK cluster to be migrated and hotlink protection is disabled for the bucket because hotlink protection can cause migration failures. For more information, see Create a bucket and Hotlink protection. |
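Before you start the migration, you can optionally verify that the cluster meets the version requirement from a client that is connected to the cluster. A minimal check with kubectl:
# Print the Kubernetes version of the API server. The version must be 1.18 or later.
kubectl version
# Print the kubelet version of each node.
kubectl get nodes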
Considerations
Item | Description |
Public access | Some old ACK dedicated clusters still use Internet-facing Server Load Balancer (SLB) instances to expose the API server over public access. After you migrate from these clusters to ACK Pro clusters, the Internet-facing SLB instances can no longer be used to expose the API server to public access. You must manually associate an elastic IP address (EIP) with the internal-facing SLB instance of the API server so that the API server can be exposed to public access. For more information about how to switch to the EIP mode, see Control public access to the API server of a cluster. |
Custom pod configurations | If your ACK dedicated cluster has custom pod configurations enabled, you cannot migrate the cluster to an ACK Pro cluster. You need to stop terway-controlplane before the migration starts and then enable terway-controlplane after the migration is complete. For more information, see Stop terway-controlplane before cluster migration. For more information about how to customize pod configurations, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod. |
Master nodes | Cloud Assistant Agent is not installed on some old master nodes. You must manually install it. For more information, see Install the Cloud Assistant Agent. After the migration is complete, the status of the master nodes will change to Not Ready. |
Release of ECS instances | If you choose to release Elastic Compute Service (ECS) instances when you remove master nodes, ACK will release all pay-as-you-go ECS instances and their data disks. You need to manually release subscription ECS instances. For more information, see Release an instance. |
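Before the migration, you may also want to record the master nodes in the cluster so that you know on which ECS instances the Cloud Assistant Agent must be installed and which instances need to be removed or released later. A sketch that assumes the master nodes carry the standard role labels:
# List master nodes. Clusters created from older versions label them node-role.kubernetes.io/master,
# while newer versions use node-role.kubernetes.io/control-plane.
kubectl get nodes -l node-role.kubernetes.io/master
kubectl get nodes -l node-role.kubernetes.io/control-plane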
Step 1: Perform a hot migration from an ACK dedicated cluster to an ACK Pro cluster
Before you start, make sure that all prerequisites are met and that you have read and understood the considerations. After you migrate to an ACK Pro cluster, you cannot roll back to the ACK dedicated cluster.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, choose More > Migrate to Pro in the Actions column of the ACK cluster to be migrated.
In the Migrate to Pro dialog box, complete the precheck and Resource Access Management (RAM) authorization, select the OSS bucket that you created for hot migration, and then click OK.
After the migration is complete, the Migrate to Pro dialog box displays a message. You can check the type of the ACK cluster and the status of the master nodes.
Cluster type: Go back to the Clusters page. The cluster type in the Type column changes from ACK Dedicated to ACK Pro.
Master node status: On the Clusters page, click Details in the Actions column of the cluster. In the left-side navigation pane, choose Nodes > Nodes. If the Role/Status column of the master nodes displays Unknown, the master nodes are disconnected from the cluster. To remove them, see Step 2: Remove master nodes from the ACK dedicated cluster after the hot migration is complete.
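In addition to the console, you can check the master node status with kubectl, assuming the kubectl client is still connected to the cluster:
# Master nodes that are disconnected from the cluster are displayed as NotReady or Unknown in the STATUS column.
kubectl get nodes -o wide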
Step 2: Remove master nodes from the ACK dedicated cluster after the hot migration is complete
After the hot migration is complete, you can use the console or run kubectl commands to remove master nodes from the ACK dedicated cluster.
Use the console
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Nodes > Nodes.
On the Nodes page, choose More > Remove in the Actions column of a master node or select one or more master nodes and click Batch Remove at the bottom. In the dialog box that appears, configure parameters and click OK.
Use kubectl
Before you run the commands, make sure that the kubectl client is connected to the cluster. For more information about how to use kubectl to connect to a cluster, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
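A quick way to confirm that kubectl points to the migrated cluster before you remove nodes, assuming the kubeconfig file is already configured:
# Print the API server endpoint of the cluster that the current kubectl context points to.
kubectl cluster-info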
Run the following command to query and record the names of the master nodes to be removed:
kubectl get node | grep control-plane
Run the following command to remove a master node. Replace <MASTER_NAME> with the name of the master node:
kubectl delete node <MASTER_NAME>
To remove multiple master nodes at a time, replace <MASTER_NAME> with the names of the master nodes. For example, to remove master nodes cn-hangzhou.192.xx.xx.65 and cn-hangzhou.192.xx.xx.66 at the same time, run the following command:
kubectl delete node cn-hangzhou.192.xx.xx.65 cn-hangzhou.192.xx.xx.66
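If the cluster has many master nodes, you can also remove them in a single command by selecting them by label. This is a sketch that assumes the master nodes carry the node-role.kubernetes.io/master label (use node-role.kubernetes.io/control-plane for newer clusters); check the output of the query in the previous step before you delete anything:
# Delete all nodes that carry the master role label in one command.
kubectl delete $(kubectl get nodes -l node-role.kubernetes.io/master -o name)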
Step 3: Handle components
Reinstall the ALB Ingress controller
If your ACK dedicated cluster has the ALB Ingress controller installed, you must reinstall it after the migration is complete. For more information about how to install the ALB Ingress controller, see Manage components. After the ALB Ingress controller is installed, run the following command with kubectl to delete the original Deployment. Before you run the command, make sure that the kubectl client is connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
kubectl delete deployment alb-ingress-controller -n kube-system
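After you run the command, you can confirm that the original Deployment has been deleted. The following check assumes the old Deployment was named alb-ingress-controller in the kube-system namespace, as in the command above:
# The command is expected to return a NotFound error after the original Deployment is deleted.
kubectl -n kube-system get deployment alb-ingress-controller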
Reinstall the ACK Virtual Node component
If your ACK dedicated cluster has the ACK Virtual Node component installed, to migrate without business interruptions, you must reinstall the ACK Virtual Node component in the ACK Pro cluster after the migration is complete.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Operations > Add-ons.
On the Add-ons page, find and install the ACK Virtual Node component.
After the ACK Virtual Node component is installed, run the following commands in sequence to delete the original components and configurations:
# Delete the original vk-webhook Service, ack-virtual-node-controller Deployment, ClusterRoleBindings related to virtual nodes, and virtual node ServiceAccounts in sequence.
kubectl -n kube-system delete service vk-webhook
kubectl -n kube-system delete deployment ack-virtual-node-controller
kubectl -n kube-system delete clusterrolebinding virtual-kubelet
kubectl -n kube-system delete serviceaccount virtual-kubelet
After the migration is complete, create test pods to check whether the cluster runs as expected.
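A minimal smoke test, using a hypothetical pod name and a public nginx image, to confirm that the cluster can schedule and run pods after the migration:
# Create a test pod, wait until it is Ready, and then delete it.
kubectl run migration-smoke-test --image=nginx --restart=Never
kubectl wait --for=condition=Ready pod/migration-smoke-test --timeout=120s
kubectl delete pod migration-smoke-test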
What to do next
After you migrate to an ACK Pro cluster, you need to manually limit the permissions of the worker RAM role assumed by nodes in the cluster in order to enhance node security. For more information, see Manually limit the permissions of the worker RAM role of an ACK managed cluster.
If your ACK dedicated cluster has cGPU Basic Edition installed, after you migrate to an ACK Pro cluster, you need to upgrade cGPU Basic Edition to cGPU Professional Edition. For more information, see Upgrade cGPU Basic Edition to cGPU Professional Edition in an ACK Pro cluster.
FAQs
Can I roll back after I migrate the ACK dedicated cluster?
If the migration is successful, you cannot roll back. If the migration fails, the system will automatically roll back the cluster.
Are my services on the ACK dedicated cluster affected during migration?
During the cluster migration, the managed components of the ACK dedicated cluster will enter sleep mode, but running services will not be affected.
How long does the migration process approximately take?
The cluster migration includes three stages: the control plane enters sleep mode, etcd data is backed up, and managed components are started. The entire process is expected to take about 10 to 15 minutes, during which the API server will be unavailable for approximately 5 to 10 minutes.
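If you want to observe when the API server becomes unavailable and when it recovers during the migration, you can poll its health endpoint from a client that is connected to the cluster. A simple sketch:
# Poll the API server health endpoint every 5 seconds. Requests fail while the control plane is in sleep mode.
while true; do date; kubectl get --raw /healthz; echo; sleep 5; done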
Will the IP address change after I migrate the cluster?
After the migration, the IP address of the SLB instance that exposes the API server remains unchanged. If you access the cluster by using the kubeconfig file, the cluster endpoint also remains unchanged.
How do I handle failures in environment variable configurations for ACK Virtual Node during the precheck?
If the ACK Virtual Node component is installed in the ACK dedicated cluster, you must manually configure an internal endpoint for kube-apiserver before the migration starts. To do this, perform the following steps:
On the Cluster Information page, obtain the internal endpoint of kube-apiserver.
On the Deployments page, select the kube-system namespace, find the Deployment named ack-virtual-node-controller, and then add the following environment variables to the spec.template.spec.containers[0].env field of the Deployment:
KUBERNETES_APISERVER_HOST: the private IP address of kube-apiserver.
KUBERNETES_APISERVER_PORT: the private port of kube-apiserver, which is set to 6443 in most cases.
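As an alternative to editing the Deployment on the Deployments page, you can set these environment variables with kubectl. A sketch that uses placeholders; replace <INTERNAL_APISERVER_IP> with the internal IP address that you obtained from the Cluster Information page:
# Set the internal API server address and port on the ack-virtual-node-controller Deployment.
kubectl -n kube-system set env deployment/ack-virtual-node-controller KUBERNETES_APISERVER_HOST=<INTERNAL_APISERVER_IP> KUBERNETES_APISERVER_PORT=6443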