When your ACK cluster runs low on IP addresses, you can add a secondary CIDR block to its VPC. This expands the available IP address pool so that new nodes and pods can be provisioned for growing workloads.
Prerequisites
You must have one of the following clusters:
- An ACK dedicated cluster.
- An ACK managed cluster created in February 2021 or later. For more information, see Create an ACK managed cluster or Create an ACK dedicated cluster (discontinued).

ACK managed clusters created before February 2021 must first be upgraded to ACK Pro clusters before you can add a secondary CIDR block. For details, see Hot migration from ACK managed Basic clusters to ACK managed Pro clusters.
Step 1: Select a secondary CIDR block
Before you add a secondary CIDR block, identify all CIDR blocks that are already in use: the secondary CIDR block must not overlap with any of them.
Check existing CIDR blocks
1. Log on to the ACK console. In the left-side navigation pane, click Clusters.
2. On the Clusters page, find the target cluster and click its name. In the left-side pane, click Cluster Information.
3. On the Cluster Information page, click the Basic Information tab, then click the link next to VPC.
4. On the VPC Details page, click the CIDR Block Management tab to view the CIDR blocks in use.
CIDR blocks to check
Collect the following CIDR blocks. The secondary CIDR block must not overlap with any of them.
| CIDR block type | Where to find it | Notes |
|---|---|---|
| VPC CIDR block | See View a VPC. | Required for all clusters. |
| Pod and Service CIDR blocks | See View cluster information. | Terway plug-in: check the CIDR block of Services only. Flannel plug-in: check the CIDR blocks of both pods and Services. |
| Connection CIDR blocks | Check your network connections. | CIDR blocks used by Express Connect circuits, VPN gateways, and Cloud Enterprise Network (CEN) instances connected to the VPC. |
Choose a non-overlapping CIDR block
Select a CIDR block that does not overlap with any block listed above.
Example (Flannel cluster):
| CIDR block | Value |
|---|---|
| VPC | 192.168.0.0/16 |
| Pod | 172.20.0.0/16 |
| Service | 172.21.0.0/16 |
| Express Connect / VPN / CEN | None |
In this case, 10.0.0.0/8 is a valid secondary CIDR block because it does not overlap with any existing ranges.
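The overlap check above can be automated with Python's standard `ipaddress` module. The following sketch uses the example values from the table; substitute the CIDR blocks collected for your own cluster.

```python
import ipaddress

# CIDR blocks already in use (example values for the Flannel cluster above)
in_use = [
    ipaddress.ip_network("192.168.0.0/16"),  # VPC CIDR block
    ipaddress.ip_network("172.20.0.0/16"),   # pod CIDR block
    ipaddress.ip_network("172.21.0.0/16"),   # Service CIDR block
]

# Candidate secondary CIDR block
candidate = ipaddress.ip_network("10.0.0.0/8")

# The candidate is usable only if it overlaps none of the existing blocks
conflicts = [str(n) for n in in_use if candidate.overlaps(n)]
if conflicts:
    print(f"{candidate} overlaps with: {', '.join(conflicts)}")
else:
    print(f"{candidate} does not overlap any existing CIDR block")
```

Remember to include any Express Connect, VPN gateway, or CEN CIDR blocks in the `in_use` list as well.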
Step 2: Add a secondary CIDR block and create a vSwitch
1. Log on to the VPC console.
2. In the top navigation bar, select the region where the VPC is deployed.
3. On the VPCs page, find the target VPC and click its ID.
4. On the VPC Details page, click the CIDR Block Management tab. Click Add Secondary IPv4 CIDR Block and enter the CIDR block that you selected in Step 1.
5. Create a vSwitch in the secondary CIDR block. For instructions, see Create a vSwitch.
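The vSwitch CIDR block must fall entirely within the secondary CIDR block. A quick sanity check with the standard `ipaddress` module (the /24 below is a hypothetical vSwitch CIDR, not a value from your cluster):

```python
import ipaddress

secondary = ipaddress.ip_network("10.0.0.0/8")  # secondary CIDR block from Step 1
vswitch = ipaddress.ip_network("10.0.1.0/24")   # hypothetical vSwitch CIDR block

# subnet_of() (Python 3.7+) confirms the vSwitch range lies inside the secondary block
print(vswitch.subnet_of(secondary))
```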
Step 3: Add permit rules in the security group
Add inbound and outbound rules to the cluster security group to permit access from and to the secondary CIDR block.
1. Log on to the ACK console. In the left-side navigation pane, click Clusters.
2. On the Clusters page, find the target cluster and click its name. In the left-side pane, click Cluster Information.
3. On the Basic Information tab, click the ID of the cluster security group next to Control Plane Security Group.
4. Add inbound and outbound rules that permit access from and to the secondary CIDR block.
Step 4: Add the vSwitch to the node pool and scale out
Add the vSwitch of the secondary CIDR block to the node pool. After the node pool scales out, new nodes use IP addresses from the secondary CIDR block.
1. Log on to the ACK console.
2. On the Clusters page, find the target cluster and click its name. In the left-side navigation pane, choose Nodes > Node Pools.
3. Click Edit in the Actions column of the target node pool, select the vSwitch of the secondary CIDR block, and click Confirm.
4. Click Scale in the Actions column of the same node pool to scale out the node pool.
For ACK managed clusters created before February 15, 2023, submit a ticket to have technical support configure the control plane. Without this configuration, the control plane cannot access newly created nodes or their pods, which causes the following issues:
- `kubectl exec` and `kubectl logs` failures
- Webhook and APIService call failures
- Failures to create pods and other resources
(Optional) Step 5: Add pod vSwitches for Terway clusters
If your cluster uses the Terway plug-in, update the Terway plug-in's vSwitch configuration so that pods can use IP addresses from the secondary CIDR block. For instructions, see Modify the pod vSwitches.
When the Terway network mode is not set to DataPathV2, a pod that uses an IP address from the secondary CIDR block and accesses a ClusterIP has its source IP replaced with the node IP through Source Network Address Translation (SNAT). If the destination node is in a security group or configured with a whitelist, add a rule that allows access from the node's IP address or CIDR block.