We recommend that you plan the cluster size, network features, virtual private cloud (VPC)-related configurations (VPCs and vSwitches), and network configurations (container network plug-ins, container CIDR block, and Service CIDR block) in advance to ensure efficient use of network resources and to reserve sufficient space for future business expansion. This topic describes how to plan the network architecture of an ACK managed cluster that meets your business requirements in an Alibaba Cloud VPC environment.
Network size planning
Region and zone
Instances in different zones within a region can communicate with each other. Even if one zone is down, other zones can work as expected. The network latency between instances in the same zone is low. You can plan regions and zones based on the following information.
| Item | Description |
| --- | --- |
| Latency requirement | If user locations are close to the regions where the resources are deployed, the network latency is low and access is fast. |
| Supported regions and zones | Different Alibaba Cloud services are supported in different regions and zones. You can select a region and a zone based on the services that you require. |
| Cost | The price of a cloud service may vary with the region. We recommend that you select a region based on your requirements. |
| High availability and disaster recovery | If your services require high disaster recovery capabilities, you can deploy your services in different zones within the same region. You can also deploy your services in multiple regions to implement inter-region disaster recovery. |
| Compliance | Select a region that meets the data compliance requirements and business filing policies of your country or region. |
A VPC cannot be deployed across regions. If you want to deploy your services across regions, you must create a VPC in each region. You can use VPC peering connections or Cloud Enterprise Network (CEN) to enable communication among VPCs in different regions. vSwitches are zone-level resources. When you use vSwitches, take note of the following information:
If you select multiple zones because of limited Elastic Compute Service (ECS) inventory, reserve sufficient CIDR blocks in advance and account for the latency increase caused by cross-zone traffic detours.
Some regions, such as China (Nanjing - Local Region), provide only one zone. If you require intra-region disaster recovery, carefully evaluate whether to select such a region.
Number of VPCs
VPC provides a secure and flexible network environment in the cloud. Different VPCs are isolated from each other. Instances in a VPC can communicate with each other. You can plan the number of your VPCs based on your business requirements.
| Number of VPCs | Scenarios |
| --- | --- |
| Single VPC | Your service is deployed in one region and the business scale is small. You do not have requirements for network isolation. If you use VPC for the first time, we recommend that you use one VPC to quickly get started. You focus on costs and do not want to pay for multiple VPCs. |
| Multiple VPCs | Your services need to be deployed in different regions and the business scale is large. Services in one region need to be isolated from each other. The business architecture is complex, and each department needs independent management. |
Note
By default, you can create at most 10 VPCs in each region. You can go to the Quota Management page or Quota Center page to request a quota increase.
Number of vSwitches
vSwitches are zone-level resources. All instances in a VPC are deployed in vSwitches. Dividing a VPC into vSwitches helps you properly plan IP addresses. By default, vSwitches in a VPC can communicate with each other.
| Item | Description |
| --- | --- |
| Latency | The latency between zones in the same region is low. However, complex system calls and cross-zone calls may increase the latency. |
| High availability and disaster recovery | We recommend that you create at least two vSwitches in a VPC and deploy them in different zones to implement cross-zone disaster recovery. You can deploy services in multiple zones and configure security rules in a unified manner. This improves system availability and disaster recovery capabilities. |
| Business scale and division | Typically, you can deploy different service modules in different vSwitches. For example, you can deploy the web layer, logic layer, and data layer in different vSwitches to create a standard web architecture. |
You can plan vSwitches based on the following information:
When you use a VPC, we recommend that you deploy at least two vSwitches in different zones. This way, when one vSwitch is down, the other vSwitch in another zone can take over, which implements cross-zone disaster recovery.
The latency between zones in the same region is low. However, a complex network topology with cross-zone calls can increase latency, so verify the actual latency against the requirements of your business system. We recommend that you optimize and adapt the system to meet your requirements for high availability and low latency.
In addition, take the scale and planning of your service system into consideration when you determine the number of vSwitches to create. In normal cases, you can plan vSwitches based on your business attributes. For example, Internet-facing services need to be deployed in a public vSwitch, and other services can be deployed accordingly. After your services are deployed in multiple zones, you can configure security policies in a unified manner.
Note
By default, you can create at most 150 vSwitches in each VPC. You can go to the Quota Management page or Quota Center page to request a quota increase.
Cluster scale
| Number of nodes | Scenario | VPC planning | Zone planning |
| --- | --- | --- | --- |
| Less than 100 nodes | Non-core businesses | Single VPC | 1 (2 or more recommended) |
| 100 or more nodes | General businesses that require multiple zones | Single VPC | 2 or more |
| 100 or more nodes | Core businesses that require high reliability and multiple regions | Multiple VPCs | 2 or more |
Network connection planning
Single cluster in a single VPC
The CIDR block of a VPC is specified when you create the VPC. When you create an ACK cluster in the VPC, make sure that the pod CIDR block and Service CIDR block do not overlap with the VPC CIDR block. This ensures network communication within the cluster and prevents address conflicts with other resources in the VPC.
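The non-overlap rule above can be verified with Python's standard `ipaddress` module. The following is a minimal sketch; the CIDR values are illustrative, not recommendations:

```python
import ipaddress

# Example values only; substitute your planned CIDR blocks.
vpc = ipaddress.ip_network("192.168.0.0/16")   # VPC CIDR block
pod = ipaddress.ip_network("172.20.0.0/16")    # pod CIDR block
svc = ipaddress.ip_network("172.21.0.0/20")    # Service CIDR block

# Neither in-cluster block may overlap the VPC CIDR block,
# and the pod and Service blocks may not overlap each other.
print(pod.overlaps(vpc))  # False
print(svc.overlaps(vpc))  # False
print(pod.overlaps(svc))  # False
```

If any of these checks prints `True`, adjust the plan before you create the cluster.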
Multiple clusters in a single VPC
The CIDR block of the VPC is specified when you create the VPC. When you create a cluster, its pod CIDR block and Service CIDR block cannot overlap with each other or with the VPC CIDR block.
Across clusters, pod CIDR blocks cannot overlap, but Service CIDR blocks (which are virtual CIDR blocks) can overlap with one another.
If your clusters use Flannel, the packets of pods must be forwarded by the VPC router. The ACK managed cluster automatically adds route entries for the pod CIDR blocks to the VPC route table.
Note
In this case, the clusters are partially interconnected. Pods in one cluster can access pods and ECS instances in another cluster, but cannot access Services in another cluster. For example, ClusterIP Services can be accessed only from within their own cluster. To expose Services across clusters, use LoadBalancer Services or Ingresses.
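The cross-cluster rule above (pod CIDR blocks must not overlap) can be checked programmatically. A minimal sketch with Python's `ipaddress` module, using illustrative CIDR values:

```python
import ipaddress

def pod_cidrs_valid(pod_cidrs):
    """Return True if no two clusters' pod CIDR blocks overlap."""
    nets = [ipaddress.ip_network(c) for c in pod_cidrs]
    return all(
        not a.overlaps(b)
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
    )

# Two clusters in the same VPC with disjoint pod CIDR blocks: valid.
print(pod_cidrs_valid(["172.20.0.0/16", "172.21.0.0/16"]))   # True
# 172.20.128.0/17 falls inside 172.20.0.0/16: invalid.
print(pod_cidrs_valid(["172.20.0.0/16", "172.20.128.0/17"])) # False
```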
Multi-cluster interconnection across VPCs
We recommend that you plan the connection of multiple clusters across VPCs in the following scenarios:
Inter-region deployment
Service isolation
Large-scale business system
If multiple business systems in a region require strict isolation by using VPCs, such as isolation between the production environment and the staging environment, you can deploy the production cluster and the test cluster in different VPCs to provide better logical isolation and security. You can also use VPC peering connections, VPN gateways, and CEN to connect VPCs deployed in the same region.

If your business architecture is complex and each service and department requires an independent VPC to manage their clusters and resources and for flexible management, we recommend that you configure multiple VPCs and multiple clusters.

Important
To avoid issues such as routing errors caused by IP address conflicts when you interconnect clusters across VPCs, the network plan of each new cluster must meet the following requirements:
The CIDR blocks of the new cluster must not overlap with the CIDR blocks of the connected VPCs.
The CIDR blocks of the new cluster must not overlap with the CIDR blocks of other clusters.
The CIDR blocks of the new cluster must not overlap with the pod CIDR blocks of other clusters.
The CIDR blocks of the new cluster must not overlap with the Service CIDR blocks of other clusters.
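The requirements above can be validated in one pass before cluster creation. The following is a minimal sketch with Python's `ipaddress` module; the function name and all CIDR values are illustrative:

```python
import ipaddress

def validate_new_cluster(new_cidrs, peer_vpc_cidrs, existing_cluster_cidrs):
    """Check a new cluster's CIDR plan against the cross-VPC rules.

    new_cidrs: pod and Service CIDR blocks of the new cluster.
    peer_vpc_cidrs: CIDR blocks of the VPCs to be connected.
    existing_cluster_cidrs: pod and Service CIDR blocks of existing clusters.
    Returns a list of conflicting pairs; an empty list means the plan is valid.
    """
    new = [ipaddress.ip_network(c) for c in new_cidrs]
    others = [ipaddress.ip_network(c)
              for c in peer_vpc_cidrs + existing_cluster_cidrs]
    return [(str(a), str(b)) for a in new for b in others if a.overlaps(b)]

conflicts = validate_new_cluster(
    new_cidrs=["172.22.0.0/16", "172.23.0.0/20"],
    peer_vpc_cidrs=["192.168.0.0/16"],
    existing_cluster_cidrs=["172.20.0.0/16", "172.21.0.0/20"],
)
print(conflicts)  # [] -> no overlap, the plan meets the requirements
```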
Communication between cloud clusters and data centers
Similar to the scenario of multi-cluster interconnection across VPCs, if a VPC is connected to a data center, packets of specific CIDR blocks are routed to the data center. In this case, the pod CIDR block of a cluster in the VPC cannot overlap with these CIDR blocks. To access pods in the VPC from the data center, you must configure a routing table for the Virtual Border Router (VBR) in the data center.
Container network plug-in planning
ACK managed clusters support two container network plug-ins: Terway and Flannel. The plug-in that you choose determines the supported features and network configurations. For example, Terway supports NetworkPolicy, which provides policy-based network control, whereas Flannel does not. Terway assigns container IP addresses from the VPC CIDR block (through pod vSwitches), whereas Flannel assigns them from a separately specified virtual CIDR block.
Important
The container network plug-in must be installed when you create a cluster and cannot be changed after the cluster is created. We recommend that you select a network plug-in based on your network requirements.
Feature planning
For more information about the comparison between Terway and Flannel, see Comparison between Terway and Flannel.
CIDR block planning
Important
The control plane of ACK managed clusters uses the 7.0.0.0/8 CIDR block. To prevent network conflicts that may disrupt cluster management access, do not select CIDR blocks that overlap with 7.0.0.0/8.

If your cluster uses the Terway network plug-in, you must configure the following parameters:
VPC
You can specify one of the following CIDR blocks or their subsets as the primary IPv4 CIDR block of the VPC: 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8. These CIDR blocks are standard private CIDR blocks as defined by Request for Comments (RFC) documents. The subnet mask must be 8 to 28 bits in length. Example: 192.168.0.0/16.
You may use custom CIDR ranges as the primary IPv4 CIDR block for your VPC, provided they do not overlap with the following reserved network segments or their subnets: 100.64.0.0/10, 224.0.0.0/4, 127.0.0.0/8, 169.254.0.0/16, and 7.0.0.0/8.
In scenarios in which multiple VPCs are used or in hybrid cloud scenarios in which you want to connect data centers to VPCs, we recommend that you use standard RFC CIDR blocks as VPC CIDR blocks with subnet masks no more than 16 bits in length. Make sure that the CIDR blocks of the VPCs and data centers do not overlap.
IPv6 CIDR blocks are assigned by the VPC after you enable IPv6 for the VPC. If you want to enable IPv6 for containers, select Terway for the Network Plug-in parameter.
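The VPC CIDR rules above (a subnet mask of 8 to 28 bits, and no overlap with the reserved segments) can be checked with a short script. A minimal sketch with Python's `ipaddress` module; the function name is an illustration:

```python
import ipaddress

# Reserved segments from the planning rules above; a custom VPC CIDR
# block must not overlap any of them.
RESERVED = ["100.64.0.0/10", "224.0.0.0/4", "127.0.0.0/8",
            "169.254.0.0/16", "7.0.0.0/8"]

def vpc_cidr_ok(cidr):
    """Return True if cidr is usable as a VPC primary IPv4 CIDR block."""
    net = ipaddress.ip_network(cidr)
    if not 8 <= net.prefixlen <= 28:   # subnet mask must be 8 to 28 bits
        return False
    return not any(net.overlaps(ipaddress.ip_network(r)) for r in RESERVED)

print(vpc_cidr_ok("192.168.0.0/16"))  # standard private block -> True
print(vpc_cidr_ok("7.1.0.0/16"))      # inside reserved 7.0.0.0/8 -> False
```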
vSwitch
The vSwitches associated with ECS instances allow nodes to communicate with each other. The CIDR blocks of vSwitches must fall within the VPC CIDR block. When you specify the CIDR block, take note of the following items:
The system allocates IP addresses from the CIDR block of a vSwitch to the ECS instances that are associated with the vSwitch.
You can create multiple vSwitches in a VPC. However, the CIDR blocks of these vSwitches cannot overlap with each other.
The vSwitch and the pod vSwitch must be in the same zone.
Pod vSwitch
The IP addresses of pods are assigned from the CIDR blocks of pod vSwitches. This allows pods to communicate with each other. A pod is the smallest deployable unit in Kubernetes, and each pod has an IP address. The CIDR blocks that you specify when you create pod vSwitches must fall within the VPC CIDR block. When you specify the CIDR block, take note of the following items:
In a Container Service cluster that has Terway installed, the IP addresses of pods are assigned by pod vSwitches.
The pod vSwitch CIDR block cannot overlap with the Service CIDR block.
The vSwitch and pod vSwitch must be in the same zone.
Service CIDR block
Important
You cannot change the Service CIDR block after the cluster is created.
Service is a Kubernetes concept. The Service CIDR block provides IP addresses for ClusterIP type Services. Each Service has an IP address. When you specify the CIDR block, take note of the following items:
The IP address of a Service is effective only within the cluster.
The Service CIDR block cannot overlap with the vSwitch CIDR block.
The Service CIDR block cannot overlap with the pod vSwitch CIDR block.
Service IPv6 CIDR block
If you enable IPv4/IPv6 dual stack, you must specify an IPv6 CIDR block for Services. When you specify the CIDR block, take note of the following items:
You must specify a Unique Local Unicast Address (ULA) space within the address range fc00::/7. The prefix must be 112 bits to 120 bits in length.
We recommend that you specify an IPv6 CIDR block that has the same number of IP addresses as the Service CIDR block.
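The dual-stack constraints above (a ULA inside fc00::/7, a 112- to 120-bit prefix, and ideally the same address count as the IPv4 Service CIDR block) can be sketched as follows with Python's `ipaddress` module; the function name and CIDR values are illustrative:

```python
import ipaddress

def service_ipv6_ok(cidr6, ipv4_service_cidr):
    """Return (meets_ula_and_prefix_rules, matches_ipv4_size)."""
    net6 = ipaddress.ip_network(cidr6)
    net4 = ipaddress.ip_network(ipv4_service_cidr)
    is_ula = net6.subnet_of(ipaddress.ip_network("fc00::/7"))
    prefix_ok = 112 <= net6.prefixlen <= 120
    same_size = net6.num_addresses == net4.num_addresses
    return (is_ula and prefix_ok, same_size)

# fd00::/120 has 2^8 = 256 addresses, matching an IPv4 /24 Service CIDR.
print(service_ipv6_ok("fd00::/120", "172.21.0.0/24"))  # (True, True)
```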

Configuration example
| VPC CIDR block | vSwitch CIDR block | Container CIDR block | Service CIDR block | Maximum number of pod IP addresses |
| --- | --- | --- | --- | --- |
| 192.168.0.0/16 | 192.168.0.0/24 | 172.20.0.0/16 | 172.21.0.0/20 | 65536 |
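The last column in the example above follows directly from the container CIDR block: a /16 block contains 2^(32-16) addresses. A quick check with Python's `ipaddress` module:

```python
import ipaddress

# A /16 container CIDR block yields 2^(32-16) = 65536 pod IP addresses.
container = ipaddress.ip_network("172.20.0.0/16")
print(container.num_addresses)  # 65536
```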
If your cluster uses the Flannel network plug-in, you must configure the following parameters:
VPC
You can specify one of the following CIDR blocks or their subsets as the primary IPv4 CIDR block of the VPC: 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8. These CIDR blocks are standard private CIDR blocks as defined by Request for Comments (RFC) documents. The subnet mask must be 8 to 28 bits in length. Example: 192.168.0.0/16.
You may use custom CIDR ranges as the primary IPv4 CIDR block for your VPC, provided they do not overlap with the following reserved network segments or their subnets: 100.64.0.0/10, 224.0.0.0/4, 127.0.0.0/8, 169.254.0.0/16, and 7.0.0.0/8.
In scenarios in which multiple VPCs are used or in hybrid cloud scenarios in which you want to connect data centers to VPCs, we recommend that you use standard RFC CIDR blocks as VPC CIDR blocks with subnet masks no more than 16 bits in length. Make sure that the CIDR blocks of the VPCs and data centers do not overlap.
IPv6 CIDR blocks are assigned by the VPC after you enable IPv6 for the VPC. If you want to enable IPv6 for containers, select Terway for the Network Plug-in parameter.
vSwitch
The vSwitches associated with ECS instances allow nodes to communicate with each other. The CIDR blocks of vSwitches must fall within the VPC CIDR block. When you specify the CIDR block, take note of the following items:
The system allocates IP addresses from the CIDR block of a vSwitch to the ECS instances that are associated with the vSwitch.
You can create multiple vSwitches in a VPC. However, the CIDR blocks of these vSwitches cannot overlap with each other.
Container CIDR block
Important
You cannot change the container CIDR block after the cluster is created.
The IP addresses of pods are assigned from the container CIDR block. This allows pods to communicate with each other. A pod is the smallest deployable unit in Kubernetes, and each pod has an IP address. When you specify the CIDR block, take note of the following items:
Enter a CIDR block in the Pod CIDR Block field.
The container CIDR block cannot overlap with the VPC CIDR block or the CIDR blocks of vSwitches in the VPC.
The container CIDR block of pods cannot overlap with the Service CIDR block.
For example, if the VPC CIDR block is 172.16.0.0/12, the container CIDR block cannot be 172.16.0.0/16 or 172.17.0.0/16 because these CIDR blocks are subsets of 172.16.0.0/12.
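The subset relationship in the example above can be confirmed with Python's `ipaddress` module:

```python
import ipaddress

vpc = ipaddress.ip_network("172.16.0.0/12")

# Both candidates from the example fall inside the VPC CIDR block
# (172.16.0.0 - 172.31.255.255), so neither can serve as the Flannel
# container CIDR block.
for candidate in ("172.16.0.0/16", "172.17.0.0/16"):
    print(candidate, ipaddress.ip_network(candidate).subnet_of(vpc))
```

A CIDR block such as 10.0.0.0/16 would pass this check against a 172.16.0.0/12 VPC because the two ranges are disjoint.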
Service CIDR block
Important
You cannot change the Service CIDR block after the cluster is created.
Service is a Kubernetes concept. The Service CIDR block provides IP addresses for ClusterIP type Services. Each Service has an IP address. When you specify the CIDR block, take note of the following items:
The Service CIDR block is effective only within the cluster.
The Service CIDR block cannot overlap with the vSwitch CIDR block.
The Service CIDR block cannot overlap with the container CIDR block.