Alibaba Cloud Container Compute Service (ACS) provides stable and high-performance container networks by integrating the Kubernetes network model, Virtual Private Cloud (VPC), and Server Load Balancer (SLB). This topic describes the important terms used in ACS cluster networking and Alibaba Cloud network infrastructure, such as container network interface (CNI), Service, Ingress, and DNS service discovery. Understanding these terms helps you optimize application deployment models and network access methods.
Networking features of ACS
| Category | Networking feature | Supported | References |
| --- | --- | --- | --- |
| Network configuration management | Dual-stack (IPv4 and IPv6) | No | None |
| | Configure network configurations for individual pods | Yes | |
| | Configure security groups for pods | Yes | |
| North-south traffic management | Configure pods to access the Internet | Yes | |
| | Expose pods to the Internet | Yes | |
| | Use LoadBalancer Services | Yes | Use an automatically created CLB Service to expose an application |
| | Use Ingresses | Yes | |
Service
Cloud-native applications require agile iteration and fast scaling. Containers and the related network resources have short lifecycles, so fast workload scaling requires automatic load balancing behind a static IP address. ACS allows you to create a Service that serves as the ingress and load balancer of a group of pods.
How a Service works
When you create a Service, ACS assigns a stable IP address to the Service. You can configure the `selector` parameter to select pods and map the IP address and port of the Service to the IP addresses and ports of the pods for load balancing.
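The following manifest is a minimal sketch of this mapping. The Service name, the pod label app: my-app, and the port numbers are placeholder values; because no type is specified, the example defaults to a ClusterIP Service.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service          # placeholder Service name
spec:
  selector:
    app: my-app             # selects pods that carry the label app=my-app
  ports:
    - port: 80              # port exposed on the stable Service IP address
      targetPort: 8080      # port on which the selected pods listen
```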
ACS provides the following types of Services to handle requests from different sources and clients:
ClusterIP
A ClusterIP Service is used to handle access within the cluster. If you want your application to provide services within the cluster, create a ClusterIP Service.
Note: By default, ClusterIP is selected when you create a Service.
LoadBalancer
A LoadBalancer Service is used to expose an application to the Internet. A LoadBalancer Service uses an SLB instance to expose applications. Therefore, LoadBalancer Services provide higher availability and performance than NodePort Services. For more information about how to use a LoadBalancer Service to expose applications, see Use an existing SLB instance to expose an application and Use an automatically created SLB instance to expose an application.
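As a sketch, the manifest below exposes an application through a LoadBalancer Service. The Service name, pod label, and ports are placeholders, and any SLB-specific annotations are omitted.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-loadbalancer     # placeholder Service name
spec:
  type: LoadBalancer         # traffic enters through an SLB instance
  selector:
    app: web                 # placeholder pod label
  ports:
    - port: 80               # port exposed to clients
      targetPort: 8080       # port on which the pods listen
```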
Headless Service
A Headless Service is defined by setting the `clusterIP` field to `None` in the Service configuration file. A Headless Service does not have a fixed virtual IP address (VIP). When a client accesses the domain name of the Service, DNS returns the IP addresses of all backend pods. The client must use DNS load balancing to balance the loads across pods.
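A minimal sketch of a Headless Service follows; the name, label, and port are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service   # placeholder Service name
spec:
  clusterIP: None             # no VIP; DNS returns the backend pod IP addresses directly
  selector:
    app: my-stateful-app      # placeholder pod label
  ports:
    - port: 3306              # placeholder port
```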
ExternalName
An ExternalName Service is used to map an external domain name to a Service within the cluster. For example, you can map the domain name of an external database to a Service name within the cluster. This allows you to access the database within the cluster through the Service name.
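For example, the following sketch maps a hypothetical external database domain name to an in-cluster Service name; both names are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db              # in-cluster name that clients use
spec:
  type: ExternalName
  externalName: db.example.com   # placeholder external domain name
```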
For more information, see Considerations for configuring a LoadBalancer Service.
Ingress
In ACS clusters, Services support Layer 4 load balancing. However, Ingresses manage external access to Services in the cluster at Layer 7. You can use Ingresses to configure different Layer 7 forwarding rules. For example, you can forward requests to different Services based on domain names or paths. For more information, see ALB Ingress management.
Example
In common architectures that decouple the frontend from the backend, different access paths are used to distinguish the frontend from the backend. In this case, Ingresses can be used to implement Layer 7 load balancing across different applications.
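The following Ingress is a minimal sketch of such path-based routing. The host, the path prefixes, the Service names, and the alb ingress class are assumptions; adjust them to match the Services and Ingress controller in your cluster.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-backend-ingress   # placeholder Ingress name
spec:
  ingressClassName: alb            # assumes an ALB Ingress class is available
  rules:
    - host: demo.example.com       # placeholder domain name
      http:
        paths:
          - path: /                # requests for the site root go to the frontend Service
            pathType: Prefix
            backend:
              service:
                name: frontend-svc # placeholder Service name
                port:
                  number: 80
          - path: /api             # requests under /api go to the backend Service
            pathType: Prefix
            backend:
              service:
                name: backend-svc  # placeholder Service name
                port:
                  number: 80
```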
DNS service discovery
ACS uses DNS for service discovery. For example, the name of a Service can be resolved on a client to the cluster IP address of the Service, and the name of a pod in a StatefulSet can be resolved to the IP address of the pod. DNS-based service discovery allows you to access applications without using their IP addresses or worrying about the environments in which they are deployed.
CoreDNS automatically resolves the name of a Service to the IP address of the Service, so you can use the same Service name to access the Service in different environments. For more information about how to use and fine-tune the DNS component, see How DNS works and Configure DNS.
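As a sketch, a one-off client pod can verify this resolution with nslookup. The pod name, the Service name my-service, the default namespace, and the default cluster domain cluster.local are assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-check                # placeholder pod name
spec:
  restartPolicy: Never
  containers:
    - name: lookup
      image: busybox:1.36        # assumes a public busybox image is reachable
      # Resolves the Service name through the cluster DNS (CoreDNS).
      # The fully qualified form is <service>.<namespace>.svc.<cluster domain>.
      command: ["nslookup", "my-service.default.svc.cluster.local"]
```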
Network infrastructure
VPC
VPC is a type of private network provided by Alibaba Cloud. VPCs are logically isolated from each other. You can create and manage cloud services in VPCs, such as Elastic Compute Service (ECS) instances, ApsaraDB RDS instances, and SLB instances.
Each VPC consists of one vRouter, at least one private CIDR block, and at least one vSwitch.
Private CIDR blocks
When you create a VPC and a vSwitch, you must specify the private IP address range for the VPC in CIDR notation.
You can use one of the standard private CIDR blocks listed in the following table as the private CIDR block of a VPC, or use a custom CIDR block. For more information about CIDR blocks, see Plan networks and the Plan and design a VPC topic in User Guide.
| CIDR block | Description |
| --- | --- |
| 192.168.0.0/16 | Number of available private IP addresses (excluding IP addresses reserved by the system): 65,532 |
| 172.16.0.0/12 | Number of available private IP addresses (excluding IP addresses reserved by the system): 1,048,572 |
| 10.0.0.0/8 | Number of available private IP addresses (excluding IP addresses reserved by the system): 16,777,212 |
| Custom CIDR block | You can also use a custom CIDR block other than 100.64.0.0/10, 224.0.0.0/4, 127.0.0.0/8, 169.254.0.0/16, or their subnets. |
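As a rough check against the numbers above, a CIDR block with prefix length n contains 2^(32 - n) IP addresses. For 192.168.0.0/16 that is 2^16 = 65,536 addresses; the difference between this total and the 65,532 available addresses listed in the table indicates that 4 addresses per block are reserved by the system.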
vRouters
A vRouter is the hub of a VPC. As a core component, it connects the vSwitches in a VPC and serves as a gateway between a VPC and other networks. After a VPC is created, a vRouter is automatically created for the VPC. A vRouter can be associated with only one route table.
For more information about route tables, see Route table overview or the Route table overview topic in User Guide.
vSwitches
A vSwitch is a basic network component that connects different cloud resources in a VPC. After you create a VPC, you can create vSwitches to divide the VPC into one or more subnets. vSwitches in the same VPC can communicate with each other. You can deploy your applications in vSwitches that belong to different zones to improve service availability.
For more information about vSwitches, see Create and manage a vSwitch or the Create a vSwitch topic in User Guide.
SLB
After you attach ECS instances to an SLB instance, SLB uses virtual IP addresses (VIPs) to virtualize the ECS instances into an application service pool that features high performance and high availability. Client requests are distributed across the ECS instances based on forwarding rules. For more information about SLB, see SLB overview.
SLB checks the health status of the ECS instances and automatically removes unhealthy ECS instances from the pool to eliminate single points of failure. This improves the availability of your applications. You can also use SLB to defend your applications against DDoS attacks.
SLB consists of the following components:
SLB instances
An SLB instance is a running entity of the SLB service. An SLB instance receives and distributes traffic to backend servers. To get started with SLB, you must create an SLB instance and add at least one listener and two ECS instances to the SLB instance.
Listeners
A listener checks client requests and forwards them to backend servers. Listeners also perform health checks on backend servers.
Backend servers
ECS instances are attached to SLB instances as backend servers to receive and process client requests. You can add ECS instances to a server pool, or create vServer groups or primary/secondary server groups to manage ECS instances in batches.