Container Service for Kubernetes: Comparison between Terway and Flannel

Last Updated: Aug 20, 2024

Container Service for Kubernetes (ACK) supports two container network plug-ins: Terway and Flannel. The two plug-ins have different underlying architectures and features and are suitable for different scenarios. You can select the container network plug-in only when you create a cluster, and you cannot change it after the cluster is created. We recommend that you decide on a network plug-in before you create the cluster.

Feature comparison

Source
  • Terway: Developed by Alibaba Cloud and optimized for ACK. Provides multiple modes.
  • Flannel: An open source Container Network Interface (CNI) plug-in.

Applicable scenarios
  • Terway: Large-scale clusters that require high network performance and efficient use of IP addresses. Also suitable when you need custom control over the container network.
  • Flannel: Small clusters that need only a simple container network, without network policies or custom network control.

Pod CIDR block
  • Terway: Assigns elastic network interfaces (ENIs) to containers. Pods use IP addresses allocated from the virtual private cloud (VPC). You can expand the pod CIDR block.
  • Flannel: Pods use a CIDR block that is independent of the VPC. The pod CIDR block, node CIDR block (VPC CIDR block), and Service CIDR block are independent of each other and cannot overlap. You cannot expand the pod CIDR block.

Pod quota on a node
  • Terway: Pods use the ENIs of the node, so the pod quota is determined by the node specification. For more information about the calculation method, see Work with Terway.
  • Flannel: The pod quota is determined by the size of the pod CIDR block allocated to the node and is independent of the node specification. By default, each node supports 256 pods (for example, a /24 pod CIDR block per node provides 256 addresses).

Cluster scale
  • Terway: Supports 5,000 nodes per cluster by default, and up to 15,000 nodes after you request a quota increase.
  • Flannel: Each node requires a route entry in the VPC route table. Limited by the size of the VPC route table, a VPC supports 200 nodes by default, and up to 1,000 nodes after you request a quota increase.

Network performance
  • Terway: Supports the exclusive ENI mode, in which each pod is assigned its own ENI for optimal network performance. Also supports the DataPathV2 acceleration mode, which uses Extended Berkeley Packet Filter (eBPF) to bypass the network protocol stack of the node when pods access endpoints within the cluster.
  • Flannel: Pods use the network protocol stack of the node for external access. NAT is required, and some IP addresses may be wasted.

IPv4/IPv6 dual-stack
  • Terway: Supported.
  • Flannel: Not supported.

Support for network policies
  • Terway: Supported. You can create fine-grained network policies to control access between pods, as shown in the example that follows this item.
  • Flannel: Not supported.
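
For example, with Terway you can apply a standard Kubernetes NetworkPolicy. The following manifest is a minimal sketch; the namespace, labels, and port are hypothetical placeholders that you would adapt to your own workloads.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend    # hypothetical policy name
  namespace: demo                    # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: backend                   # pods that this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend          # only pods labeled app=frontend may connect
      ports:
        - protocol: TCP
          port: 8080                 # hypothetical application port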

Static IP
  • Terway: Allows you to specify a static IP address for each pod.
  • Flannel: Not supported.

Network security
  • Terway: Allows you to specify a separate vSwitch and a separate security group for each pod.
  • Flannel: Does not support separate vSwitches or security groups for pods.

Session persistence
  • Terway: The backend servers of a Server Load Balancer (SLB) instance connect directly to pods. With session persistence enabled, your services are not interrupted when backend pods change.
  • Flannel: The backend servers of an SLB instance connect to pods through NodePort. Your services may be interrupted when backend pods change.

Multi-cluster intercommunication
  • Terway: Pods in different clusters can communicate with each other if security group rules are configured to allow access to the relevant ports.
  • Flannel: Not supported.

Source IP retention in pods
  • Terway: When a pod accesses other endpoints within the VPC, the source IP address is the pod IP address, which facilitates auditing.
  • Flannel: When a pod accesses other endpoints within the VPC, the source IP address is the node IP address. The pod IP address is not retained.

What to do next

  • After you create the cluster, the CIDR blocks allocated to pods, Services, and nodes cannot be modified. A CIDR block determines the maximum number of IP addresses available to cloud resources, which may affect your workloads. We recommend that you plan the CIDR blocks in advance so that resources are logically isolated for purposes such as access control and custom routing; a sample plan follows this list. For more information, see Plan the network of an ACK cluster.

  • After you plan the CIDR blocks:

    • If you plan to use Terway, see Work with Terway to install Terway during cluster creation.

    • If you plan to use Flannel, see Work with Flannel to install Flannel during cluster creation.
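
For reference, the following values are a minimal sketch of a non-overlapping CIDR plan for a cluster that uses Flannel. The ranges are hypothetical examples, not defaults or recommendations; choose ranges that fit your own VPC design.

  VPC (node) CIDR block:   192.168.0.0/16
  Pod CIDR block:          172.20.0.0/16
  Service CIDR block:      172.21.0.0/20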