Terway is an open source Container Network Interface (CNI) plug-in developed by Alibaba Cloud. Terway works with Virtual Private Cloud (VPC) and allows you to use standard Kubernetes network policies to regulate how containers communicate with each other.
Before you begin
This topic describes how the Terway network plug-in works. We recommend that you read this topic before you use Terway. Before you read this topic, make sure that you understand the terms used in container network plug-ins and how to select a container network plug-in. For more information, see Networking overview and Comparison between Terway and Flannel.
Before you create a cluster, you must plan the CIDR blocks of the cluster. For more information, see Plan the network of an ACK cluster.
Billing
Terway is free of charge. However, Terway deploys pods on each node, and these pods consume a small amount of node resources. For more information about the billing of Alibaba Cloud services used by ACK, see Cloud service billing.
Method for calculating the maximum number of pods on a node
In Terway mode, the maximum number of pods on a node depends on the number of elastic network interfaces (ENIs) provided by the Elastic Compute Service (ECS) instance type. Terway also enforces a lower limit: the maximum number of pods on a node must be at least equal to this lower limit so that the node can join a cluster. The following table describes the details.
| Terway mode | Maximum number of pods on a node | Example | Maximum number of pods that support static IP addresses, separate vSwitches, and separate security groups on a node |
| --- | --- | --- | --- |
| Shared ENI mode | (EniQuantity - 1) × EniPrivateIpAddressQuantity, where EniQuantity is the number of ENIs provided by an ECS instance type and EniPrivateIpAddressQuantity is the number of private IP addresses provided by each ENI. Note: The maximum number of pods on a node must be greater than 11 so that the node can join a cluster. | In this example, the general-purpose ecs.g7.4xlarge instance type is used. This instance type provides 8 ENIs, and each ENI provides 30 private IP addresses. The maximum number of pods on a node is (8 - 1) × 30 = 210. Important: The maximum number of pods that can use ENIs on a node is a fixed value determined by the instance type. If you modify the value of the | 0 |
| Shared ENI + Trunk ENI mode | EniTotalQuantity - EniQuantity, where EniTotalQuantity is the maximum number of network interfaces supported by an ECS instance type and EniQuantity is the number of ENIs provided by the instance type. | | |
| Exclusive ENI mode | EniQuantity - 1. Note: The maximum number of pods on a node must be greater than 6 so that the node can join a cluster. | In this example, the general-purpose ecs.g7.4xlarge instance type is used. This instance type supports 8 ENIs. The maximum number of pods supported by a node is 8 - 1 = 7. | EniQuantity - 1 |
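The formulas in the preceding table can be sketched in a short script. The helper function names are illustrative, not part of Terway; the example values for ecs.g7.4xlarge (8 ENIs, 30 private IP addresses per ENI) come from the examples in the table.

```python
# Sketch of the maximum-pods formulas from the table above.
# The example values for ecs.g7.4xlarge come from this topic:
# 8 ENIs (EniQuantity), 30 private IP addresses per ENI (EniPrivateIpAddressQuantity).

def max_pods_shared_eni(eni_quantity: int, eni_private_ip_quantity: int) -> int:
    """Shared ENI mode: (EniQuantity - 1) x EniPrivateIpAddressQuantity."""
    return (eni_quantity - 1) * eni_private_ip_quantity


def max_pods_exclusive_eni(eni_quantity: int) -> int:
    """Exclusive ENI mode: EniQuantity - 1 (the primary ENI serves the node OS)."""
    return eni_quantity - 1


def max_trunk_pods(eni_total_quantity: int, eni_quantity: int) -> int:
    """Shared ENI + Trunk ENI mode: EniTotalQuantity - EniQuantity pods can use
    static IP addresses, separate vSwitches, and separate security groups."""
    return eni_total_quantity - eni_quantity


print(max_pods_shared_eni(8, 30))  # 210 for ecs.g7.4xlarge
print(max_pods_exclusive_eni(8))   # 7 for ecs.g7.4xlarge
```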
In Terway v1.11.0 and later versions, you can create node pools that run in exclusive ENI mode or shared ENI mode, and these two types of node pools can exist in the same cluster. For more information, see Terway.
View the maximum number of pods supported by the node network
Method 1: When you create a node pool in the ACK console, you can view the maximum number of pods supported by an instance type in the Terway Mode (Supported Pods) column of the Instance Type section.
Method 2: Perform the following steps to manually calculate the maximum number of pods supported by an ECS instance type.
Read the relevant documentation to obtain the number of ENIs provided by the instance type. For more information, see Overview of instance families.
Query the information in OpenAPI Explorer. Specify the instance type of the node in the InstanceTypes parameter and click Initiate Call. In the response, the EniQuantity parameter indicates the number of ENIs provided by the instance type, the EniPrivateIpAddressQuantity parameter indicates the number of private IP addresses provided by each ENI, and the EniTotalQuantity parameter indicates the maximum number of network interfaces supported by the instance type.
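As a sketch, the values returned by the query can be read out of the response and plugged into the formulas above. The response excerpt below is hypothetical and heavily simplified; only the three parameter names match the parameters described in this topic.

```python
# Hypothetical, simplified excerpt of a response from the query described above.
# Only EniQuantity, EniPrivateIpAddressQuantity, and EniTotalQuantity are the
# parameters documented in this topic; the surrounding structure is illustrative.
sample_response = {
    "InstanceTypes": {
        "InstanceType": [
            {
                "InstanceTypeId": "ecs.g7.4xlarge",
                "EniQuantity": 8,
                "EniPrivateIpAddressQuantity": 30,
                "EniTotalQuantity": 10,  # placeholder value for illustration
            }
        ]
    }
}

info = sample_response["InstanceTypes"]["InstanceType"][0]
shared_max = (info["EniQuantity"] - 1) * info["EniPrivateIpAddressQuantity"]
exclusive_max = info["EniQuantity"] - 1
print(f"{info['InstanceTypeId']}: shared={shared_max}, exclusive={exclusive_max}")
```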
Install Terway when you create a cluster
You can select Terway as the network plug-in only when you create a cluster. You cannot change the plug-in after the cluster is created.
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, click Create Kubernetes Cluster.
Configure the key network parameters for Terway. For more information about other cluster parameters, see Create an ACK managed cluster.
Parameter
Description
IPv6 Dual-stack
Select Enable to create a dual-stack cluster that supports IPv4 and IPv6. This feature is in public preview. To use this feature, submit an application in the Quota Center console.
Important: Only clusters that run Kubernetes 1.22 or later support IPv4/IPv6 dual stack.
If you enable IPv4/IPv6 dual stack, take note of the following items:
- IPv4 addresses are used for communication between worker nodes and the control plane.
- You must select Terway as the network plug-in.
- If you use the shared ENI mode of Terway, the ECS instance type must support IPv6 addresses, and the number of IPv6 addresses supported by the instance type must be the same as the number of IPv4 addresses. Otherwise, ECS instances of the specified type cannot be added to the cluster. For more information about ECS instance types, see Overview of instance families.
- You must use a VPC and ECS instances that support IPv4/IPv6 dual stack.
- You must disable IPv4/IPv6 dual stack if you want to use elastic Remote Direct Memory Access (eRDMA) in the cluster.
VPC
The VPC used by the cluster.
Network Plug-in
Select Terway.
DataPath V2
If you select this check box, the DataPath V2 acceleration mode is used. After you select the DataPath V2 acceleration mode, Terway adopts a traffic forwarding link that is different from the shared ENI mode for faster network communication. For more information about the features of DataPath V2, see Network acceleration.
Support for NetworkPolicy
If you select this check box, Kubernetes-native NetworkPolicies are supported.
Note: In Terway V1.9.2 and later, NetworkPolicies in new clusters are implemented by using extended Berkeley Packet Filter (eBPF). When this feature is enabled, eBPF also handles traffic from pods that access Services within the container network.
Note: The feature of managing NetworkPolicies by using the console is in public preview. To use this feature, log on to the Quota Center console and submit an application.
Support for ENI Trunking
If you select this check box, the Trunk ENI feature is enabled. You can specify a static IP address, a separate vSwitch, and a separate security group for each pod.
Note: You can select the Support for ENI Trunking option for an ACK managed cluster without the need to submit an application. If you want to enable the Trunk ENI feature in an ACK dedicated cluster, log on to the Quota Center console and submit an application.
By default, the Trunk ENI feature is enabled for newly created ACK managed clusters that run Kubernetes 1.31 or later versions.
vSwitch
The CIDR block of the vSwitch used by the nodes in the cluster. We recommend that you select at least 3 vSwitches in different zones to ensure high availability.
Pod vSwitch
The CIDR block of the vSwitch used by pods. This CIDR block can overlap with the vSwitch CIDR block of the nodes.
Service CIDR
The Service CIDR block cannot overlap with the node or pod CIDR block.
IPv6 Service CIDR
This parameter can be configured only after IPv4/IPv6 dual stack is enabled.
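The CIDR constraints above (the Service CIDR block must not overlap with the node or pod CIDR blocks) can be checked with Python's standard ipaddress module. The CIDR blocks below are hypothetical placeholders, not recommended values.

```python
import ipaddress

# Hypothetical CIDR plan used only for illustration; replace the blocks with
# the CIDR blocks that you planned for your cluster.
node_vswitch = ipaddress.ip_network("192.168.0.0/24")
pod_vswitch = ipaddress.ip_network("192.168.32.0/19")
service_cidr = ipaddress.ip_network("172.16.0.0/16")

# The Service CIDR block cannot overlap with the node or pod CIDR blocks.
for name, block in (("node vSwitch", node_vswitch), ("pod vSwitch", pod_vswitch)):
    if service_cidr.overlaps(block):
        raise ValueError(f"Service CIDR overlaps with the {name} CIDR block")
print("Service CIDR does not overlap with the node or pod CIDR blocks")
```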
Introduction to Terway modes
The following sections describe how Terway works.
Shared ENI mode and exclusive ENI mode
When you assign an IP address to a pod, Terway operates in one of two modes: Shared ENI Mode or Exclusive ENI Mode.
In Terway v1.11.0 and later versions, you select shared ENI mode or exclusive ENI mode for each node pool. The mode is no longer selected when you create the cluster.
The primary ENI of a node is allocated to the node OS. Terway uses the remaining ENIs to configure the pod network. Therefore, do not manually configure these ENIs. If you need to manually manage specific ENIs, see Configure an ENI filter.
| Item | | Shared ENI mode | Exclusive ENI mode |
| --- | --- | --- | --- |
| Pod IP address management | ENI allocation mode | Multiple pods share the same ENI. | Each pod is assigned a separate ENI. |
| | Pod deployment density | High. Hundreds of pods can be deployed on a single node. | Low. Only a single-digit number of pods can be deployed on a regular node. |
| Network architecture | | | |
| Data links | | When a pod accesses other pods or is accessed as a Service backend, traffic passes through the network protocol stack of the node. | When a pod accesses a Service, traffic still passes through the protocol stack of the node OS. However, when a pod accesses other pods or is accessed as a Service backend, the attached ENI is used directly to bypass the node network protocol stack for higher performance. |
| Use scenarios | | Common Kubernetes scenarios. | Network performance comparable to that of traditional virtual machines. Suitable for scenarios that require high network throughput or low latency. |
| Network acceleration | | DataPath V2 network acceleration is supported. For more information, see Network acceleration. | Network acceleration is not supported. The exclusive ENI resources of a pod already provide excellent network performance. |
| Support for NetworkPolicy | | Kubernetes-native NetworkPolicies are supported. | The NetworkPolicy feature is not supported. |
| Access control | | The Trunk ENI feature allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod. For more information, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod. | By default, you can configure a static IP address, a separate vSwitch, and a separate security group for each pod. |
Network acceleration
If you use the shared ENI mode of Terway, you can enable the network acceleration mode. After you enable the acceleration mode, Terway uses a traffic forwarding path different from that of the regular shared ENI mode to achieve higher performance. DataPath V2 is supported in Terway. For more information about the features of DataPath V2, see the following description.
DataPath V2 is an upgraded version of the earlier IPVLAN + eBPF acceleration mode. When you create a cluster that is installed with Terway 1.8.0 or later, you can select only DataPath V2.
The DataPath V2 and IPVLAN + eBPF acceleration modes apply only to node pools that run in shared ENI mode. They are not applicable to node pools that run in exclusive ENI mode.
| DataPath V2 feature | Description |
| --- | --- |
| Applicable Terway version | Clusters that are installed with Terway 1.8.0 or later. |
| Network architecture | |
| Accelerated data link | |
| Performance optimization | |
| Usage method | Create a cluster, set Network Plug-in to Terway, and select DataPath V2. |
| Usage notes | |
In existing clusters, the earlier IPVLAN + eBPF acceleration mode may still be selected.
Access control
The shared ENI mode of Terway supports the NetworkPolicy feature and the Support for ENI Trunking option, which facilitate fine-grained management of network traffic in a cluster. The exclusive ENI mode of Terway also supports specific traffic control capabilities.
Support for NetworkPolicy
Node pools that run in exclusive ENI mode of Terway do not support the NetworkPolicy feature.
Node pools that run in shared ENI mode of Terway support the Kubernetes-native NetworkPolicy feature. Network policies allow you to use user-defined rules to control network traffic between pods.
To enable this feature, create a cluster, set Network Plug-in to Terway, and select Support for NetworkPolicy. For more information, see Use network policies in ACK clusters.
Note: The feature of managing NetworkPolicies by using the console is in public preview. To use this feature, log on to the Quota Center console and submit an application.
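As a minimal sketch of what the feature enables, the following Kubernetes-native NetworkPolicy allows pods labeled app=backend to accept ingress traffic only from pods labeled app=frontend in the same namespace. The names and labels are hypothetical examples, not values required by Terway.

```yaml
# Hypothetical example: restrict ingress traffic to backend pods so that only
# frontend pods in the same namespace can reach them.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```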
Configure a static IP address, a separate vSwitch, and a separate security group for a pod
Node pools that run in exclusive ENI mode of Terway allow you to specify a static IP address, a separate vSwitch, and a separate security group for each pod. This allows you to manage and isolate user traffic, configure network policies, and manage IP addresses in a fine-grained manner.
The Trunk ENI feature is an option in node pools that run in shared ENI mode of Terway. The Trunk ENI feature allows you to specify a static IP address, a separate vSwitch, and a separate security group for each pod.
Create a cluster, set Network Plug-in to Terway, and select Support for ENI Trunking. For more information, see Configure a static IP address, a separate vSwitch, and a separate security group for each pod.
Note: You can select the Support for ENI Trunking option for an ACK managed cluster without the need to submit an application. If you want to enable the Trunk ENI feature in an ACK dedicated cluster, log on to the Quota Center console and submit an application.
By default, the Trunk ENI feature is enabled for newly created ACK managed clusters that run Kubernetes 1.31 or later versions.
After the Trunk ENI feature is enabled, the terway-eniip and terway-controlplane components are installed.
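As a rough sketch of how the installed components are typically driven, the following PodNetworking resource pins matching pods to a separate vSwitch and security group with fixed IP addresses. The exact schema can differ between Terway versions, and the vSwitch ID, security group ID, and labels are placeholders; see Configure a static IP address, a separate vSwitch, and a separate security group for each pod for the authoritative configuration.

```yaml
# Hypothetical sketch of a Terway PodNetworking resource. Field names may vary
# between Terway versions; IDs and labels below are placeholders.
apiVersion: network.alibabacloud.com/v1beta1
kind: PodNetworking
metadata:
  name: fixed-ip-example
spec:
  selector:
    podSelector:
      matchLabels:
        app: stateful-app      # placeholder label for the target pods
  vSwitchOptions:
    - vsw-xxxxxxxx             # placeholder vSwitch ID
  securityGroupIDs:
    - sg-xxxxxxxx              # placeholder security group ID
  allocationType:
    type: Fixed                # request a static (fixed) IP address
    releaseStrategy: TTL
    releaseAfter: 5m0s
```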