Analysis of Alibaba Cloud Container Network Data Link (3): Terway ENIIP

Part 3 of this series mainly introduces the forwarding links of the data plane in Kubernetes Terway ENIIP mode.

By Yu Kai

Co-Author: Xieshi (Alibaba Cloud Container Service)

This article is the third part of the series. It mainly introduces the forwarding links of the data plane in Kubernetes Terway ENIIP mode. On the one hand, understanding the data plane forwarding links in different scenarios helps explain the access performance customers observe and helps customers further optimize their business architecture. On the other hand, with an in-depth understanding of the forwarding links, customer O&M staff and Alibaba Cloud developers know at which link points to capture and observe traffic manually when container network jitter occurs, to further narrow down the direction and cause of the problem.

Terway ENIIP Mode Architecture Design

You can configure multiple auxiliary IP addresses for an Elastic Network Interface (ENI). Depending on the instance type, a single ENI can be allocated 6 to 20 auxiliary IP addresses. In ENI multi-IP mode, these auxiliary IP addresses are assigned to containers, which improves the scale and density of pod deployment.

In terms of network connection mode, Terway supports two solutions: Veth pair policy-based routing and IPVLAN. Terway mainly considers two questions:

  1. How to send the traffic of an ENI's auxiliary IP out of the node through the corresponding ENI, using that ENI's MAC address, without packet loss
  2. How to remain compatible with the 3.10 kernel of CentOS 7.x, which is widely used in Container Service


The CIDR block used by the pod is the same as the CIDR block of the node.


There is one network interface controller inside the Pod, eth0, whose IP is the Pod's IP. The MAC address of this interface does not match the MAC address of any ENI shown in the console. At the same time, the ECS has multiple ethx network interface controllers, indicating that the secondary ENI is not directly mounted into the network namespace of the Pod.


The pod has only a default route pointing to eth0, indicating that the pod uses eth0 as the unified ingress and egress when accessing any CIDR block.


As shown in the figure, through ip addr we can see eth0@if63 in the network namespace of the container. The index 63 helps us find the peer of the veth pair in the ECS OS: running ip addr | grep 63: in the ECS OS locates the virtual network interface controller cali44ae9fbceeb, which is the peer of the veth pair on the ECS OS side.
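
The following is a minimal sketch of that correlation. The container PID placeholder and the interface indexes come from the example above and vary per pod:

    # Read the interface index of the veth peer from inside the container's netns
    nsenter -t <container-pid> -n ip addr show eth0
    # 4: eth0@if63: <BROADCAST,MULTICAST,UP,LOWER_UP> ...

    # On the ECS OS, interface index 63 is the host-side end of the veth pair
    ip addr | grep '^63:'
    # 63: cali44ae9fbceeb@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> ...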


How does the ECS OS know which container data traffic should go to? Through Linux routing on the OS, we can see that all traffic destined for the Pod IP is forwarded to the calico virtual network interface corresponding to that Pod. At this point, the network namespaces of the ECS OS and the Pod have established a complete ingress and egress link configuration.
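
A quick check of that route on the node (the Pod IP and veth name below belong to the centos pod that appears later in this article):

    ip route | grep 10.0.1.91
    # 10.0.1.91 dev cali44ae9fbceeb scope link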


With the veth pair solution, multiple pods share one ENI, which improves the pod deployment density of the ECS. How do we know which ENI a pod is assigned to? Terway pods are deployed on each node as a DaemonSet. You can run the following command to view the Terway pod on each node, and then run the terway-cli show factory command inside it to view the number of secondary ENIs on the node, the MAC address of each ENI, and the IP addresses on each ENI.
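
A sketch of those commands; the Terway pod name suffix and container name are illustrative:

    # Find the Terway pod running on each node
    kubectl -n kube-system get pod -o wide | grep terway-eniip

    # Inspect the secondary ENIs, their MAC addresses, and the auxiliary IPs on each
    kubectl -n kube-system exec -it terway-eniip-xxxxx -c terway -- terway-cli show factory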


Therefore, the Terway ENIIP mode can be summarized as:

  • In terms of network connection, the Veth pair policy-based routing solution is selected.
  • The veth pair connects the network namespaces of the host and the pod. The pod's address comes from an auxiliary IP address of an ENI, and policy-based routing must be configured on the node to ensure that the traffic of the auxiliary IP passes through the ENI to which it belongs. A sketch of this routing follows this list.
  • Containers on the same host communicate directly through routes on the host that point to the veth of the other container.
  • Containers on different hosts communicate through the VPC network, which forwards the traffic to the corresponding machine; the routes on that machine then forward it to the container.
  • Communication between a container and the host it resides on goes through the veth pair connected to the host namespace and the corresponding routes.
  • Traffic from a container to another host is forwarded to the corresponding machine over the VPC network; traffic from another host to a container is forwarded over the VPC network to the corresponding ENI and then to the container's veth through routes.
  • Traffic from containers to leased lines and shared services is also forwarded over the VPC network.
  • Access from a container to the Internet goes through the SNAT gateway configured on the VSwitch, which translates the source IP into an EIP address for the external network.
  • You can configure multiple auxiliary IP addresses for an ENI. Depending on the instance type, a single ENI can be allocated 6 to 20 auxiliary IP addresses. In ENI multi-IP mode, these auxiliary IP addresses are assigned to containers, which improves the scale and density of pod deployment.
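
The following is a minimal sketch of what that policy-based routing looks like on a node. The route table ID, rule priority, and gateway address are assumptions that vary by Terway version and VSwitch:

    # Ingress: traffic destined to the auxiliary IP 10.0.1.104 is routed into the pod's veth
    ip route | grep 10.0.1.104
    # 10.0.1.104 dev calif03b26f9a43 scope link

    # Egress: traffic sourced from the auxiliary IP is looked up in the route table
    # of the secondary ENI that owns the IP
    ip rule list
    # 1536: from 10.0.1.104 lookup 1002

    ip route show table 1002
    # default via 10.0.1.253 dev eth1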

Terway ENIIP Mode Container Network Data Link Analysis

Based on the characteristics of container networks, network links in Terway ENIIP mode can be roughly divided into two major SOP scenarios: Pod IP and SVC. Further subdivided, seven different small SOP scenarios can be summarized.


Under the Terway ENIIP architecture, the different data link access scenarios can be summarized into the following seven categories.

  • Access Pod IP: access a pod on the same node
  • Access Pod IP/SVC IP (Cluster or Local): mutual access between pods on the same node (the pods belong to the same or different ENIs)
  • Access Pod IP: mutual access between pods on different nodes
  • Access the SVC ClusterIP from a node in the cluster that hosts no SVC backend pod
  • In Cluster mode, access the SVC ExternalIP from a node in the cluster that hosts no SVC backend pod
  • In Local mode, access the SVC ExternalIP from a node in the cluster that hosts no SVC backend pod
  • Access the SVC ExternalIP from outside the cluster

Scenario 1: Access Pod IP and Access Pod on the Same Node

Environment


The pod nginx-7d6877d777-zp5jg (IP 10.0.1.104) exists on the cn-hongkong.10.0.1.82 node.

Kernel Routing

The nginx-7d6877d777-zp5jg IP address is 10.0.1.104, the PID of the container on the host is 1094736, and the container network namespace has a default route pointing to container eth0.


This container's eth0 forms a veth pair with calif03b26f9a43 in the ECS OS.


In the ECS OS, there is a route that points to the Pod IP with next hop calixxxx. As described in the preceding section, the calixxx network interface controller forms a veth pair with eth0 inside each pod. Therefore, traffic from the node (or from SVC processing on the node) to this Pod IP hits this route and enters the pod through calixxx rather than going out through the node's default route.

Therefore, the main functions of the calixxx network interface controller are:

  1. Providing the node's access to the pod
  2. When a node or pod accesses the SVC CIDR, the request is processed by the ECS OS kernel protocol stack and then enters the destination pod through calixxx and eth0


Summary

The destination can be accessed.

Packets can be captured on eth0 in the network namespace of nginx-7d6877d777-zp5jg.


Packets can be captured on calif03b26f9a43, the host-side veth of nginx-7d6877d777-zp5jg.
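
A sketch of those captures; the PID and interface names come from the example environment, and ICMP is chosen arbitrarily:

    # Capture inside the pod's network namespace
    nsenter -t 1094736 -n tcpdump -i eth0 -nn icmp

    # Capture on the host-side veth peer in the ECS OS
    tcpdump -i calif03b26f9a43 -nn icmp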

Data Link Forwarding Diagram

  • The traffic passes through the calico network interface controller. Each pod that is not in host network mode forms a veth pair with a calico network interface controller, which is used to communicate with other pods or the node.
  • The entire link does not pass through the ENI allocated to the pod; the traffic hits the IP rule in the OS network namespace and is forwarded.
  • The entire request link is OS → calixxxxx → ECS Pod net eth0.
  • The data link goes through two kernel protocol stacks: the Pod1 protocol stack and the ECS1 protocol stack.

Scenario 2: Access a Pod IP or SVC IP (Cluster or Local) and Access a Pod on the Same Node (The Pods Belong to the Same or Different ENIs)

Environment


The pod nginx-7d6877d777-zp5jg (IP 10.0.1.104) exists on the cn-hongkong.10.0.1.82 node.

The pod centos-67756b6dc8-h5wnp (IP 10.0.1.91) exists on the cn-hongkong.10.0.1.82 node.

The Service is nginx; its ClusterIP is 192.168.2.115 and its ExternalIP is 10.0.3.62.

Kernel Routing

The nginx-7d6877d777-zp5jg IP address is 10.0.1.104, the PID of the container on the host is 1094736, and the container network namespace has a default route pointing to container eth0.


This container's eth0 forms a veth pair with calif03b26f9a43 in the ECS OS.


Using the preceding method, we can find cali44ae9fbceeb, the veth pair peer of centos-67756b6dc8-h5wnp. This pod has only the default route in its network namespace.


In the ECS OS, there is a route that points to the Pod IP with next hop calixxxx. As described in the preceding section, the calixxx network interface controller forms a veth pair with eth0 inside each pod. Therefore, traffic to this Pod IP, including access to the SVC CIDR, hits this route and enters the pod through calixxx rather than going out through the node's default route.

Therefore, the main functions of the calixxx network interface controller are:

  1. Providing the node's access to the pod
  2. When a node or pod accesses the SVC CIDR, the request is processed by the ECS OS kernel protocol stack and then enters the destination pod through calixxx and eth0.


This indicates that the relevant routing is performed at the ECS OS level, with the pods' calixxx network interface controllers serving as bridges.

IPVS Rules on the Source ECS (if the SVC IP Is Accessed)

If the IP of the SVC (ClusterIP or ExternalIP) is accessed on the same node, check the relevant IPVS forwarding rules of the SVC on that node.

The ExternalTrafficPolicy of the Service Is Local

The SVC nginx ClusterIP is 192.168.2.115, the ExternalIP is 10.0.3.62, and the backends are 10.0.1.104 and 10.0.3.58.

cn-hongkong.10.0.1.82

For the ClusterIP of the SVC, both backend pods of the SVC are added to the IPVS forwarding rules.


For the ExternalIP of the SVC, only the backend pod on this node, 10.0.1.104, is added to the IPVS forwarding rules.


In LoadBalancer SVC mode, if the ExternalTrafficPolicy is Local, all SVC backend pods are added to the node's IPVS forwarding rules for the ClusterIP, while for the ExternalIP only the SVC backend pods on the node are added. If the node has no SVC backend pod, pods on that node fail to access the SVC ExternalIP.
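
A sketch of verifying this with ipvsadm, assuming kube-proxy runs in IPVS mode and the service port is 80:

    # ClusterIP: both backends are present
    ipvsadm -Ln -t 192.168.2.115:80
    # -> 10.0.1.104:80  Masq
    # -> 10.0.3.58:80   Masq

    # ExternalIP with ExternalTrafficPolicy=Local: only the local backend remains
    ipvsadm -Ln -t 10.0.3.62:80
    # -> 10.0.1.104:80  Masq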

The ExternalTrafficPolicy of the Service Is Cluster

The SVC nginx1 ClusterIP is 192.168.2.253, the ExternalIP is 10.0.3.63, and the backends are 10.0.1.104 and 10.0.3.58.


cn-hongkong.10.0.1.82

For the ClusterIP of the SVC, both backend pods of the SVC are added to the IPVS forwarding rules.


For the ExternalIP of the SVC, both backend pods of the SVC are also added to the IPVS forwarding rules.


In LoadBalancer SVC mode, if the ExternalTrafficPolicy is Cluster, all SVC backend pods are added to the node's IPVS forwarding rules for both the ClusterIP and the ExternalIP.

Summary

The destination can be accessed.

Conntrack Table Information

The ExternalTrafficPolicy of Service Nginx Is Local

The SVC nginx ClusterIP is 192.168.2.115, the ExternalIP is 10.0.3.62, and the backends are 10.0.1.104 and 10.0.3.58.

1.  If the ClusterIP of the SVC is accessed, the conntrack information shows that src is the source pod 10.0.1.91, dst is the SVC ClusterIP 192.168.2.115, and dport is the SVC port. The reply packet is expected to be sent from 10.0.1.104 to 10.0.1.91.


2.  If the ExternalIP of the SVC is accessed, the conntrack information shows that src is the source pod 10.0.1.91, dst is the SVC ExternalIP 10.0.3.62, and dport is the SVC port. The reply packet is expected to be sent from 10.0.1.104 to 10.0.1.91.


The ExternalTrafficPolicy of Service Nginx1 Is Cluster

The SVC nginx1 ClusterIP is 192.168.2.253, the ExternalIP is 10.0.3.63, and the backends are 10.0.1.104 and 10.0.3.58.

1.  If the ClusterIP of the SVC is accessed, the conntrack information shows that src is the source pod 10.0.1.91, dst is the SVC ClusterIP 192.168.2.253, and dport is the SVC port. The reply packet is expected to be sent from 10.0.1.104 to 10.0.1.91.


2.  If the ExternalIP of the SVC is accessed, the conntrack information shows that src is the source pod 10.0.1.91, dst is the SVC ExternalIP 10.0.3.63, and dport is the SVC port. The reply packet is expected to be sent from the node ECS IP 10.0.1.82 to 10.0.1.91.


In summary, we can see that src changes multiple times, so the real client IP may be lost in Cluster mode.
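
A sketch of reading those entries with the conntrack tool from conntrack-tools; the source port is illustrative:

    conntrack -L | grep 192.168.2.253
    # tcp 6 ... src=10.0.1.91 dst=192.168.2.253 sport=43772 dport=80
    #           src=10.0.1.104 dst=10.0.1.91 sport=80 dport=43772 ...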

Data Link Forwarding Diagram

  • The traffic passes through the calico network interface controller. Each pod that is not in host network mode forms a veth pair with a calico network interface controller, which is used to communicate with other pods or the node.
  • The entire link does not pass through the ENI allocated to the pod; the traffic hits the IP rule in the OS network namespace and is forwarded.
  • The entire request link is ECS1 Pod1 eth0 → Pod1 calixxxxx → Pod2 calixxxxx → ECS1 Pod2 eth0.
  • When the SVC IP is accessed, the SVC IP can be captured on the source pod's eth0 and calixxx interfaces, but not on the destination pod's calixxx and eth0, where the address has already been translated to the pod IP.
  • In LoadBalancer SVC mode, if the ExternalTrafficPolicy is Local, all SVC backend pods are added to the node's IPVS forwarding rules for the ClusterIP, while for the ExternalIP only the SVC backend pods on the node are added.
  • In LoadBalancer SVC mode, if the ExternalTrafficPolicy is Cluster, all SVC backend pods are added to the node's IPVS forwarding rules for both the ClusterIP and the ExternalIP, and the source address cannot be preserved.
  • The data link goes through three kernel protocol stacks: the Pod1 protocol stack, the ECS1 protocol stack, and the Pod2 protocol stack.

Scenario 3: Access Pod IP and Mutual Access between Pods on Different Nodes

Environment


The pod centos-67756b6dc8-h5wnp (IP 10.0.1.91) exists on the cn-hongkong.10.0.1.82 node.

The pod nginx-7d6877d777-lwrfc (IP 10.0.3.58) exists on the cn-hongkong.10.0.3.49 node.

Kernel Routing

The centos-67756b6dc8-h5wnp IP address is 10.0.1.91, the PID of the container on the host is 2211426, and the container network namespace has a default route pointing to container eth0.

Using the preceding method, we can find cali44ae9fbceeb, the veth pair peer of centos-67756b6dc8-h5wnp. This pod has only the default route in its network namespace.


In the ECS OS, there is a route that points to the Pod IP with next hop calixxxx. As described in the preceding section, the calixxx network interface controller forms a veth pair with eth0 inside each pod, so traffic to this Pod IP enters the pod through calixxx rather than through the node's default route.

Therefore, the main functions of the calixxx network interface controller are:

  1. Providing the node's access to the pod
  2. When a node or pod accesses the SVC CIDR, the request is processed by the ECS OS kernel protocol stack and then enters the pod through calixxx and eth0. If the destination is an address outside the node, the traffic leaves the ECS through the ENI to which the pod's auxiliary IP belongs and enters the VPC.


Summary

The destination can be accessed.
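
A sketch of confirming the egress path on ECS1. Treating eth1 as the secondary ENI is an assumption; the real interface can be found with terway-cli show factory:

    # Cross-node pod traffic should appear on the secondary ENI that owns 10.0.1.91
    tcpdump -i eth1 -nn host 10.0.1.91 and host 10.0.3.58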

Data Link Forwarding Diagram

  • The traffic passes through the calico network interface controller. Each pod that is not in host network mode forms a veth pair with a calico network interface controller, which is used to communicate with other pods or the node.
  • The entire link passes through the ENI allocated to the pod; the traffic hits the IP rule in the OS network namespace and is forwarded.
  • After the traffic leaves the ECS, depending on the VSwitches to which the source and destination pod ENIs belong, it either hits a VPC routing rule or is directly forwarded at layer 2 on the VSwitch.
  • The entire request link is ECS1 Pod1 eth0 → ECS1 Pod1 calixxxxx → ECS1 ethx → vpc route rule (if any) → ECS2 ethx → ECS2 Pod2 calixxxxx → ECS2 Pod2 eth0.
  • The data link goes through four kernel protocol stacks: the Pod1 protocol stack, the ECS1 protocol stack, the Pod2 protocol stack, and the ECS2 protocol stack.

Scenario 4: A Non-SVC Backend Pod Node in the Cluster Accesses the SVC ClusterIP

Environment


The pod nginx-7d6877d777-h4jtf (IP 10.0.3.58) exists on the cn-hongkong.10.0.3.49 node.

The pod centos-67756b6dc8-h5wnp (IP 10.0.1.91) exists on the cn-hongkong.10.0.1.82 node.

Service1 is nginx; its ClusterIP is 192.168.2.115 and its ExternalIP is 10.0.3.62.

Service2 is nginx1; its ClusterIP is 192.168.2.253 and its ExternalIP is 10.0.3.63.

Kernel Routing

The kernel routing part has been described in detail in the summary of Scenario 2 and Scenario 3.

IPVS Rules on the Source ECS

According to the IPVS rules on the source ECS summarized in Scenario 2, regardless of the SVC mode, all SVC backend pods are added to the node's IPVS forwarding rules for the ClusterIP.

Summary

The destination can be accessed.

Conntrack Table Information

The ExternalTrafficPolicy of Service Nginx Is Local

The SVC nginx ClusterIP is 192.168.2.115, the ExternalIP is 10.0.3.62, and the backends are 10.0.1.104 and 10.0.3.58.

cn-hongkong.10.0.1.82


On the source ECS, src is the source pod 10.0.1.91, dst is the SVC ClusterIP 192.168.2.115, and dport is the SVC port. The reply packet is expected to be sent from 10.0.3.58 to 10.0.1.91.

cn-hongkong.10.0.3.49


On the destination ECS, src is the source pod 10.0.1.91, dst is the Pod IP 10.0.3.58, and dport is the pod's port. The reply packet is expected to be sent from this pod to 10.0.1.91.

The ExternalTrafficPolicy of Service Nginx1 Is Cluster

The SVC nginx1 ClusterIP is 192.168.2.253, the ExternalIP is 10.0.3.63, and the backends are 10.0.1.104 and 10.0.3.58.

cn-hongkong.10.0.1.82


On the source ECS, src is the source pod 10.0.1.91, dst is the SVC ClusterIP 192.168.2.253, and dport is the SVC port. The reply packet is expected to be sent from 10.0.3.58 to 10.0.1.91.

cn-hongkong.10.0.3.49


On the destination ECS, src is the source pod 10.0.1.91, dst is the Pod IP 10.0.3.58, and dport is the pod's port. The reply packet is expected to be sent from this pod to 10.0.1.91.

For the ClusterIP, the source ECS adds all SVC backend pods to its IPVS forwarding rules. The destination ECS cannot capture any SVC ClusterIP information and only sees the source pod's IP, so the reply packet is returned to the secondary ENI that owns the source pod's IP.

Data Link Forwarding Diagram

  • The traffic passes through the calico network interface controller. Each pod that is not in host network mode forms a veth pair with a calico network interface controller, which is used to communicate with other pods or the node.
  • The entire link passes through the ENI allocated to the pod; the traffic hits the IP rule in the OS network namespace and is forwarded.
  • After the traffic leaves the ECS, depending on the VSwitches to which the source and destination pod ENIs belong, it either hits a VPC routing rule or is directly forwarded at layer 2 on the VSwitch.
  • The entire request link is listed below.

Outbound Direction

ECS1 Pod1 eth0 → ECS1 Pod1 calixxxxx → ECS1 primary network interface controller eth0 → vpc route rule (if any) → ECS2 secondary network interface controller ethx → ECS2 Pod2 calixxxxx → ECS2 Pod2 eth0

Return Direction

ECS2 Pod2 eth0 → ECS2 Pod2 calixxxxx → ECS2 secondary network interface controller ethx → vpc route rule (if any) → ECS1 secondary network interface controller eth1 → ECS1 Pod1 calixxxxx → ECS1 Pod1 eth0

  • For the ClusterIP, the source ECS adds all SVC backend pods to its IPVS forwarding rules. The destination ECS cannot capture any SVC ClusterIP information and only sees the source pod's IP, so the reply packet is returned to the secondary ENI that owns the source pod's IP.
  • The data link goes through four kernel protocol stacks: the Pod1 protocol stack, the ECS1 protocol stack, the Pod2 protocol stack, and the ECS2 protocol stack.

Scenario 5: In Cluster Mode, the Node Where the Non-SVC Backend Pod Is Located in the Cluster Accesses the SVC External IP

Environment


The pod nginx-7d6877d777-h4jtf (IP 10.0.3.58) exists on the cn-hongkong.10.0.3.49 node.

The pod centos-67756b6dc8-h5wnp (IP 10.0.1.91) exists on the cn-hongkong.10.0.1.82 node.

Service2 is nginx1; its ClusterIP is 192.168.2.253 and its ExternalIP is 10.0.3.63.

Kernel Routing

The kernel routing part has been described in detail in the summary of Scenario 2 and Scenario 3.

IPVS Rules on the Source ECS

According to the IPVS rules on the source ECS summarized in Scenario 2, when the ExternalTrafficPolicy is Cluster, all SVC backend pods are added to the node's IPVS forwarding rules for the ExternalIP.

Summary

The destination can be accessed.

Conntrack Table Information

The ExternalTrafficPolicy of Service Nginx1 Is Cluster

The SVC nginx1 ClusterIP is 192.168.2.253, the ExternalIP is 10.0.3.63, and the backends are 10.0.1.104 and 10.0.3.58.

cn-hongkong.10.0.1.82


On the source ECS, src is the source pod 10.0.1.91, dst is the SVC ExternalIP 10.0.3.63, and dport is the SVC port. The reply packet is expected to be sent from 10.0.3.58 to the source ECS address 10.0.1.82.

cn-hongkong.10.0.3.49


On the destination ECS, src is the IP address 10.0.1.82 of the ECS where the source pod resides, dst is the Pod IP 10.0.3.58, and dport is the pod's port. The reply packet is expected to be sent from this pod to the source ECS address 10.0.1.82.

When the ExternalTrafficPolicy is Cluster, for the ExternalIP the source ECS adds all SVC backend pods to its IPVS forwarding rules. The destination ECS cannot capture any SVC ExternalIP information and only sees the IP of the ECS where the source pod resides. So the reply packet is returned to the primary network interface controller of the ECS where the source pod resides, which is different from accessing the ClusterIP in Scenario 4.

Data Link Forwarding Diagram

  • The traffic passes through the calico network interface controller. Each pod that is not in host network mode forms a veth pair with a calico network interface controller, which is used to communicate with other pods or the node.
  • The entire link passes through the ENI allocated to the pod; the traffic directly hits the IP rule in the OS network namespace and is forwarded.
  • After the traffic leaves the ECS, depending on the VSwitches to which the source and destination pod ENIs belong, it either hits a VPC routing rule or is directly forwarded at layer 2 on the VSwitch.
  • The entire request link is ECS1 Pod1 eth0 → ECS1 Pod1 calixxxxx → ECS1 primary ENI eth0 → vpc route rule (if any) → ECS2 secondary network interface controller ethx → ECS2 Pod2 calixxx → ECS2 Pod2 eth0.
  • When the ExternalTrafficPolicy is Cluster, for the ExternalIP the source ECS adds all SVC backend pods to its IPVS forwarding rules. The destination ECS cannot capture any SVC ExternalIP information and only sees the IP of the ECS where the source pod resides, so the reply packet is returned to the primary network interface controller of the ECS where the source pod resides.
  • The data link goes through four kernel protocol stacks: the Pod1 protocol stack, the ECS1 protocol stack, the Pod2 protocol stack, and the ECS2 protocol stack.

Scenario 6: In Local Mode, the Node Where the Non-SVC Backend Pod Is Located in the Cluster Accesses SVC External IP

Environment


The pod nginx-7d6877d777-h4jtf (IP 10.0.3.58) exists on the cn-hongkong.10.0.3.49 node.

The pod centos-67756b6dc8-h5wnp (IP 10.0.1.91) exists on the cn-hongkong.10.0.1.82 node.

Service1 is nginx; its ClusterIP is 192.168.2.115 and its ExternalIP is 10.0.3.62.

Kernel Routing

The kernel routing part has been described in detail in the summary of Scenario 2 and Scenario 3.

IPVS Rules on the Source ECS

The ExternalTrafficPolicy of the Service Is Local

The SVC nginx ClusterIP is 192.168.2.115, the ExternalIP is 10.0.3.62, and the backends are 10.0.1.104 and 10.0.3.58.


cn-hongkong.10.0.1.82

For the ExternalIP of the SVC, you can see that no forwarding rule to any SVC backend exists.


According to the IPVS rules on the source ECS in Scenario 2, when the ExternalTrafficPolicy is Local, only the backend pods of the SVC on the current node are added to the node's IPVS forwarding rules for the ExternalIP. If the node has no SVC backend, there are no forwarding rules.
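
A sketch of confirming the missing rule on cn-hongkong.10.0.1.82; the service port 80 is an assumption:

    # No virtual service entry exists for the ExternalIP on a node without a local backend
    ipvsadm -Ln -t 10.0.3.62:80
    # (no destination entries are listed)

    # A request from the local pod therefore fails
    kubectl exec -it centos-67756b6dc8-h5wnp -- curl -m 2 10.0.3.62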

Summary

The destination cannot be accessed.

Conntrack Table Information

The ExternalTrafficPolicy of the Service Is Local

The SVC nginx ClusterIP is 192.168.2.115, the ExternalIP is 10.0.3.62, and the backends are 10.0.1.104 and 10.0.3.58.

No conntrack records exist on cn-hongkong.10.0.1.82.

Data Link Forwarding Diagram

  • The traffic passes through the calico network interface controller. Each pod that is not in host network mode forms a veth pair with a calico network interface controller, which is used to communicate with other pods or the node.
  • The entire link does not pass through the ENI allocated to the pod; the traffic directly hits the IP rule in the OS network namespace and is forwarded.
  • The entire request link is ECS1 Pod1 eth0 → ECS1 Pod1 calixxxxx → ECS host namespace ipvs/iptables rules, where the link terminates because there is no backend endpoint to forward to.
  • If the ExternalTrafficPolicy is Local, only the SVC backend pods on the current node are added to the node's forwarding rules for the ExternalIP. If the node has no SVC backend, no forwarding rules exist and access fails.

Scenario 7: Access SVC External IP outside the Cluster

Environment


The pod nginx-7d6877d777-h4jtf (IP 10.0.3.58) exists on the cn-hongkong.10.0.3.49 node.

The pod nginx-7d6877d777-kxwdb (IP 10.0.1.29) exists on the cn-hongkong.10.0.1.47 node.

Service1 is nginx; its ClusterIP is 192.168.2.115 and its ExternalIP is 10.0.3.62.

SLB-Related Configurations

In the SLB console, you can see that the backend server group of the virtual server group of lb-j6cw3daxxukxln8xccive consists of the ENIs of the two backend nginx pods: eni-j6c4qxbpnkg5o7uog5kr and eni-j6c6r7m3849fodxdf5l7.


From the perspective outside the cluster, the backend virtual server group of the SLB contains the ENIs to which the backend pods of the SVC belong, and the internal IP addresses are the pod addresses.
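
A sketch of checking this from the CLI with the SLB DescribeVServerGroupAttribute API; the virtual server group ID is hypothetical and the output is abbreviated:

    aliyun slb DescribeVServerGroupAttribute --VServerGroupId rsp-xxxxxxxx
    # "BackendServers": [
    #   {"ServerId": "eni-j6c4qxbpnkg5o7uog5kr", "Type": "eni", "ServerIp": "10.0.3.58"},
    #   {"ServerId": "eni-j6c6r7m3849fodxdf5l7", "Type": "eni", "ServerIp": "10.0.1.29"}
    # ]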

Summary

The destination can be accessed.

Data Link Forwarding Diagram

  • Data Link: client → SLB → Pod ENI + Pod Port → ECS1 Pod1 eth0
  • The data link must go through the kernel protocol stack twice: the Pod1 protocol stack and the ECS protocol stack.

Summary

This article focuses on ACK data link forwarding paths in different SOP scenarios in Terway ENIIP mode. Driven by customers' demand for extreme performance, Terway ENIIP can be divided into seven SOP scenarios. The forwarding links, technical implementation principles, and cloud product configurations of these seven scenarios are sorted out and summarized, providing a preliminary guide for dealing with link jitter, optimal configuration, and link principles under the Terway ENIIP architecture. In Terway ENIIP mode, a veth pair connects the network namespaces of the host and the pod. The pod's address comes from an auxiliary IP address of the Elastic Network Interface, and policy-based routing needs to be configured on the node to ensure that the traffic of the auxiliary IP passes through the Elastic Network Interface to which it belongs. In this way, one ENI can be shared by multiple pods, which improves pod deployment density. However, the veth pair uses the ECS kernel protocol stack for forwarding, so the performance of this architecture is not as good as that of the ENI mode. To improve performance, ACK developed the Terway eBPF + IPVLAN architecture based on the eBPF and IPVLAN technologies of the kernel.
