In Container Service for Kubernetes (ACK) Edge clusters, an Ingress is an API object that provides Layer 7 load balancing. You can use Ingresses to manage external access to Services in the cluster. This topic introduces Ingresses and describes how an Ingress controller works and how to deploy one.
Concepts
In an ACK Edge cluster, an Ingress serves as the entry point for Services that are accessed from the Internet and handles almost all inbound traffic of the cluster. An Ingress is a Kubernetes resource that manages external access to the Services in a Kubernetes cluster. By configuring Ingress resources, you can define forwarding rules that route requests to the backend pods associated with each Service in the cluster. For more information about how Ingresses work, see Ingress overview.
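As a minimal sketch, the following Ingress routes requests for a placeholder domain and path to a backend Service. The host `demo.example.com`, the Service name `web-svc`, and the Ingress class `nginx` are assumptions for illustration only:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx    # assumes the NGINX Ingress controller is installed
  rules:
  - host: demo.example.com   # placeholder domain
    http:
      paths:
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-svc    # placeholder Service name
            port:
              number: 80
```

After this Ingress is created, HTTP requests for `demo.example.com/web` are forwarded by the Ingress controller to the pods behind `web-svc`.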
You can configure only HTTP traffic routing rules on Ingresses. Ingresses do not natively support advanced features such as load balancing algorithms and session affinity. To use these features, you must configure them on the Ingress controller.
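For example, with the NGINX Ingress controller, session affinity can be enabled through controller-specific annotations rather than the Ingress spec itself. The following is a sketch that assumes the NGINX Ingress controller is installed; the Service name `web-svc` and the cookie name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-ingress
  annotations:
    # NGINX Ingress controller annotations, not part of the Ingress API itself
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"  # placeholder cookie name
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc   # placeholder Service name
            port:
              number: 80
```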
How to deploy an Ingress controller in an ACK Edge cluster
ACK Edge clusters are built on ACK Pro clusters and extend them with edge node pools that connect edge nodes to the data center. For more information about node pools, see Node pool overview. An ACK Edge cluster consists of the following two parts:
Cloud node pool: Contains cloud resources, such as Elastic Compute Service (ECS) instances, within the cluster's virtual private cloud (VPC).
Edge node pool: A cluster can contain multiple edge node pools, which are primarily used to connect edge nodes and the data center.
An Ingress controller is the entry point for requests. It forwards HTTP and HTTPS requests to the backend pods associated with the corresponding Service. You can deploy an Ingress controller by using the following methods:
| Deployment method | Feature | Cloud-edge network mode/Traffic topology |
| --- | --- | --- |
| Node pool deployment | Deploy a set of Ingress controllers in each required node pool in the cluster. Only the NGINX Ingress is supported. For more information, see Install the NGINX Ingress controller. | A Service topology must be configured so that requests are forwarded only within the same node pool. For more information, see Configure a Service topology. |
| Cloud deployment | Deploy a set of Ingress controllers only in the cloud node pool. Both the NGINX Ingress and the Application Load Balancer (ALB) Ingress are supported. For more information, see Install the NGINX Ingress controller and Manage the ALB Ingress controller. | Leased lines with traffic topology disabled are used. |
Node pool deployment
Deploys Ingress controllers in both the cloud node pool and each required edge node pool.
The Ingress controller in the cloud node pool uses LoadBalancer Services to provide Internet-facing services. The IP address of a Classic Load Balancer (CLB) instance is used as the endpoint.
The Ingress controller in the edge node pool uses NodePort Services to provide Internet-facing services. The IP address of any node within the node pool can be used as the endpoint.
You must configure a Service topology to ensure that requests are forwarded to the backend pods associated with the Service within the same node pool through the Ingress controller. For more information, see Configure a Service topology.
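The steps above can be sketched as a NodePort Service for the Ingress controller in an edge node pool. The `openyurt.io/topologyKeys` annotation is the OpenYurt-style Service topology setting that restricts endpoints to the same node pool; the Service name, node port, and selector are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb        # placeholder Service name
  annotations:
    # Restricts traffic to backend pods in the same node pool;
    # see Configure a Service topology for the supported keys
    openyurt.io/topologyKeys: openyurt.io/nodepool
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080             # example port; <any node IP in the pool>:30080 is the endpoint
  selector:
    app: ingress-nginx          # placeholder selector for the Ingress controller pods
```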
Cloud deployment
Deploys Ingress controllers only in the cloud node pool.
Make sure that the cloud node pool and the edge node pools are connected through a leased line for internal network communication, including interconnection between the host network and the container network.
The Ingress controller in the cloud node pool uses LoadBalancer Services to provide Internet-facing services. The IP address of a CLB instance is used as the endpoint.
Requests are forwarded through the Ingress controller to the backend pods associated with the Service. Traffic topology is not used in this method.
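For the cloud deployment method, the Ingress controller is typically exposed through a LoadBalancer Service backed by a CLB instance. The following is a sketch; the Service name and selector are placeholders, and the `alibaba-cloud-loadbalancer-address-type` annotation is the ACK-specific setting that controls whether the CLB instance is Internet-facing:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb        # placeholder Service name
  annotations:
    # ACK-specific annotation; "internet" creates an Internet-facing CLB instance
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: "internet"
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: ingress-nginx          # placeholder selector for the Ingress controller pods
```

The IP address that the CLB instance assigns to this Service is the endpoint that external clients use to reach the cluster.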