This topic provides answers to some frequently asked questions (FAQ) about Application Load Balancer (ALB) Ingresses.
Table of contents
Why do ALB Ingress rules fail to take effect?
What are the differences between ALB Ingresses and NGINX Ingresses?
ALB Ingresses listen for requests sent to the kube-system-fake-svc-80 server group by default. What is the purpose of the server group?
Can I enable internal access and external access for ALB Ingresses at the same time?
Why am I unable to view the ALB Ingress controller pod in the cluster?
How do I ensure that the ALB domain name used by an ALB Ingress does not change?
Is an ALB instance automatically created if I choose to use an ALB Ingress when I create an ACK managed cluster?
Why are the ALB configuration changes that I made in the ALB console lost, the rules that I added deleted, and access logs disabled?
What do I do if the HTTP 503 status code is returned when I delete a forwarding rule of an ALB Ingress immediately after I create the forwarding rule?
What do I do if an ALB Ingress does not encounter an error but changes do not take effect?
Why are some listeners deleted after I create an ALB instance in the console and run the kubectl apply command to update the network ACL configuration in the AlbConfig?
How do I reduce the server reconciliation time when Services perform pod scaling?
How do I enable the system to automatically assign weights to nodes when a cluster uses the Flannel network plug-in and the Local mode is enabled for the Service?
Why do ALB Ingress rules fail to take effect?
ALB Ingresses maintain routing rules in inline mode. If multiple ALB Ingresses use the same ALB instance and the configuration of one ALB Ingress contains errors, the other ALB Ingresses do not take effect.
If the ALB Ingresses that you create do not take effect, an ALB Ingress that was created earlier and uses the same ALB instance may contain an error. In this case, locate that ALB Ingress and fix the error so that the newly created ALB Ingresses can take effect.
What are the differences between ALB Ingresses and NGINX Ingresses?
We recommend that you use ALB Ingresses. NGINX Ingresses must be manually maintained, whereas ALB Ingresses are developed based on ALB, a fully managed and maintenance-free cloud service. ALB Ingresses can serve as high-performance gateways and provide powerful Ingress traffic management capabilities.
ALB Ingresses listen for requests sent to the kube-system-fake-svc-80 server group by default. What is the purpose of the server group?
You must create a default forwarding rule before you can create a listener, and each forwarding rule can be associated with only one server group. The kube-system-fake-svc-80 server group is a placeholder server group that is used by the default forwarding rule. The server group does not process requests and cannot be deleted.
Can I enable internal access and external access for ALB Ingresses at the same time?
Yes. You can enable internal access and external access for ALB Ingresses at the same time. If you want to enable internal access and external access for an ALB Ingress at the same time, you need to create an Internet-facing ALB instance. The ALB instance automatically creates an elastic IP address (EIP) in each zone and uses the EIPs to communicate with the Internet. The ALB instance is also assigned a private virtual IP address (VIP). You can use the VIP to access the ALB instance over an internal network. If you want to enable only internal access for an ALB Ingress, you can create an internal-facing ALB instance.
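For example, the following AlbConfig sketch creates an Internet-facing ALB instance that supports both internal access and external access. The instance name and vSwitch IDs are placeholders; set addressType to Intranet instead if you want to enable only internal access.
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo
spec:
  config:
    name: alb-demo
    addressType: Internet        # Internet-facing: EIPs for external access plus a private VIP for internal access. Use Intranet for internal access only.
    zoneMappings:
    - vSwitchId: vsw-placeholder-1   # placeholder vSwitch in zone 1
    - vSwitchId: vsw-placeholder-2   # placeholder vSwitch in zone 2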
Why am I unable to view the ALB Ingress controller pod in the cluster?
You can view the ALB Ingress controller pod in the kube-system namespace only if your cluster is a Container Service for Kubernetes (ACK) dedicated cluster. You cannot view the ALB Ingress controller pod in ACK Basic, ACK Pro, and Alibaba Cloud Container Compute Service (ACS) clusters because the ALB Ingress controller is a fully managed component in these clusters.
How do I ensure that the ALB domain name used by an ALB Ingress does not change?
After you use an AlbConfig to create an ALB instance, the ALB Ingress uses an IngressClass to reference the AlbConfig. This allows the ALB Ingress to use the domain name of the ALB instance. If you do not modify the IngressClass associated with the ALB Ingress or the AlbConfig, the domain name remains unchanged.
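The following sketch shows how an IngressClass typically references an AlbConfig named alb-demo; the names are placeholders.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  controller: ingress.k8s.alibabacloud/alb
  parameters:
    apiGroup: alibabacloud.com
    kind: AlbConfig
    name: alb-demo          # the AlbConfig that created the ALB instance
As long as the ALB Ingress continues to specify ingressClassName: alb and this parameters setting is not changed, the Ingress keeps using the same ALB instance and its domain name.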
Is an ALB instance automatically created if I choose to use an ALB Ingress when I create an ACK managed cluster?
No, an ALB instance is not automatically created. If you choose to use an ALB Ingress when you create an ACK managed cluster, the system automatically installs the ALB Ingress controller but does not create an ALB instance.
Why are the ALB configuration changes that I made in the ALB console lost, the rules that I added deleted, and access logs disabled?
To modify the configuration of an ALB Ingress, you need to modify the configuration saved in the ALB Ingress or AlbConfig on the API server of the cluster. If you modify the ALB configuration in the ALB console, the changes are not synchronized to the API server. As a result, the changes do not take effect. In addition, an internal call or a cluster operation will trigger ACS to overwrite the ALB configuration in the ALB console with the configuration saved in the ALB Ingress or AlbConfig. We recommend that you modify the configuration saved in the ALB Ingress or AlbConfig.
What do I do if the HTTP 503 status code is returned when I delete a forwarding rule of an ALB Ingress immediately after I create the forwarding rule?
Check whether the ALB Ingresses that correspond to the forwarding rule contain the canary: true annotation. To perform canary releases, you need to redirect traffic from the old Service version to the canary version. Therefore, you do not need to add the canary: true annotation to the ALB Ingress of the old Service version. For more information about how to use ALB Ingresses to implement canary releases, see Use ALB Ingresses to perform canary releases.
Canary releases support only two Ingresses and a limited number of forwarding conditions. We recommend that you use custom forwarding rules to route traffic in a more flexible manner. For more information, see Configure custom forwarding rules for ALB Ingresses.
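The following Ingress sketch shows how the canary annotations are typically configured, assuming a canary Service named new-nginx; the host, Service name, and weight are placeholders, and the annotation keys follow the alb.ingress.kubernetes.io/canary convention described in the canary release guide. Only the Ingress of the canary version carries these annotations.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-canary
  annotations:
    alb.ingress.kubernetes.io/canary: "true"         # marks this Ingress as the canary version
    alb.ingress.kubernetes.io/canary-weight: "20"    # percentage of traffic routed to the canary Service
spec:
  ingressClassName: alb
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: new-nginx       # placeholder canary Service
            port:
              number: 80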
What do I do if an ALB Ingress does not encounter an error but changes do not take effect?
If reconciliation events related to an AlbConfig are not performed or configuration change events are not processed, the IngressClass of the ALB Ingress may be associated with the wrong AlbConfig. Check whether the parameters setting of the IngressClass is correctly configured based on the user guide. For more information, see Use an AlbConfig to configure an ALB instance.
Why are some listeners deleted after I create an ALB instance in the console and run the kubectl apply command to update the network ACL configuration in the AlbConfig?
We recommend that you run the kubectl edit command to update resources. If you want to use the kubectl apply command, run the kubectl diff command first to preview the changes and make sure that the changes meet your requirements. Then, run the kubectl apply command to apply the changes to the resources in your cluster.
The kubectl apply command updates an AlbConfig by overwriting it. Therefore, when you run the kubectl apply command to update the network ACL configuration in an AlbConfig, make sure that the YAML file contains the configuration of all listeners specified in the AlbConfig. Otherwise, some listeners will be deleted.
If listeners are deleted after you run the kubectl apply command, we recommend that you use the following method to restore the listeners.
Check whether the YAML file contains the configuration of all listeners.
If some listeners are missing, proceed to the next step. If all listeners are included, no operation is required.
Run the following command to add the configuration of the missing listeners to the AlbConfig:
kubectl -n <namespace> edit AlbConfig <albconfig-name> # Replace <namespace> and <albconfig-name> with the namespace of the AlbConfig and the name of the AlbConfig.
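For reference, the following AlbConfig sketch shows where the listener configuration resides; the ports, protocols, and other values are placeholders. When you update the network ACL configuration with kubectl apply, every listener that you want to keep must remain in the listeners list of the YAML file, because any listener that is missing from the file is removed from the ALB instance.
apiVersion: alibabacloud.com/v1
kind: AlbConfig
metadata:
  name: alb-demo
spec:
  config:
    name: alb-demo
    addressType: Internet
    zoneMappings:
    - vSwitchId: vsw-placeholder-1
    - vSwitchId: vsw-placeholder-2
  listeners:
  - port: 80
    protocol: HTTP
    # network ACL settings for this listener go here
  - port: 443
    protocol: HTTPS
    # keep every existing listener in this list; a listener that is
    # missing from the applied YAML file is deleted from the ALB instance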
How do I reduce the server reconciliation time when Services perform pod scaling?
When Services in an ACK cluster perform pod scaling, the server reconciliation time varies based on the number of Ingresses that are associated with the Services. You can use the following methods to reduce the server reconciliation time:
Limit the number of Ingresses: Make sure that no more than 30 Ingresses are associated with each Service.
Merge Ingress rules: If an excessive number of Ingresses are used, you can associate multiple Services with the same Ingress and create Ingress rules in the Ingress to improve the server reconciliation efficiency.
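For example, the following Ingress sketch merges the rules of two hypothetical Services, coffee-svc and tea-svc, into a single Ingress instead of creating a separate Ingress for each Service; the host, paths, and Service names are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-merged
spec:
  ingressClassName: alb
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /coffee
        pathType: Prefix
        backend:
          service:
            name: coffee-svc     # placeholder Service 1
            port:
              number: 80
      - path: /tea
        pathType: Prefix
        backend:
          service:
            name: tea-svc        # placeholder Service 2
            port:
              number: 80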
How do I enable the system to automatically assign weights to nodes when a cluster uses the Flannel network plug-in and the Local mode is enabled for the Service?
The ALB Ingress controller 2.13.1-aliyun.1 and later supports automatic allocation of node weights. Make sure that you update the ALB Ingress controller to the latest version before you use this feature.
This example shows how node weights are calculated when the Flannel network plug-in is installed in the cluster and the Local mode is enabled for the Service. The following figure shows that business pods with the app=nginx label are deployed on three Elastic Compute Service (ECS) instances and provide external services through Service A.
| Total number of pods associated with a Service | Description |
| --- | --- |
| Number of pods ≤ 100 | The ALB Ingress controller uses the number of pods deployed on each node as the weight of the node. In the preceding figure, the numbers of pods deployed on the ECS 1, ECS 2, and ECS 3 instances are 1, 2, and 3. Therefore, the weights of the ECS 1, ECS 2, and ECS 3 instances are set to 1, 2, and 3. Traffic is distributed to the three ECS instances at a ratio of 1:2:3, so the load is evenly distributed among the pods. |
| Number of pods > 100 | The ALB Ingress controller calculates the weight of each node as the percentage of the pods deployed on the node relative to the total number of pods. For example, if the numbers of pods deployed on the three ECS instances in the preceding figure are 100, 200, and 300, the weights of the ECS 1, ECS 2, and ECS 3 instances are set to 16, 33, and 50. Traffic is distributed to the three ECS instances at a ratio of 16:33:50, so the load is evenly distributed among the pods. |