Nginx Ingress Controller can be configured for three access modes: public and private network access simultaneously, public network access only, or private network access only, meeting client access requirements in different network environments.
In the cluster, the load balancer instance receives client requests and forwards them to the Nginx Ingress Controller workload, which then forwards the requests to other Services.
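As an illustration of that forwarding step, a minimal Ingress resource routes a hostname to a backend Service. The hostname `demo.example.com` and Service `web-svc` below are hypothetical placeholders; replace them with your own:

```yaml
# Hypothetical example: routes demo.example.com to a Service named web-svc.
# Replace the host, service name, and port with values from your cluster.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```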
Configure Nginx Ingress to support both public and private networks
To enable simultaneous support for both public and private network access, deploy two Services for the backend Pods of the Nginx Ingress Controller, associating each with load balancer instances of public and private network types, respectively.
Check the current load balancer network type.
```shell
kubectl describe service -n kube-system nginx-ingress-lb | grep "service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type"
```

Create and save a file named `nginx-ingress-lb-new.yaml`, then execute `kubectl apply -f nginx-ingress-lb-new.yaml` to create the Service.

Private network Service example
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb-intranet
  namespace: kube-system
  labels:
    app: nginx-ingress-lb
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: intranet
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Cluster"
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    app: ingress-nginx
```

Verify the newly created Service status. If it returns 200, the new Service is working properly.

```shell
curl -s -o /dev/null -w "%{http_code}\n" http://$(kubectl get service -n kube-system nginx-ingress-lb-intranet -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```

Execute `kubectl get service nginx-ingress-lb-intranet`, record the Service's External IP, then configure DNS according to the new Service type:

Log in to the DNS console.
Add a DNS record:
| Field | Value |
| --- | --- |
| Record Type | A |
| Host Record | Enter your desired subdomain (e.g., intranet) |
| Record Value | The IP address of the new Service |
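Before DNS propagation completes (or to verify the record will behave as intended), you can pin the domain to the Service IP with curl's `--resolve` option. A sketch wrapped in a small helper function; the domain and IP in the usage example are placeholders:

```shell
# check_ingress DOMAIN IP [PORT]: request http://DOMAIN/ while forcing DOMAIN
# to resolve to IP, and print the HTTP status code.
check_ingress() {
  local domain="$1" ip="$2" port="${3:-80}"
  curl --resolve "${domain}:${port}:${ip}" -s -o /dev/null -w "%{http_code}" "http://${domain}:${port}/"
}

# Example usage (hypothetical values):
# check_ingress intranet.example.com 192.168.0.10
```

A 200 response confirms the load balancer and Ingress forwarding work for that hostname, independently of the DNS record.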
Change network type
This operation requires deleting and recreating the Service to replace the load balancer instance, which causes a temporary interruption of Nginx Ingress traffic. The deleted load balancer instance and its IP address cannot be recovered.
Delete the existing Service.
```shell
kubectl delete service nginx-ingress-lb -n kube-system
```

Create a new Service with the desired network type:
Private network Service
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: kube-system
  labels:
    app: nginx-ingress-lb
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type: intranet
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Cluster"
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    app: ingress-nginx
```

Public network Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: kube-system
  labels:
    app: nginx-ingress-lb
spec:
  type: LoadBalancer
  externalTrafficPolicy: "Cluster"
  ports:
  - port: 80
    name: http
    targetPort: 80
  - port: 443
    name: https
    targetPort: 443
  selector:
    app: ingress-nginx
```

Test the new Service status. If it returns 200, the new Service is working properly.

```shell
curl -s -o /dev/null -w "%{http_code}\n" http://$(kubectl get service -n kube-system nginx-ingress-lb -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
```

Execute `kubectl get service nginx-ingress-lb`, record the Service's External IP, and then configure DNS according to the new Service type:

```shell
# Get the Service's External IP
kubectl get service nginx-ingress-lb -n kube-system
# Example output:
# NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
# nginx-ingress-lb   LoadBalancer   172.20.10.150   47.99.123.45   80:31234/TCP,443:31567/TCP   5m
```

Record the EXTERNAL-IP value (e.g., 47.99.123.45), then add an A record in your DNS console: set Record Type to A, Host Record to your desired subdomain (e.g., intranet, www, or api), Record Value to the recorded External IP, and TTL to 600 (or your preferred value).
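If you script this step, the simplest way to read the IP on a live cluster is the same jsonpath query used in the curl check above (`-o jsonpath='{.status.loadBalancer.ingress[0].ip}'`). As a cluster-free sketch, the same value can also be extracted from the tabular output with awk; the sample below reuses the example output shown above:

```shell
# Sample `kubectl get service` output (matches the example above); on a real
# cluster, pipe the live command output instead of this variable.
sample='NAME               TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
nginx-ingress-lb   LoadBalancer   172.20.10.150   47.99.123.45   80:31234/TCP,443:31567/TCP   5m'

# Skip the header row and print the 4th column (EXTERNAL-IP).
ip=$(printf '%s\n' "$sample" | awk 'NR > 1 { print $4 }')
echo "$ip"   # 47.99.123.45
```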
FAQ
Why can't I first create a new Service, then delete the old one?
When changing the network type of the Nginx Ingress Controller, you cannot create the new Service first and then delete the old one; you must delete the old Service first and then create the new one. When the Nginx Ingress Controller component is upgraded, the workload must match the Service with the default name (nginx-ingress-lb). Because Service names must be unique, a Service created before the old one is deleted cannot use that name, so the workload would be unable to match the load balancer instance and the upgrade would fail.
Why is the IP accessed by clients inconsistent with the endpoint displayed in the console?
The endpoint displayed on the console Ingress page refers to the IP address of the load balancer instance belonging to the Service named nginx-ingress-lb. When multiple LoadBalancer type Services are configured, Nginx Ingress can still forward all requests normally, but the console will not display the load balancer IPs of other Services. The IP actually accessed by clients depends on the load balancer instance being used (which can be confirmed through domain name resolution testing), so it may be inconsistent with the endpoint displayed in the console.
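To see every load balancer IP actually in use, including those the console does not display, you can filter the Service list for the LoadBalancer type. On a live cluster this is `kubectl get service -n kube-system | grep LoadBalancer`; the sketch below runs the same filter over sample output (the intranet Service and all IPs shown are illustrative):

```shell
# Sample `kubectl get service -n kube-system` output; the intranet Service
# and all IP addresses below are illustrative placeholders.
sample='NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
nginx-ingress-lb            LoadBalancer   172.20.10.150   47.99.123.45   80:31234/TCP   5m
nginx-ingress-lb-intranet   LoadBalancer   172.20.10.151   192.168.0.10   80:31235/TCP   2m
kube-dns                    ClusterIP      172.20.0.10     <none>         53/UDP         30d'

# Keep only LoadBalancer-type Services; the EXTERNAL-IP column shows each
# load balancer IP that clients may actually reach.
printf '%s\n' "$sample" | grep LoadBalancer
```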
If you delete nginx-ingress-lb and recreate a Service with the same name, the endpoint displayed by Ingress needs to be refreshed by updating the Ingress resources.

Recovery steps if configuration fails
If configuration fails, please execute the following steps in order as soon as possible:
Delete the newly created Service to avoid component creation failure due to duplicate names.
Uninstall and reinstall the Nginx Ingress Controller component through the console to create a new default Service in the cluster and restore the Nginx Ingress entry point.
Configure DNS to add domain name resolution for the new default Service (nginx-ingress-lb), then test whether forwarding works properly.