When no Server Load Balancer (SLB) instance is available, the cloud-controller-manager (CCM) component automatically creates and manages a Classic Load Balancer (CLB) or Network Load Balancer (NLB) instance for a LoadBalancer Service. This topic describes how to use an automatically created SLB instance to expose an application. In this topic, an NGINX application is used.
Precautions
The CCM creates and configures SLB resources only for Services with the Type=LoadBalancer setting. The CCM uses a declarative API and automatically updates the configurations of the SLB instance to match the configurations of the exposed Service when specific conditions are met. SLB configurations that you modify in the SLB console may be overwritten by the CCM.
Important: Do not modify the configurations of an SLB instance that is created and maintained by Kubernetes in the SLB console. Otherwise, Kubernetes may overwrite your modifications and the relevant LoadBalancer Service may become inaccessible.
You cannot change the SLB instance that is associated with a LoadBalancer Service after the Service is created. To change the SLB instance, you must create a new Service.
If you change the setting for a Service from Type=LoadBalancer to Type!=LoadBalancer, the CCM deletes the configurations of the SLB instance that was created for the Service. As a result, the Service can no longer be accessed by using the SLB instance.
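For example, the following command changes a Service from Type=LoadBalancer to Type=NodePort. This is a minimal sketch that assumes the Service name my-nginx-svc used later in this topic; after the change, the CCM removes the SLB configurations that it created for the Service.

# Change the Service type from LoadBalancer to NodePort.
# The CCM then deletes the SLB configurations that it created for this Service,
# and the Service can no longer be accessed through the SLB instance.
kubectl patch service my-nginx-svc -p '{"spec":{"type":"NodePort"}}'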
Quotas
The CCM creates SLB instances for Services with the Type=LoadBalancer setting. By default, you can have a maximum of 60 SLB instances within your Alibaba Cloud account. To create more SLB instances, log on to the Quota Center console and submit an application for a quota increase.
The CCM automatically creates listeners that use Service ports for SLB instances. By default, each SLB instance supports at most 50 listeners. To increase the number of listeners supported by each SLB instance, log on to the Quota Center console and submit an application for a quota increase.
For more information about the limits on SLB, see Limits.
To query the SLB resource quotas, go to the Quota Center page in the SLB console.
Step 1: Deploy an application
The following section describes how to use the kubectl command-line tool to deploy an application.
Create a file named my-nginx.yaml file and add the following YAML content to the file:
apiVersion: apps/v1 # For versions before 1.8.0, use apps/v1beta1.
kind: Deployment
metadata:
  name: my-nginx # The name of the sample application.
  labels:
    app: nginx
spec:
  replicas: 3 # The number of replicated pods.
  selector:
    matchLabels:
      app: nginx # You must specify the same value in the selector of the Service that is used to expose the application.
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest # Replace with the address of the image that you want to use in the format of <image_name:tags>.
        ports:
        - containerPort: 80 # The port that you want to expose in the Service.
Run the following command to deploy the my-nginx application:
kubectl apply -f my-nginx.yaml
Run the following command to query the status of the application:
kubectl get deployment my-nginx
Expected output:
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
my-nginx   3/3     3            3           50s
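You can also check that the replicated pods are running. The following command is a minimal sketch that uses the app=nginx label defined in my-nginx.yaml to filter the pods:

kubectl get pods -l app=nginx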
Step 2: Use an automatically created SLB instance to expose an application
You can create a LoadBalancer Service in the ACS console or by using kubectl. After the Service is created, you can use the Service to expose the application.
Use the ACS console
Log on to the ACS console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
In the left-side navigation pane of the details page, choose Network > Services.
On the Services page, click Create in the upper-right corner of the page.
In the Create Service dialog box, configure the parameters.
Parameter: Name
Description: The name of the Service.
Example: my-nginx-svc

Parameter: Service Type
Description: The type of the Service. This parameter specifies how the Service is accessed. Select Server Load Balancer. Then, select Public Access and Create SLB Instance. You can click Modify to change the SLB instance specification based on your business requirements. The default specification is used in this example.
Example: slb.s1.small

Parameter: External Traffic Policy
Description: The policy that determines how external traffic is routed. This parameter is available only if you set the Service Type parameter to Node Port or Server Load Balancer.
Local: routes traffic only to the pods on the current node.
Cluster: routes traffic to pods on other nodes in the cluster.
Example: Local

Parameter: Backend
Description: The backend application that you want to associate with the Service. If you do not select a backend application, no Endpoint objects are created. For more information, see Services-without-selectors.
Example: Name: app. Value: my-nginx.

Parameter: Port Mapping
Description: The Service port and the container port. The Service port corresponds to the port field in the YAML file, and the container port corresponds to the targetPort field in the YAML file. The container port must be the same as the port that is exposed in the backend pod.
Example: 80

Parameter: Annotations
Description: The annotations to be added to the Service to configure the SLB instance. For more information, see Add annotations to the YAML file of a Service to configure a CLB instance.
Important: Do not reuse the SLB instance of the API server in the cluster. Otherwise, cluster access failures may occur.
In this example, two annotations are added to specify the pay-by-bandwidth billing method and set the maximum bandwidth to 2 Mbit/s to limit the amount of traffic that flows through the Service.
Example:
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-charge-type:paybybandwidth
service.beta.kubernetes.io/alibaba-cloud-loadbalancer-bandwidth:2

Parameter: Label
Description: The label to be added to the Service, which identifies the Service.
Example: None
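If you prefer to declare the annotation configuration in a YAML manifest instead of entering it in the console, the following is a minimal sketch of a LoadBalancer Service that carries the two annotations shown in the Annotations parameter above. The Service name my-nginx-svc and the selector app: nginx are taken from the examples in this topic; adjust the annotation values to your own requirements.

apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  annotations:
    # Use the pay-by-bandwidth billing method for the SLB instance.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-charge-type: "paybybandwidth"
    # Limit the maximum bandwidth to 2 Mbit/s.
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-bandwidth: "2"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80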
Use kubectl
Create a file named my-nginx-svc.yaml and add the following content to the file.
Set the selector parameter to the value of the matchLabels parameter in the my-nginx.yaml file to associate the Service with the backend application. In this example, this parameter is set to app: nginx.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: my-nginx-svc
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Run the following command to create a Service named my-nginx-svc and use the Service to expose the application:
kubectl apply -f my-nginx-svc.yaml
Run the following command to confirm that the LoadBalancer Service is created:
kubectl get svc my-nginx-svc
Expected output:
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
my-nginx-svc   LoadBalancer   172.21.5.82   39.106.XX.XX   80/TCP    5m
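If you want to retrieve the external IP address programmatically instead of copying it from the output, the following command is a minimal sketch that uses a JSONPath expression. Depending on how the SLB instance is exposed, the address may be returned in the ip field or the hostname field of the Service status.

kubectl get svc my-nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'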
Run the following command to access the application:
curl <YOUR-External-IP> # Replace <YOUR-External-IP> with the external IP address that you obtained in the preceding step.
Expected output:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>