
Container Service for Kubernetes:Service FAQ

Last Updated:Aug 13, 2024

This topic provides answers to some frequently asked questions about Services in Container Service for Kubernetes (ACK) clusters. You can read this topic to resolve the following issues: Why does the cluster fail to access the IP address of a Server Load Balancer (SLB) instance? Why does the system fail to use an existing SLB instance for more than one Service? How do I troubleshoot Cloud Controller Manager (CCM) update failures?

Table of contents

FAQ about SLB

FAQ about using existing SLB instances

Other issues

FAQ about SLB

How can I use SLB instances in an ACK cluster?

If the NGINX Ingress controller is installed when you create an ACK cluster, two SLB instances are automatically created for the cluster.

The following section describes the purposes of the SLB instances:

  • API server SLB: The SLB instance is used to expose the API server of the cluster. To access the cluster, you must communicate with the SLB instance. The SLB instance uses a TCP listener that listens on port 6443. The backend server of the SLB instance is a pod of the API server or an Elastic Compute Service (ECS) instance that serves as a master node.

  • NGINX Ingress controller SLB: The SLB instance is associated with the nginx-ingress-controller Service in the kube-system namespace to forward requests. vServer groups are dynamically associated with the pods of the NGINX Ingress controller. The SLB instance uses a TCP listener that listens on ports 80 and 443.

Which external traffic policy should I use when I create a Service, Local or Cluster?

The features of the Local and Cluster external traffic policies vary based on the network plug-in that is used by the cluster. For more information about the differences between the Local external traffic policy and the Cluster external traffic policy, see External traffic policies: Local and Cluster.

Why are no events collected during the synchronization between a Service and an SLB instance?

If no event is generated after you run the kubectl -n {your-namespace} describe svc {your-svc-name} command, check the version of the CCM.

  • If the CCM version is earlier than V1.9.3.276-g372aa98-aliyun, no event is generated for the synchronization between a Service and an SLB instance. We recommend that you update the CCM version. For more information about how to view and update the CCM version, see Manually update the CCM.

  • If the CCM version is V1.9.3.276-g372aa98-aliyun or later, submit a ticket.

How do I handle an SLB instance that remains in the Pending state?

  1. Run the kubectl -n {your-namespace} describe svc {your-svc-name} command to view the events.

  2. Troubleshoot the errors that are reported in the events. For more information about how to troubleshoot errors that are reported in the events, see Errors and solutions.

    If no errors are reported in the events, see Why are no events collected during the synchronization between a Service and an SLB instance?

What do I do if the vServer groups of an SLB instance are not updated?

  1. Run the kubectl -n {your-namespace} describe svc {your-svc-name} command to view the events.

  2. Troubleshoot the errors that are reported in the events. For more information about how to troubleshoot errors that are reported in the events, see Errors and solutions.

    If no errors are reported in the events, see Why are no events collected during the synchronization between a Service and an SLB instance?

What do I do if the annotations of a Service do not take effect?

  1. Perform the following steps to view the errors:

    1. Run the kubectl -n {your-namespace} describe svc {your-svc-name} command to view the events.

    2. Troubleshoot the errors that are reported in the events. For more information about how to troubleshoot errors that are reported in the events, see Errors and solutions.

  2. If no errors are reported, you can resolve the issue based on the following scenarios:

    • Make sure that the CCM version meets the requirements of the annotations. For more information about the correlation between annotations and CCM versions, see Use annotations to configure CLB instances.

    • Log on to the ACK console. On the Services page, click the name of the Service that you want to manage and check whether annotations are configured for the Service. If annotations are not configured for the Service, configure annotations for the Service.

      For more information about how to configure annotations, see Use annotations to configure CLB instances.

      For more information about how to view the list of Services, see Getting started.

    • Verify that the annotations are valid.

Why are the configurations of an SLB instance modified?

When specific conditions are met, the CCM calls a declarative API to update the configurations of an SLB instance based on the Service configurations. If you modify the configurations of an SLB instance in the SLB console, the CCM may overwrite the changes. We recommend that you use annotations to configure an SLB instance. For more information about how to configure annotations for an SLB instance, see Use annotations to configure CLB instances.

Important

If the SLB instance is created and managed by the CCM, we recommend that you do not modify the configurations of the SLB instance in the SLB console. Otherwise, the CCM may overwrite the configurations and the Service may be unavailable.

Why does the cluster fail to access the IP address of an SLB instance?

  • Scenario 1: The SLB instance uses a private IP address and was not automatically created for a Service. The backend pods of the SLB instance and the client pod are deployed on the same node, so the client pod cannot access the private IP address of the SLB instance.

    This is because Layer 4 SLB does not allow a backend server of an SLB instance to act as a client that accesses services through the same instance. To work around this limitation, use one of the following methods:

    • Change the IP address of the SLB instance to a public IP address.

    • Create a Service that automatically creates an SLB instance. When you configure the Service, set the external traffic policy to Cluster. This way, requests from within the cluster are forwarded by kube-proxy instead of the SLB instance.

  • Scenario 2: The external traffic policy of the Service that is used to expose your application is set to Local. As a result, pods in the cluster cannot access the IP address of the SLB instance.

    For more information about the issue and how to resolve the issue, see What do I do if the cluster cannot access the IP address of the SLB instance exposed by the LoadBalancer Service?
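The second method above can be sketched as a minimal manifest. This is a hedged example, not the exact manifest from this topic: the Service name my-app-lb and the app: my-app selector are hypothetical placeholders.

```yaml
# Hypothetical Service for an application labeled app: my-app.
# Because no loadbalancer-id annotation is set, the CCM creates a
# new SLB instance, and externalTrafficPolicy: Cluster lets kube-proxy
# forward in-cluster requests instead of the SLB instance.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
  namespace: default
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```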

What do I do if the cluster cannot access the IP address of the SLB instance exposed by the LoadBalancer Service?

Issue

In an ACK cluster, only specific nodes can access the IP address of the SLB instance whose externalTrafficPolicy is set to Local. Access failures occur if you use Ingresses to access the SLB instance.

Cause

externalTrafficPolicy: Local is set for the Service. Although the IP address of the SLB instance is external to the ACK cluster, kube-proxy adds it to the local iptables or IP Virtual Server (IPVS) rules on each node. As a result, requests that are sent to this IP address from within the cluster never reach the SLB instance: kube-proxy treats the address as an extended IP address of the Service and forwards the requests directly.

In Local mode, kube-proxy forwards requests only to backend pods on the same node. If no backend pod of the Service runs on the node from which the request is sent, a connectivity issue occurs. For more information, see Why kube-proxy add external-lb's address to node local iptables rule?.

Solution

You can resolve the issue by using one of the following methods. We recommend that you use the first method.

  • Access the application from within the ACK cluster by using the ClusterIP Service or the in-cluster domain name of the Ingress Service, which is nginx-ingress-lb.kube-system.

  • Set externalTrafficPolicy to Cluster for the LoadBalancer Service. This method ensures that requests can be forwarded to pods on all nodes. However, source IP addresses cannot be preserved because SNAT is used, which means that backend applications cannot obtain client IP addresses. Run the following command to modify the Service:

    kubectl edit svc nginx-ingress-lb -n kube-system
  • If the network plug-in of the cluster is Terway and the exclusive ENI mode or inclusive ENI mode is used, you can set externalTrafficPolicy to Cluster for the LoadBalancer Service and add the elastic network interface (ENI) annotation service.beta.kubernetes.io/backend-type: "eni". The annotation adds the pods that are assigned ENIs as the backend servers of the LoadBalancer Service. This way, client IP addresses are preserved and the IP address of the SLB instance can be accessed from within the cluster. For more information, see Use annotations to configure CLB instances.

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        service.beta.kubernetes.io/backend-type: eni
      labels:
        app: nginx-ingress-lb
      name: nginx-ingress-lb
      namespace: kube-system
    spec:
      externalTrafficPolicy: Cluster

When is an SLB instance automatically deleted?

The system automatically deletes the SLB instance of a LoadBalancer Service only if the SLB instance was created by the CCM. The following table describes the conditions under which the SLB instance is deleted or retained.

| Condition | SLB instance created by the CCM | Existing SLB instance reused |
| --- | --- | --- |
| The LoadBalancer Service is deleted | The SLB instance is deleted | The SLB instance is retained |
| The type of the LoadBalancer Service is changed | The SLB instance is deleted | The SLB instance is retained |

If I delete a Service, is the SLB instance associated with the Service automatically deleted?

If the SLB instance is reused, it is not deleted together with the Service. If the SLB instance is not reused, it is deleted together with the Service. An SLB instance is considered reused if the Service contains the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: {your-slb-id} annotation.

If you change the type of the Service, for example, from LoadBalancer to NodePort, and the SLB instance is created by the CCM, the SLB instance is automatically deleted.
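As a hedged sketch of the reuse annotation described above, the following manifest reuses an existing SLB instance. The Service name, selector, and the lb-xxxxxxxx placeholder are hypothetical; replace the placeholder with your SLB instance ID.

```yaml
# Hypothetical Service that reuses an existing SLB instance.
# Because the instance is reused, deleting this Service or changing
# its type retains the SLB instance.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxx"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```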

What do I do if I accidentally delete an SLB instance?

  • Scenario 1: What do I do if I accidentally delete the SLB instance of the API server?

    The deleted SLB instance cannot be restored. You must create a new SLB instance. For more information, see Create an ACK Pro cluster.

  • Scenario 2: What do I do if I delete the SLB instance of an Ingress?

    Perform the following steps to recreate the SLB instance:

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Network > Services.

    3. On the Services page, select kube-system from the Namespace drop-down list. Then, find and click nginx-ingress-lb in the Service list.

      If you cannot find nginx-ingress-lb, click Create from YAML in the upper-right corner of the page. Use the following template to create a Service named nginx-ingress-lb.

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: nginx-ingress-lb
        name: nginx-ingress-lb
        namespace: kube-system
      spec:
        externalTrafficPolicy: Local
        ports:
        - name: http
          port: 80
          protocol: TCP
          targetPort: 80
        - name: https
          port: 443
          protocol: TCP
          targetPort: 443
        selector:
          app: ingress-nginx
        type: LoadBalancer
    4. In the Actions column of the Service, click Edit YAML, delete the content in the status field, and then click Update. This way, the CCM creates a new SLB instance.

  • Scenario 3: What do I do if I delete an SLB instance that is configured to handle workloads?

    • If you no longer need the Service that is associated with the SLB instance, delete the Service.

    • If you want to keep the Service, perform the following steps:

      1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

      2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Network > Services.

      3. On the Services page, click All Namespaces from the Namespace drop-down list. In the Service list, find the Service.

      4. In the Actions column of the Service, click Edit YAML, delete the content in the status field, and then click Update. This way, the CCM creates a new SLB instance.

How do I rename an SLB instance if the CCM version is V1.9.3.10 or earlier?

For CCM versions later than V1.9.3.10, a tag is automatically added to the SLB instances in the cluster. You need only to change the value if you want to rename an SLB instance. For CCM V1.9.3.10 and earlier, you must manually add a specific tag to an SLB instance if you want to rename the SLB instance.

Note
  • You can rename an SLB instance by adding a tag to the instance only if the CCM version is V1.9.3.10 or earlier.

  • The Service type is LoadBalancer.

  1. Log on to a master node in an ACK cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Run the kubectl get svc -n ${namespace} ${service} command to view the Service type and IP address of the Service.

    Note

    Replace namespace with the cluster namespace and service with the Service name.

  3. Run the following command to create the tag that you want to add to the SLB instance:

    kubectl get svc -n ${namespace} ${service} -o jsonpath="{.metadata.uid}"|awk -F "-" '{print "kubernetes.do.not.delete: "substr("a"$1$2$3$4$5,1,32)}'


  4. Log on to the SLB console, select the region where the SLB instance is deployed, and then find the specified SLB instance based on the IP address that is returned in Step 2.

  5. Add the tag that is generated in Step 3 to the SLB instance. The string before the colon (kubernetes.do.not.delete) is the tag key, and the 32-character string after the colon is the tag value. For more information, see Manage tags.
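The tag-generation command in Step 3 can be tried locally with a sample UID. The UID below is made up for illustration; the real value comes from the kubectl jsonpath query. The command strips the hyphens from the Service UID, prefixes the letter a, and truncates the result to 32 characters.

```shell
# Sample Service UID (hypothetical); in practice it is returned by
# kubectl get svc ... -o jsonpath="{.metadata.uid}"
echo "5e6f1a2b-3c4d-5e6f-7a8b-9c0d1e2f3a4b" \
  | awk -F "-" '{print "kubernetes.do.not.delete: "substr("a"$1$2$3$4$5,1,32)}'
# Prints: kubernetes.do.not.delete: a5e6f1a2b3c4d5e6f7a8b9c0d1e2f3a4
```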

How does the CCM calculate node weights in Local mode?

In this example, pods with the app=nginx label are deployed on three ECS instances that run one, two, and three of the pods, respectively. When externalTrafficPolicy is set to Local, the pods provide services for external users by using Service A. The following sections describe how node weights are calculated for different CCM versions.

For CCM V1.9.3.276-g372aa98-aliyun and later

For CCM V1.9.3.276-g372aa98-aliyun and later, the weight of each node equals the number of pods deployed on the node. In this example, the weights of the ECS instances are 1, 2, and 3, and traffic is distributed to the ECS instances at the ratio of 1:2:3. The loads of the pods are therefore more balanced than with the percentage-based formula used by earlier versions, whose rounding causes slight imbalance.

The node weight is calculated based on the following formula: weight = number of pods that run on the node.

For CCM versions that are later than V1.9.3.164-g2105d2e-aliyun but earlier than V1.9.3.276-g372aa98-aliyun

For CCM versions that are later than V1.9.3.164-g2105d2e-aliyun but earlier than V1.9.3.276-g372aa98-aliyun, the node weights are calculated as a percentage of the total number of pods. In this example, the weights of the ECS instances are 16, 33, and 50, so traffic is distributed to the ECS instances at the ratio of approximately 1:2:3.

The node weight is calculated based on the following formula: weight = (number of pods on the node / total number of pods) × 100, rounded down.
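The percentage calculation for this version range can be reproduced with a quick check. With one, two, and three pods on the nodes (six pods in total), it yields the weights 16, 33, and 50 quoted above:

```shell
# Reproduce the per-node weights for pod counts 1, 2, 3 (6 pods total):
# weight = floor(pods_on_node / total_pods * 100)
for pods in 1 2 3; do
  awk -v p="$pods" 'BEGIN { printf "pods=%d weight=%d\n", p, int(p / 6 * 100) }'
done
# Prints:
# pods=1 weight=16
# pods=2 weight=33
# pods=3 weight=50
```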

For CCM versions earlier than V1.9.3.164-g2105d2e-aliyun

For CCM versions that are earlier than V1.9.3.164-g2105d2e-aliyun, the weight of each ECS instance in Local mode is fixed at 100. Traffic is evenly distributed to the ECS instances, but the loads of the pods differ because the pods are unevenly deployed. In this example, the single pod on ECS 1 takes the heaviest load and the three pods on ECS 3 take the lightest load.

How do I query the IP addresses, names, and address types of all SLB instances in a cluster?

  1. Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  2. Run the following command to obtain the name, IP address, and address type of each LoadBalancer Service in all namespaces.

    kubectl get services -A -ojson | jq '.items[] | select(.spec.type == "LoadBalancer") | {name: .metadata.name, namespace: .metadata.namespace, ip: .status.loadBalancer.ingress[0].ip, lb_type: .metadata.annotations."service.beta.kubernetes.io/alibaba-cloud-loadbalancer-address-type"}'

    Expected output:

    {
      "name": "test",
      "namespace": "default",
      "ip": "192.168.*.*",
      "lb_type": "intranet"
    }
    {
      "name": "nginx-ingress-lb",
      "namespace": "kube-system",
      "ip": "47.97.*.*",
      "lb_type": "null"
    }

How do I ensure that the LoadBalancer gracefully disconnects existing connections when I change the backend of the LoadBalancer Service?

You can configure connection draining by using the annotations service.beta.kubernetes.io/alibaba-cloud-loadbalancer-connection-drain and service.beta.kubernetes.io/alibaba-cloud-loadbalancer-connection-drain-timeout. After you remove the backend from the Service, the LoadBalancer continues to handle existing connections during the drain-timeout period. For more information, see Configure connection draining for a listener.
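The two annotations above can be combined as in the following sketch. The Service name, selector, and the 30-second timeout are hypothetical example values, not requirements.

```yaml
# Hypothetical Service with connection draining enabled. Existing
# connections are kept alive for up to 30 seconds (example value)
# after a backend is removed.
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-connection-drain: "on"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-connection-drain-timeout: "30"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```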

FAQ about using existing SLB instances

Why does the system fail to use an existing SLB instance for more than one Service?

  • Check the version of the CCM. If the version is earlier than v1.9.3.105-gfd4e547-aliyun, the CCM cannot use existing SLB instances for more than one Service. For more information about how to view and update the CCM version, see Manually update the CCM.

  • Check whether the reused SLB instance is created by the cluster. The SLB instance cannot be reused if it is created by the cluster.

  • Check whether the SLB instance is used by the API server. The SLB instance cannot be reused if it is used by the API server.

  • If the SLB instance is an internal-facing SLB instance, check whether the SLB instance and the cluster are deployed in the same virtual private cloud (VPC). The SLB instance cannot be reused if they are deployed in different VPCs.

Why is no listener created when I reuse an existing SLB instance?

Make sure that the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners annotation is set to "true". If you do not set the value to true, no listener is automatically created.

Note

The CCM does not overwrite the listeners of an existing CLB instance due to the following reasons:

  • If the listeners of the CLB instance are associated with applications, service interruptions may occur after the listeners are overwritten.

  • The CCM supports limited backend configurations and cannot handle complex configurations. To use complex backend configurations, you can create listeners in the console. The listeners do not overwrite the existing ones.

Therefore, we recommend that you do not overwrite the listeners of an existing CLB instance. You can forcibly overwrite the listeners if the ports on which these listeners listen are no longer used.
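A hedged sketch of the annotations involved when reusing an existing instance and letting the CCM create listeners; lb-xxxxxxxx is a placeholder for your instance ID, and "true" forcibly overwrites listeners on the ports that the Service uses.

```yaml
# Annotation fragment for a LoadBalancer Service (hypothetical IDs).
metadata:
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxx"
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
```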

Other issues

How do I troubleshoot CCM update failures?

For more information about how to troubleshoot CCM update failures, see How Can I Troubleshoot a Check Failure That Occurs Before I Update the CCM?.

What do I do if errors occur in Services?

The following list describes the errors that occur in Services. Each entry shows the error message, followed by a description and a solution.

The backend server number has reached to the quota limit of this load balancers

The quota of backend servers is insufficient.

Solution: You can use the following methods to resolve this issue.

  • By default, you can associate up to 200 backend servers with each CLB instance. To request a quota increase, submit a ticket. For more information about how to query and increase the quota, go to the Quota Management page in the SLB console.

  • We recommend that you set externalTrafficPolicy of the CLB instance to Local (externalTrafficPolicy: Local). The system may create a large number of backend servers in Cluster mode. If you want to use the Cluster mode, we recommend that you use the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-backend-label annotation to specify which nodes are added as backend servers. This reduces the number of backend servers that are required. For more information about how to associate backend servers with a CLB instance by using this annotation, see Add annotations to the YAML file of a Service to configure CLB instances.

  • If multiple Services share a CLB instance, all backend servers used by the Services are counted. We recommend that you create a CLB instance for each created Service.

The loadbalancer does not support backend servers of eni type

Shared-resource CLB instances do not support elastic network interfaces (ENIs).

Solution: If you want to specify an ENI as a backend server, create a high-performance CLB instance by adding the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: "slb.s1.small" annotation to the Service.

Important

Make sure that the annotations that you add meet the requirements of the CCM version. For more information about the correlation between annotations and CCM versions, see Add annotations to the YAML file of a Service to configure CLB instances.

There are no available nodes for LoadBalancer

No backend server is associated with the CLB instance. Check whether pods are associated with the Service and whether the pods run as expected.

Solution:

  • If no pod is associated with the Service, associate application pods with the Service.

  • If the associated pods do not run as expected, refer to Pod troubleshooting and troubleshoot the issue.

  • If no backend server is associated with the CLB instance but the pods run as expected, check whether the pods are deployed on master nodes. If they are, evict the pods to worker nodes. If they are not, submit a ticket.

  • alicloud: not able to find loadbalancer named [%s] in openapi, but it's defined in service.loaderbalancer.ingress. this may happen when you removed loadbalancerid annotation

  • alicloud: can not find loadbalancer, but it's defined in service

The system fails to associate a Service with the CLB instance.

Solution: Log on to the SLB console and search for the CLB instance in the region of the Service based on the EXTERNAL-IP.

  1. If the CLB instance does not exist and the Service is no longer required, delete the Service.

  2. If the CLB instance exists, perform the following steps:

    1. If the CLB instance is created in the SLB console, add the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation to the Service. For more information, see Add annotations to the YAML file of a Service to configure CLB instances.

    2. If the CLB instance is automatically created by the CCM, check whether the kubernetes.do.not.delete label is added to the CLB instance. If the label is not added to the CLB instance, add the label to the CLB instance. For more information, see How do I rename an SLB instance when the CCM version is V1.9.3.10 or earlier?

ORDER.ARREARAGE Message: The account is arrearage.

Your account has overdue payments.

Solution: Settle the overdue payments of your account.

PAY.INSUFFICIENT_BALANCE Message: Your account does not have enough balance.

The account balance is insufficient.

Solution: Add funds to your account.

Status Code: 400 Code: Throttlingxxx

API throttling is triggered for the CLB instance.

Solution:

  1. Go to the Quota Center page and check whether the CLB resource quotas are sufficient.

  2. Run the following command to check whether errors occur in the Service. If errors occur in the Service, refer to the information provided in this table to troubleshoot the errors.

    kubectl -n {your-namespace} describe svc {your-svc-name}

Status Code: 400 Code: RspoolVipExist Message: there are vips associating with this vServer group.

The listener that is associated with the vServer group cannot be deleted.

Solution:

  1. Check whether the annotation of the Service contains the ID of the CLB instance. Example: service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: {your-slb-id}.

    If the annotation contains the CLB instance ID, the CLB instance is reused.

  2. Log on to the SLB console and delete the listener that uses the Service port. For more information, see Manage forwarding rules for a listener.

Status Code: 400 Code: NetworkConflict

The reused internal-facing CLB instance and the cluster are not deployed in the same virtual private cloud (VPC).

Solution: Make sure that your CLB instance and the cluster are deployed in the same VPC.

Status Code: 400 Code: VSwitchAvailableIpNotExist Message: The specified VSwitch has no available ip.

The idle IP addresses in the vSwitch are insufficient.

Solution: Use service.beta.kubernetes.io/alibaba-cloud-loadbalancer-vswitch-id: "${YOUR_VSWITCH_ID}" to specify another vSwitch in the same VPC.

The specified Port must be between 1 and 65535.

The targetPort field does not support STRING type values in ENI mode.

Solution: Set the targetPort field in the Service YAML file to a value of the INTEGER type or update the CCM. For more information about how to update the CCM, see Update the CCM.

Status Code: 400 Code: ShareSlbHaltSales Message: The share instance has been discontinued.

By default, earlier versions of CCM automatically create shared-resource CLB instances, which are no longer available for purchase.

Solution: Update the CCM.

can not change ResourceGroupId once created

You cannot modify the resource group of a CLB instance after the CLB instance is created.

Solution: Delete the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-resource-group-id:"rg-xxxx" annotation from the Service.

can not find eniid for ip x.x.x.x in vpc vpc-xxxx

The specified IP address of the ENI cannot be found in the VPC.

Solution: Check whether the service.beta.kubernetes.io/backend-type: eni annotation is added to the Service. If the annotation is added to the Service, check whether Flannel is used as the network plug-in of the cluster. If Flannel is used, delete the annotation from the Service. Flannel does not support the ENI mode.

  • The operation is not allowed because the instanceChargeType of loadbalancer is PayByCLCU.

  • User does not have permission modify InstanceChargeType to spec.

You cannot change the billing method of the Classic Load Balancer (CLB) instance used by a Service from pay-as-you-go to pay-by-specification.

Solution:

  • Delete the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec annotation from the Service.

  • If service.beta.kubernetes.io/alibaba-cloud-loadbalancer-instance-charge-type is added to the Service, set the value to PayByCLCU.

SyncLoadBalancerFailed the loadbalancer xxx can not be reused, can not reuse loadbalancer created by kubernetes.

The CLB instance created by the CCM is reused.

Solution:

  1. Check the YAML file of the related Service and record the CLB instance ID in the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation.

  2. Troubleshoot the issue based on the status of the Service.

    • If the Service is in the Pending state, change the value of the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation to the ID of a CLB instance that is manually created in the CLB console.

    • If the Service is not in the Pending state, perform the following operations:

      • If the IP address of the CLB instance is the same as the external IP addresses of the Service, delete the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation.

      • If the IP address of the CLB instance is different from the external IP addresses of the Service, log on to the CLB console, select the region in which the cluster resides, find the CLB instances based on the external IP address of the Service, and then change the value of the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation to the ID of a manually created CLB instance. If no corresponding CLB instance is found, change the value of the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation to the ID of a CLB instance that is manually created in the CLB console. Then, recreate the Service.

alicloud: can not change LoadBalancer AddressType once created. delete and retry

You cannot change the address type (Internet-facing or internal-facing) of a CLB instance after the instance is created.

Solution: Recreate the related Service.

the loadbalancer lb-xxxxx can not be reused, service has been associated with ip [xxx.xxx.xxx.xxx], cannot be bound to ip [xxx.xxx.xxx.xxx]

You cannot associate a CLB instance with a Service that is already associated with another CLB instance.

Solution: You cannot reuse an existing CLB instance by modifying the value of the service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id annotation. To change the CLB instance that is associated with a Service, you must delete and recreate the Service.

How do I configure listeners for a NodePort Service?

The CCM configures listeners only for LoadBalancer Services. To configure listeners for the Service, change the Service type from NodePort to LoadBalancer.
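As a sketch, the type change can be made with kubectl patch against a running cluster; the Service name my-svc and the default namespace are hypothetical placeholders.

```shell
# Change a hypothetical NodePort Service named my-svc to LoadBalancer.
kubectl -n default patch service my-svc -p '{"spec":{"type":"LoadBalancer"}}'
```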

How do I access a NodePort Service?

  • You can access a NodePort Service from within the cluster by sending requests to <cluster IP>:<port> or <node IP>:<node port>.

  • You can access a NodePort Service from outside the cluster by sending requests to <node IP>:<node port>. By default, the node port is within the range of 30000 to 32767.

  • If you want to access the NodePort Service over the Internet or VPCs other than the one in which the cluster resides, you must create a LoadBalancer Service and use the external endpoint of the LoadBalancer Service to expose the NodePort Service.

    Note

    If the external traffic policy of your Service is set to Local, make sure that at least one backend pod of the Service runs on the node on which the Service is deployed. For more information about the external traffic policies supported by Services, see External traffic policies: Local and Cluster.

How do I configure a proper node port range?

The Kubernetes API server allows you to configure the --service-node-port-range parameter to specify the node port range for NodePort Services and LoadBalancer Services. The default port range is 30000 to 32767. In an ACK Pro cluster, you can specify a custom port range by configuring the parameters of control plane components. For more information, see Customize the parameters of control plane components in ACK Pro clusters.

  • Exercise caution when you specify a custom node port range. Make sure that the node port range does not overlap with the port range specified by the net.ipv4.ip_local_port_range kernel parameter of Linux on nodes in the cluster. The ip_local_port_range kernel parameter of a node specifies the local port range for all Linux programs on the node. The default value of ip_local_port_range is 32768 to 60999.

  • The default values of --service-node-port-range and ip_local_port_range do not overlap with each other. If the two port ranges overlap with each other after you modify one of them, network errors may occasionally occur on nodes. In addition, health checks may fail to be performed for your applications and nodes may be disconnected from your cluster. In this case, we recommend that you reset the parameters to the default values or modify the parameter values to make sure that the port ranges do not overlap with each other.

  • After you modify the node port range, some existing NodePort or LoadBalancer Services may still use ports that belong to the range specified by the ip_local_port_range kernel parameter. In this case, you must change the ports used by these Services. Run the kubectl edit service <service-name> command and change the value of the spec.ports[].nodePort field to an idle node port.
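The overlap condition described above can be sketched as a quick shell check. The values below are the documented defaults; on a real node, read the kernel range from /proc/sys/net/ipv4/ip_local_port_range and the node port range from the API server configuration.

```shell
# Default ranges: node ports 30000-32767, local ports 32768-60999.
node_min=30000; node_max=32767
local_min=32768; local_max=60999
# Two ranges overlap if each one starts before the other ends.
if [ "$node_min" -le "$local_max" ] && [ "$local_min" -le "$node_max" ]; then
  echo "overlap: adjust one of the ranges"
else
  echo "no overlap"
fi
# Prints: no overlap
```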