
Container Service for Kubernetes: Configure a Service topology

Last Updated: Apr 30, 2024

The backend endpoints of Kubernetes-native Services are randomly distributed across nodes. Consequently, when Service requests are forwarded to nodes in other node groups, the requests may fail to reach those nodes or may not be answered promptly. You can configure a Service topology to expose an application on an edge node only to the current node or to nodes in the same edge node pool. This topic describes how a Service topology works and how to configure one.

Background Information

In edge computing, edge nodes are classified into groups by zone, region, or other logical attribute, such as CPU architecture, Internet service provider (ISP), or cloud service provider. Nodes in different groups are isolated from each other in one way or another. For example, these nodes may be disconnected, may not share the same resources, may have heterogeneous resources, or may run applications that are independently deployed.

How a Service topology works

To resolve the preceding issues, Container Service for Kubernetes (ACK) Edge provides a feature to manage the topology of endpoints of Kubernetes-native Services. For example, you can configure a Service topology to expose an application on an edge node only to the current node or nodes in the same edge node pool. The following figure shows how a Service topology works.

(Figure: a Service topology that restricts traffic to endpoints in the same node pool. Service 1 is backed by Pod 2 on Node 2 in Node Pool A and Pod 3 on Node 4 in Node Pool B.)
  • Service 1 is associated with Pod 2 and Pod 3. The openyurt.io/topologyKeys: kubernetes.io/zone annotation specifies the node pools that are allowed to access Service 1.

  • Pod 2 is deployed on Node 2 and Pod 3 is deployed on Node 4. Node 2 belongs to Node Pool A and Node 4 belongs to Node Pool B.

  • Pod 3 and Pod 1 do not belong to the same node pool. As a result, when Pod 1 accesses Service 1, the traffic is forwarded only to Pod 2. The traffic is not forwarded to Pod 3.

Usage notes

  • For clusters that run versions earlier than v1.26.3-aliyun.1: You must add the Service topology annotation when you create the Service. If you add the annotation after the Service is created, the Service topology does not take effect. In this case, you must delete and recreate the Service.

  • For clusters that run v1.26.3-aliyun.1 or later: You can add or modify the Service topology annotation after you create a Service. The Service topology takes effect immediately after you add or modify the annotation, as shown in the example below.
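
For clusters that run v1.26.3-aliyun.1 or later, the following kubectl command is a minimal sketch of how you can add or update the annotation on an existing Service. The Service name my-service-nodepool and the default namespace are only illustrative; replace them with your own values.

# Add or update the Service topology annotation on an existing Service.
kubectl annotate service my-service-nodepool -n default \
  openyurt.io/topologyKeys=kubernetes.io/zone --overwrite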

Annotations

You can add a Service topology annotation to a Kubernetes-native Service to configure a Service topology. The following table describes the annotations that you can use to configure a Service topology.

Annotation key           | Annotation value                            | Description
openyurt.io/topologyKeys | kubernetes.io/hostname                      | Specifies that the Service can be accessed only by the node where the Service is deployed.
openyurt.io/topologyKeys | kubernetes.io/zone or openyurt.io/nodepool  | Specifies that the Service can be accessed only by the nodes in the node pool where the Service is deployed. If the version of the ACK Edge cluster is 1.18 or later, we recommend that you use openyurt.io/nodepool.
None                     | None                                        | Specifies that access to the Service is not limited.
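
The table recommends openyurt.io/nodepool for ACK Edge clusters that run Kubernetes 1.18 or later. As a minimal sketch, the annotation stanza for that variant looks as follows; the rest of the Service definition is the same as in the full example later in this topic.

metadata:
  annotations:
    # Restrict access to nodes in the same node pool as the client node.
    openyurt.io/topologyKeys: openyurt.io/nodepool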

Configure a Service topology

You can configure a Service topology in the ACK console or by using a CLI.

Method 1: Configure a Service topology in the ACK console

To create a Service that can be accessed only by nodes in the node pool where the Service is deployed, you only need to add an annotation to the Service. For example, you can set Name to openyurt.io/topologyKeys and Value to kubernetes.io/zone. For more information about how to create a Service, see Getting started.


Method 2: Configure a Service topology by using a CLI

Create a Service that is scoped to the topology domain of a specific node pool. The following code block shows an example YAML template:

apiVersion: v1
kind: Service
metadata:
  annotations:
    openyurt.io/topologyKeys: kubernetes.io/zone
  name: my-service-nodepool
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
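
The following commands are a rough sketch of how to apply and verify the Service. The file name my-service-nodepool.yaml is a hypothetical name for the manifest above; adjust it to match your environment.

# Apply the Service manifest (assumed to be saved as my-service-nodepool.yaml).
kubectl apply -f my-service-nodepool.yaml

# Confirm that the Service exists and that the topology annotation is set.
kubectl get service my-service-nodepool -n default -o yaml | grep topologyKeys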