Container Service for Kubernetes: (Discontinued) Release notes for ACK Edge of Kubernetes 1.18

Last Updated: Aug 20, 2024

ACK Edge is a cloud-managed solution provided by Container Service for Kubernetes (ACK). You can use ACK Edge to implement collaborative cloud-edge computing. This topic describes the release notes for ACK Edge of Kubernetes 1.18.

Updated features

Version of ACK Edge cluster

The version of ACK Edge is updated to 1.18.8-aliyunedge.1.

Cloud-edge O&M channel and O&M monitoring

  • tunnel-server intercepts and handles edge O&M and monitoring traffic based on cluster Domain Name System (DNS) resolution instead of the iptables rules of individual nodes.

  • Monitoring components that depend on the cloud-edge O&M channel, such as metrics-server and Prometheus, no longer need to be deployed on the same node as tunnel-server.

  • tunnel-server can be deployed as multiple pod replicas and supports load balancing across the replicas.

  • The meta server module is added to the cloud-edge O&M channel to serve Prometheus metrics and debug/pprof data. The meta server endpoint of tunnel-server is http://127.0.0.1:10265, and the meta server endpoint of edge-tunnel-agent is http://127.0.0.1:10266. You can change the port of an endpoint by specifying the --meta-port startup parameter of the corresponding component, as shown in the sketch below.
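
The following snippet is a minimal sketch of how --meta-port could be set. Only the parameter name and the default port 10265 come from this topic; the container name, image, and chosen port are placeholders:

    # Hypothetical excerpt of a tunnel-server pod spec.
    # The container name, image, and port value are placeholders.
    containers:
      - name: tunnel-server
        image: example-registry/tunnel-server:v1
        args:
          - --meta-port=10275   # move the meta server endpoint off the default 10265

With this setting, the meta server endpoint of tunnel-server would be http://127.0.0.1:10275 instead of http://127.0.0.1:10265.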

Autonomy of edge nodes

Edge caching, health checks, service endpoints, and traffic analysis are optimized to enhance edge traffic autonomy and to improve access from edge applications to kube-apiserver in InCluster mode. The following improvements are made:

  • Traffic topology for Services at the edge is supported by edge-hub and no longer depends on Kubernetes feature gates.

  • The endpoint of a Service at the edge is automatically changed by edge-hub to the public endpoint of the cluster's kube-apiserver. This allows applications at the edge to access kube-apiserver in InCluster mode.

  • CustomResourceDefinitions (CRDs) can be cached by edge-hub. For example, the nodenetworkconfigurations CRD, which stores network information for Flannel, can be cached.

  • Health checks against the cloud are improved by edge-hub: Lease heartbeats are sent instead of healthz requests.

  • edge-hub listens on port 10261 and port 10267. Port 10261 is used to forward requests. Port 10267 is used to handle local requests sent to edge-hub, such as the liveness probes, metrics, and pprof data of yurthub.

  • The node_edge_hub_proxy_traffic_collector metric is supported by edge-hub. This metric shows the traffic that is generated when components on edge nodes, such as kubelet and kube-proxy, access Kubernetes resources, such as pods and Deployments. For a scrape example, see the sketch after this list.
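
As a hedged illustration of the local endpoint described above, the following Prometheus scrape job collects edge-hub metrics, including node_edge_hub_proxy_traffic_collector, from port 10267. The job name is a placeholder and the default /metrics path is an assumption. Because the endpoint listens on 127.0.0.1, the scraper must run on the edge node itself:

    # Hypothetical Prometheus scrape job for edge-hub metrics on an edge node.
    # Port 10267 serves local requests such as metrics (see above); the
    # default /metrics path is assumed, not confirmed by this topic.
    scrape_configs:
      - job_name: edge-hub          # placeholder job name
        static_configs:
          - targets:
              - 127.0.0.1:10267     # local edge-hub endpoint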

Cell-based management at the edge

The Patch field is supported in cell-based management at the edge, which is based on the UnitedDeployment controller. This field allows you to customize the configurations of each node pool. For example, if you want the nodes in different node pools of a deployment cell to use different local image repositories, you can specify an image address for each node pool by using the Patch field, as shown in the sketch below.
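
The following UnitedDeployment excerpt is a minimal sketch of this pattern. The apps.openyurt.io/v1alpha1 API group is assumed from OpenYurt, and the resource name, pool names, replica counts, container name, and image repository addresses are placeholders:

    # Hypothetical UnitedDeployment excerpt: each node pool patches the shared
    # workload template to pull the image from its own local repository.
    apiVersion: apps.openyurt.io/v1alpha1
    kind: UnitedDeployment
    metadata:
      name: example-app                 # placeholder name
    spec:
      topology:
        pools:
          - name: pool-a                # placeholder node pool
            replicas: 2
            patch:
              spec:
                template:
                  spec:
                    containers:
                      - name: app                               # must match the workload template
                        image: registry-a.local/example-app:v1  # placeholder repository
          - name: pool-b                # placeholder node pool
            replicas: 2
            patch:
              spec:
                template:
                  spec:
                    containers:
                      - name: app
                        image: registry-b.local/example-app:v1  # placeholder repository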

Edge nodes

Nodes that run the Ubuntu 20.04 operating system can be added to edge Kubernetes clusters.

Edge network

  • The cloud-edge network built by using Flannel is optimized. List and watch operations are no longer performed on node objects. Instead, these operations are performed on the related CRDs, which reduces the traffic that the operations generate.

  • Annotations about traffic management at the edge are adjusted. For more information, see the Annotations about traffic management at the edge section of this topic.

Annotations about traffic management at the edge

  • The following annotation keys and values are supported by Kubernetes 1.16 for traffic management at the edge:

    openyurt.io/topologyKeys: kubernetes.io/hostname
      Specifies that the Service can be accessed only by the node on which the Service is deployed.

    openyurt.io/topologyKeys: kubernetes.io/zone
      Specifies that the Service can be accessed only by the nodes in the node pool in which the Service is deployed.

    No annotation
      Specifies that access to the Service is unlimited.

  • In Kubernetes 1.18, the valid values of openyurt.io/topologyKeys are changed. Valid values: kubernetes.io/zone and openyurt.io/nodepool. Both values specify that the Service can be accessed only by the nodes in the node pool in which the Service is deployed. We recommend that you set the value to openyurt.io/nodepool, as shown in the sketch below.
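
The following Service manifest is a minimal sketch that uses the recommended value so that the Service can be accessed only by the nodes in the node pool in which it is deployed. The Service name, selector, and ports are placeholders:

    # Hypothetical Service restricted to its own node pool in Kubernetes 1.18.
    apiVersion: v1
    kind: Service
    metadata:
      name: demo-svc                    # placeholder name
      annotations:
        openyurt.io/topologyKeys: openyurt.io/nodepool
    spec:
      selector:
        app: demo                       # placeholder selector
      ports:
        - port: 80
          targetPort: 8080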