ACK Edge is the first cloud-native edge computing service that coordinates workloads in the cloud and at the edge in a non-intrusive manner. ACK Edge allows you to deploy application pods at the edge and set these pods to use InClusterConfig to access the Kubernetes API server without modifying the pods. This topic describes how to run application pods that use InClusterConfig at the edge.
Background information
The following issues occur when you deploy the application pods of an open source Kubernetes cluster to the edge and set these pods to use InClusterConfig to access the Kubernetes API server:
Issue 1: Application pods access the Kubernetes API server based on the address in InClusterConfig. The default load balancing rules (iptables or IPVS) configured on the node forward these requests to the pod IP addresses of the Kubernetes API server. However, the pods at the edge and the Kubernetes API server in the cloud belong to different networks. Therefore, the pods at the edge cannot reach the IP addresses of pods in the cloud. As a result, the application pods at the edge cannot use InClusterConfig to access the Kubernetes API server.
Issue 2: After Issue 1 is fixed, if an application pod at the edge restarts while the network connection between the edge and the cloud is unstable, the pod cannot retrieve its workload configuration from the Kubernetes API server. This prevents the application pod from restarting properly.
For more information about how to access the API from a pod, see Accessing the API from a Pod.
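The following commands are a minimal sketch, run from inside a pod, of the mechanism that InClusterConfig relies on: the client builds the API server URL from the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables and authenticates with the mounted service account credentials. On an edge node without edge-hub, this request fails because the resolved address is not reachable from the edge network.
# Illustration only: InClusterConfig reads these environment variables
# and the mounted service account credentials to reach the API server.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert "${SA_DIR}/ca.crt" \
  -H "Authorization: Bearer $(cat ${SA_DIR}/token)" \
  "https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}/api"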
Solution
You can enable the edge-hub component of ACK Edge on edge nodes to fix the preceding issues in a non-intrusive manner. Application pods at the edge can then use InClusterConfig to access the Kubernetes API server without being modified. Take note of the following details:
The Kubernetes API server endpoint that is injected into pods at the edge through the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables is automatically changed to the HTTPS endpoint and port of edge-hub (KUBERNETES_SERVICE_HOST=169.254.2.1 and KUBERNETES_SERVICE_PORT=10268) without the awareness of the application pods. This way, the application pods can use InClusterConfig to access the Kubernetes API server through edge-hub. This fixes Issue 1. You can verify the rewritten endpoint as shown below.
You must enable the caching feature of edge-hub. This way, application pods can retrieve data from the local cache when they restart. This fixes Issue 2.
For more information about how to enable the caching feature of edge-hub, see the Enable the caching feature of edge-hub section of this topic.
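To verify the rewritten endpoint, you can check the environment variables from inside a pod at the edge. The following commands are a sketch that uses the example pod name edge-app-pod:
# Check the in-cluster endpoint that was injected into the pod.
kubectl exec edge-app-pod -- env | grep KUBERNETES_SERVICE
# Expected output on an edge node:
# KUBERNETES_SERVICE_HOST=169.254.2.1
# KUBERNETES_SERVICE_PORT=10268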
Enable the caching feature of edge-hub
We recommend that you do not enable the caching feature for pods that send a large number of list or watch requests, because the data is cached on the local disk.
You must restart the pods after you enable the caching feature for them.
Obtain the User-Agent header.
The User-Agent header can be found in the startup command of the application pod.
apiVersion: v1
kind: Pod
metadata:
  name: edge-app-pod
spec:
  containers:
  - name: "edge-app"
    image: "xxx/edge-app-amd64:1.18.8"
    command:
    - /bin/sh
    - -ec
    - |
      # The User-Agent header is found in the startup command: edge-app
      /usr/local/bin/edge-app --v=2
You can also find the User-Agent header in the edge-hub logs, which record requests in the format of {User-Agent} {verb} {resource}. Example:
I0820 07:50:18.899015 1 util.go:221] edge-app get services: /api/v1/services/xxx with status code 200, spent 21.035061152ms
In this example, the User-Agent header is edge-app.
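For example, assuming that edge-hub runs as a pod in the kube-system namespace, you can filter its logs for the component name. The pod name below is illustrative; use the first command to look up the actual name:
# Find the edge-hub pod that runs on the edge node.
kubectl -n kube-system get pods -o wide | grep edge-hub
# Search the logs for entries that contain the User-Agent header.
kubectl -n kube-system logs edge-hub-xxx | grep edge-app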
Enable the caching feature of edge-hub.
To enable the caching feature of edge-hub, add the User-Agent header that the application pods specify in requests sent to the Kubernetes API server to the cache_agents field of the edge-hub-cfg ConfigMap.
The following YAML template provides an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: edge-hub-cfg
  namespace: kube-system
data:
  # This caches data when the edge-app pod whose User-Agent header is edge-app accesses the Kubernetes API server.
  # Restart the application pod after the caching feature is enabled.
  cache_agents: "edge-app" # Separate multiple components with commas (,).
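Alternatively, you can apply the same change with kubectl. The following commands are a sketch based on the example above. Note that the patch overwrites the existing cache_agents value, so include all components that require caching:
# Set cache_agents in the edge-hub-cfg ConfigMap. This overwrites the
# existing value; list all components, separated by commas (,).
kubectl -n kube-system patch configmap edge-hub-cfg \
  --type merge -p '{"data":{"cache_agents":"edge-app"}}'
# Restart the application pod so that the change takes effect.
kubectl delete pod edge-app-pod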
Check whether the data returned to the application pod is cached.
Check whether cached data exists in the /etc/kubernetes/cache/{User-Agent} directory on the node that hosts the application pod.
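For example, for the User-Agent header edge-app used in this topic, run the following command on the edge node:
# List the resources that edge-hub cached for the edge-app component.
ls /etc/kubernetes/cache/edge-app/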