Before you start using Container Service for Kubernetes (ACK), we recommend that you familiarize yourself with the basic concepts and terms to help you better understand the service. This topic describes the basic concepts and terms that are commonly used in ACK.
Cluster
A cluster is a collection of cloud resources, and is used to deploy containerized applications. A cluster includes resources such as Elastic Compute Service (ECS) instances, Server Load Balancer (SLB) instances, and virtual private clouds (VPCs).
The following table lists the cluster types provided by ACK and their descriptions.
Cluster type
Description
ACK Pro cluster
ACK Pro clusters are built on top of ACK standard clusters to enhance reliability and security for enterprise-grade environments and large-scale workloads. Alibaba Cloud offers a service-level agreement (SLA) that includes compensation clauses for ACK Pro clusters.
ACK Basic cluster
ACK creates and maintains the control planes of ACK Basic clusters. Therefore, you need to create and maintain only worker nodes. ACK Basic clusters let you run your workloads in a convenient manner and at lower resource costs.
ACK dedicated cluster
You need to create three control plane nodes and an appropriate number of worker nodes when you create an ACK dedicated cluster. This enables fine-grained control over the cluster infrastructure. However, you must plan and manage the cluster and update nodes on your own.
ACK cluster for heterogeneous computing
This type of cluster leverages nodes equipped with heterogeneous resources, such as NVIDIA GPUs and Hanguang NPUs, to provide heterogeneous computing capabilities. These nodes can be deployed with standard CPU nodes to form heterogeneous computing clusters. This type of cluster eliminates the need to install and manage drivers and is suitable for mainstream AI computing frameworks. It also allows you to share and isolate GPU resources among multiple containers.
ACK cluster that supports sandboxed containers
This type of cluster runs sandboxed containers on ECS Bare Metal instances. This ensures high performance for containers in scenarios where the load is heavy and high bandwidth is required.
ACK cluster for confidential computing
This type of cluster provides confidential computing capabilities based on Intel Software Guard Extensions (Intel SGX) to protect your code and data. This type of cluster is suitable for scenarios such as data protection, blockchain, key management, intellectual property rights, and genomics computing.
ACK Edge cluster
You can use ACK Edge clusters to coordinate services in the cloud and services at the edge. ACK Edge clusters provide node autonomy, cell-based management, and network traffic management, and support native APIs that you can use to deploy and manage resources at the edge without needing to rewrite any code. This provides a native and centralized method for application lifecycle management and resource scheduling in edge computing scenarios.
ACK Serverless cluster
ACK Serverless clusters do not require you to create or manage control planes or worker nodes. All you need to do is use the ACK console or the CLI to configure resources for containers, specify container images for applications, configure how to expose services, and start the applications.
Registered clusters
You can register external Kubernetes clusters in ACK and manage the clusters in a centralized manner. This allows you to manage Kubernetes clusters that reside in data centers or on third-party clouds.
Node
A node is a VM or a physical server that has a container runtime (such as containerd or Docker Engine) installed and can be used to host and manage containers. When you add a node to an ACK cluster, ACK installs an agent program on the node and then registers the node in the cluster. You can increase or decrease the number of nodes in an ACK cluster based on your business requirements.
Node pool
A node pool is a group of nodes that have the same configurations.
ACK provides regular node pools and managed node pools.
Node pool type
Description
Regular node pool
A regular node pool contains one or more nodes that have the same configurations in a cluster. Each node pool corresponds to a scaling group. When you scale a regular node pool, ACK uses Auto Scaling to add or remove nodes. You can create and manage multiple regular node pools based on your requirements.
Note: Some system components are installed in the default node pool. When the system automatically scales the default node pool, the system components may become unstable. If you want to use the auto scaling feature, we recommend that you create a new node pool that has auto scaling enabled.
Managed node pool
Managed node pools can automate O&M tasks for specific nodes. For example, managed node pools can automatically patch Common Vulnerabilities and Exposures (CVE) vulnerabilities or fix specific anomalies. This reduces your O&M work.
For more information, see Managed node pool overview.
VPC
A VPC is a logically isolated virtual networking environment that you control on the cloud. The CIDR block, route tables, and gateways of a VPC are fully customizable. You can deploy Alibaba Cloud resources in VPCs, such as ECS instances, SLB instances, and ApsaraDB RDS (RDS) instances.
Security group
Security groups act as virtual firewalls and provide Stateful Packet Inspection (SPI) and packet filtering capabilities. You can use security groups to define security domains in the cloud. A security group is a logically isolated group of instances that reside in the same region. All instances in a security group are mutually trusted and protected under the same security group rules.
App catalog
App catalog is a feature that ACK provides to facilitate application deployment. App catalog is integrated with Helm to provide extended features, such as a GUI to help you install Helm charts.
Orchestration template
Orchestration templates can be used to save Kubernetes configurations in the YAML format.
Knative
Knative is a Kubernetes-based serverless framework. The purpose of Knative is to create a cloud-native and cross-platform orchestration standard for serverless applications.
Kubernetes
Kubernetes is an open source, portable, and extensible platform on which you can manage containerized workloads and services. Kubernetes facilitates declarative configuration and automation.
Container
Containers are used to package an application along with its runtime dependencies. A node can run multiple containers.
Image
A container image is the standard form of an application package. A container image represents the binary data that encapsulates an application and all its software dependencies. You can deploy applications from custom images that are hosted on Docker Hub, Container Registry, or a private image registry. An image ID is a unique identifier composed of the URI of the image registry and the image tag. The default image tag is latest.
Image registry
An image registry stores container images provided by Kubernetes and images that are built from containers.
Control plane
A control plane manages the worker nodes and the components in a cluster. Components include kube-apiserver, kube-scheduler, kube-controller-manager, etcd, and container network plug-ins.
Worker node
A worker node is a VM or physical server that runs workloads and communicates with the control plane. A worker node hosts the scheduled pods and runs components such as the container runtime, kubelet, kube-proxy, and other optional components.
Namespace
Namespaces are used to divide cluster resources into virtual and isolated spaces. By default, a Kubernetes cluster is initialized with three namespaces: default, kube-system, and kube-public. The cluster administrator can create new namespaces.
Pod
A pod is the smallest deployable unit that you can create or manage for applications in Kubernetes. A pod encapsulates one or more containers, storage resources, a unique IP address, and configurations that specify how the containers run.
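As an illustration, the following is a minimal pod manifest that runs a single container. The names and the image tag are placeholders:

```yaml
# A minimal pod that runs a single NGINX container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # placeholder pod name
  labels:
    app: nginx           # label used by selectors (Services, ReplicaSets)
spec:
  containers:
  - name: nginx
    image: nginx:1.25    # container image and tag
    ports:
    - containerPort: 80  # port the container listens on
```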
ReplicationController
A ReplicationController ensures that a specified number of pod replicas are always running in a Kubernetes cluster. In other words, a ReplicationController monitors the number of running pod replicas and adjusts them to the specified number. You must set the number of running pod replicas to one or larger for a ReplicationController. If the number of running pod replicas drops below the number you specify, the ReplicationController launches new pod replicas. If the number of pod replicas exceeds the number you specify, the ReplicationController terminates redundant pod replicas.
ReplicaSet
ReplicaSet is the successor to ReplicationController and supports more selector types. ReplicaSets are not independently deployed, but are used by Deployments to guarantee the availability of the desired number of pods.
Workload
Workloads are applications that run in Kubernetes. The following table lists the types of workloads in Kubernetes.
Workload type
Description
Deployment
A Deployment manages a set of identical pod replicas and supports declarative updates, such as rolling updates and rollbacks. Deployments are suitable for stateless applications whose replicas have the same features but are independent of each other.
StatefulSet
A StatefulSet ensures the orderly deployment, scaling, and rolling update of applications. If you want to use volumes to persist data for your workloads, you can choose StatefulSet as the type of your workloads.
DaemonSet
A DaemonSet ensures that all nodes, or a specified subset of nodes, in your cluster run a copy of a pod. Unlike Deployments, DaemonSets create pods on the specified nodes and ensure that each of those nodes runs a DaemonSet pod. DaemonSets are suitable for cluster-level tasks such as log collection and monitoring.
Job
A Job runs a one-time task. You can use a Job to run multiple pods in parallel.
CronJob
A CronJob performs periodic or recurring operations based on a schedule. CronJobs are suitable for tasks such as backing up data and sending emails.
CustomResourceDefinition (CRD)
You can use CRDs to add third-party workloads to Kubernetes. CRDs provide a method to define custom resources.
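All of the workload types above are declared as YAML manifests. As a sketch, the following is a minimal Deployment that keeps three replicas of a stateless application running; the names and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # placeholder name
spec:
  replicas: 3              # desired number of pod replicas
  selector:
    matchLabels:
      app: nginx           # pods managed by this Deployment
  template:                # pod template used for every replica
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If a pod fails or is deleted, the Deployment's underlying ReplicaSet creates a replacement to restore the desired replica count.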
Label
Labels are key-value pairs that are added to resource objects. Labels are intended to specify attributes of objects that are useful and relevant to users. Labels do not imply semantics to the core system. You can add labels to objects when you create the objects and modify the labels at any time afterwards. If you want to add multiple labels to a resource object, the key of each label must be unique.
Service
A Service represents an abstract way to expose a set of backend pods. When a Service receives requests, kube-proxy selects the backend pods that match the selectors of the Service and sends the requests to the ports that are specified in the Service configuration.
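The selector mechanism described above can be sketched as follows. This Service forwards requests to all pods that carry the label app: nginx; the names are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # placeholder name
spec:
  selector:
    app: nginx          # requests are routed to pods with this label
  ports:
  - port: 80            # port exposed by the Service
    targetPort: 80      # port on the backend pods
  type: ClusterIP       # internal virtual IP; use LoadBalancer to expose via SLB
```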
Ingress
An Ingress is a collection of rules that manage external access to Services in your cluster. With Ingresses, you can configure externally reachable URLs for Services, load balance traffic, terminate SSL/TLS connections, and offer name-based virtual hosting. You can create an Ingress by sending an HTTP POST request to the API server of your cluster. An Ingress controller is responsible for fulfilling an Ingress, usually with a load balancer. You can also configure edge routers or additional frontends to handle the traffic in a highly available manner.
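A minimal example of such a rule set follows. It routes HTTP requests for a hostname to a backend Service; the hostname and Service name are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: demo.example.com        # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service   # Service that receives the traffic
            port:
              number: 80
```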
ConfigMap
ConfigMaps can be used to store fine-grained information, such as an attribute. ConfigMaps can also be used to store coarse-grained information, such as configuration files or JSON objects. You can use ConfigMaps to store non-sensitive, unencrypted configuration information.
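Both granularities can live in the same object, as this illustrative manifest shows (keys and values are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"       # fine-grained key-value pair
  app.properties: |       # coarse-grained configuration file
    cache.size=128
    feature.flag=true
```

Pods can consume these values as environment variables or as files mounted into a volume.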
Secret
A Secret is used to store and manage sensitive information, such as passwords and certificates.
Volume
Kubernetes volumes are similar to Docker volumes. A Docker volume belongs to a container, whereas a Kubernetes volume belongs to a pod and exists for the lifetime of the pod. The volumes declared in a pod are shared by all containers in the pod.
Persistent volume (PV)
A PV is a storage resource in a Kubernetes cluster, just like a node is a computing resource in a cluster. The lifecycle of a PV is independent of the lifecycle of the pod that has the PV mounted. Different types of PVs can be provisioned by using different types of StorageClasses.
Persistent volume claim (PVC)
A PVC is a consumer of PVs, just like a pod is a consumer of nodes.
StorageClass
A StorageClass is used to enable dynamic provisioning of PVs. You can enable dynamic provisioning to automate PV creation based on your requirements.
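The relationship between the three objects can be sketched as follows: a PVC requests storage from a StorageClass, which dynamically provisions a matching PV. The StorageClass name alicloud-disk-ssd is an example of a class provided by ACK; substitute a class that exists in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce                       # mounted read-write by a single node
  storageClassName: alicloud-disk-ssd   # assumed StorageClass name
  resources:
    requests:
      storage: 20Gi                     # requested disk size
```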
Auto scaling
Auto scaling is a feature that ACK provides to dynamically scale computing resources for your business in a cost-effective manner. Auto scaling is suitable for online workloads, large-scale computing and training tasks, GPU-accelerated deep learning tasks, model inference and model training tasks that use shared GPUs, and workloads whose load changes periodically. The following table describes the auto scaling components provided by ACK.
Scaling category
Component
Description
Workload scaling
Horizontal Pod Autoscaler (HPA)
HPA automatically scales the number of pods based on metrics such as CPU utilization. You can use HPA to scale workloads that support the scale operation, such as Deployments and StatefulSets.
CronHPA
To reduce resource waste in some scenarios, ACK provides the kubernetes-cronhpa-controller component to automatically scale resources based on predefined schedules. You can use the CronHPA to scale workloads that support the scale operation, such as Deployments and StatefulSets. The CronHPA is compatible with the HPA. You can use the CronHPA and HPA in combination to scale workloads.
Vertical Pod Autoscaler (VPA)
The VPA automatically adjusts the CPU and memory reservations for your pods based on the resource usage of the pods. This adjustment can improve cluster resource utilization and free up CPU and memory for other pods. This way, pods are scheduled to nodes that have sufficient resources available. The VPA also maintains the resource amounts specified by the request and limit parameters of the pod. The VPA is used for applications that cannot be horizontally scaled. Typically, the VPA is used when pods are recovered from anomalies.
Resource scaling
Cluster Autoscaler
ACK provides the auto scaling component (Cluster Autoscaler) to automatically scale nodes. Regular instances, GPU-accelerated instances, and preemptible instances can be automatically added to or removed from an ACK cluster to meet your business requirements. This component supports multiple scaling modes, various instance types, and instances that are deployed across zones, and is applicable to diverse scenarios such as online workloads, deep learning tasks, and large-scale computing tasks.
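As an illustration of workload scaling, the following is a minimal HPA that keeps a Deployment between 2 and 10 replicas based on average CPU utilization; the target names are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:               # workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment      # placeholder Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # target average CPU utilization (%)
```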
Observability
The observability capability of Kubernetes includes monitoring and logging. Monitoring allows developers to keep track of the operations of a system. Logging facilitates diagnostics and troubleshooting.
Helm
Helm is a package management platform for Kubernetes. A Helm chart is a package of the configurations of the resources required by an application.
nodeAffinity
You can configure nodeAffinity settings to schedule pods to worker nodes with matching labels.
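For example, the following pod spec schedules the pod only onto nodes that carry a matching label. The label key and value are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # placeholder node label key
            operator: In
            values:
            - ssd              # only nodes labeled disktype=ssd qualify
  containers:
  - name: app
    image: nginx:1.25
```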
Taint
Taints are opposite to nodeAffinity and allow nodes to repel specific pods.
Toleration
Tolerations are applied to pods and allow (but do not require) pods to be scheduled to nodes with matching taints.
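Taints and tolerations work as a pair, which can be sketched as follows. The taint key, value, and node name are placeholders:

```yaml
# First, taint a node so that it repels pods without a matching toleration:
#   kubectl taint nodes <node-name> dedicated=gpu:NoSchedule
# Then add a toleration to the pods that are allowed on that node:
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"   # must match the taint's effect
  containers:
  - name: app
    image: nginx:1.25
```

Note that the toleration only permits scheduling onto the tainted node; to require it, combine the toleration with nodeAffinity.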
podAffinity
You can configure podAffinity settings to schedule pods to the same topological domain as the pods that meet the specified podAffinity settings. For example, you can configure podAffinity settings to schedule applications that communicate with each other to the same topological domain, such as a host. This reduces the network latency between these applications.
podAntiAffinity
You can configure podAntiAffinity settings to avoid scheduling pods to the same topological domain as pods that match the specified podAntiAffinity settings. For example, you can configure podAntiAffinity settings to spread the pods of an application across different topological domains, such as multiple hosts. This helps enhance the stability of the application.
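The spreading behavior described above can be sketched with the following Deployment, which forces each replica onto a different node; the names are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web                        # keep replicas of this app apart
            topologyKey: kubernetes.io/hostname # one replica per node
      containers:
      - name: web
        image: nginx:1.25
```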
Service mesh (Istio)
Istio is an open platform that allows you to connect, protect, control, and observe microservices. Alibaba Cloud Service Mesh (ASM) is a fully managed service mesh platform that is compatible with the open source Istio service mesh. ASM allows you to manage services in a simplified manner. For example, you can use ASM to route and split inter-service traffic, secure inter-service communication with authentication, and observe the behavior of services in meshes.
References
For more information about Kubernetes terms, see Kubernetes concepts.