A Deployment is a key workload type in Kubernetes, often referred to as a "stateless workload." It maintains a specified number of pods running in a desired state within the cluster. This topic guides you through creating stateless applications in an ACK cluster using the console and kubectl.
Reading tips
Before creating a workload, it is advisable to review Workloads to understand the fundamentals and considerations of workloads. This topic is divided into two main sections:
- Create Deployment: outlines a streamlined process for creating a Deployment using either the console or kubectl.
- Configuration Parameters: details the console configuration options and provides YAML examples for kubectl usage.
Create Deployment
Create by using the console
The steps below provide a simplified process for creating a workload. You can follow this guide for a quick deployment and verification. Once you're comfortable with the basics, you can customize your workload by referring to Configuration Parameters.
- Configure basic application information
  - Log on to the Container Service Management Console and select Cluster List in the left-side navigation pane. On the Cluster List page, click the name of the target cluster. In the left-side navigation pane, choose Workloads > Stateless. On the Stateless page, click Create With Image.
  - On the Basic Application Information wizard page, enter the application's basic details, then click Next to go to the Container Configuration wizard page.
- Configure the container
  In the Container Configuration section, specify the Image Name and Port for the container; you can leave the other settings at their default values. Then click Next to go to the Advanced Configuration wizard page. Use the image address below.
  Important: Before pulling the image, ensure that the cluster has public network access. If you kept the default Configure SNAT For VPC option when creating the cluster, it already has public network access. If not, see Enable public network access for an existing cluster.
  registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
- Complete the advanced configuration
  On the Advanced Configuration wizard page, configure access, scaling, scheduling, and labels and annotations. In the Access Settings section, set the method used to expose the backend pods, click OK, and then click Create at the bottom of the page.
  Important: This step creates a Service of type LoadBalancer to expose the workload, which incurs fees for the associated CLB instance. For billing details, see Pay-as-you-go. If you no longer need the CLB instance, release it promptly.
- View the application
  On the Creation Completed wizard page, review the application task. In the Application Task Submitted panel, click View Application Details. Click the Access Method tab, locate the newly created Service (for example, nginx-test-svc), and click the link in the External Endpoint column to access the application.
You can View, Edit, Redeploy, and perform other operations on the created workload through the console.
Create by using kubectl
Before creating a workload, ensure you have connected to the cluster using kubectl. For instructions, see Obtain the cluster KubeConfig and connect to the cluster using the kubectl tool.
- Run one of the following commands to create a workload. The command specifies only the container image; all other configurations use default values.
  - For clusters running Kubernetes 1.18 or later:
  kubectl create deployment nginx --image=registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
  - For clusters running Kubernetes versions earlier than 1.18:
  kubectl run -it nginx --image=registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
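For reference, the imperative command above generates a Deployment object. A rough declarative equivalent that you could apply with kubectl apply -f is sketched below (the exact generated defaults may vary slightly by cluster version):

```yaml
# Approximate Deployment that "kubectl create deployment nginx" generates (a sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1 # kubectl create deployment defaults to one replica
  selector:
    matchLabels:
      app: nginx # kubectl derives this label from the Deployment name
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
```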
- Run the following command to create a LoadBalancer Service that exposes the workload through an SLB instance.
  kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
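The kubectl expose command above is roughly equivalent to applying a Service manifest like the following sketch (the selector assumes the app=nginx label that kubectl create deployment sets on the pods):

```yaml
# Declarative equivalent of the kubectl expose command (a sketch)
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx # Matches the pods of the nginx Deployment
  ports:
  - port: 80 # Service port
    targetPort: 80 # Container port
  type: LoadBalancer # ACK provisions an SLB/CLB instance for this type
```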
- Run the following command to view the public IP address of the Service.
  kubectl get svc
  Expected output:
  NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
  kubernetes   ClusterIP      172.16.**.***   <none>          443/TCP        4h47m
  nginx        LoadBalancer   172.16.**.***   106.14.**.***   80:31130/TCP   1h10m
- Enter the public IP address of the nginx Service (106.14.**.***) in a browser to access the Nginx container run by the workload.
Configuration parameters
Console configuration parameters
Basic application information
| Configuration item | Description |
| --- | --- |
| Application Name | The name of the workload. The names of the pods that belong to the workload are generated from this name. |
| Number Of Replicas | The number of pods in the workload. Default: 2. |
| Type | The type of the workload. In this topic, select Stateless (Deployment). For help choosing a workload type, see Create Workloads. |
| Label | The labels of the workload. |
| Annotation | The annotations of the workload. |
| Time Zone Synchronization | Specifies whether the container uses the same time zone as the node on which it runs. |
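To relate these console fields to a manifest, the sketch below (with assumed placeholder values) shows where each item lands in a Deployment. Time Zone Synchronization is implemented separately, for example by mounting the node's /etc/localtime as shown in the YAML example later in this topic.

```yaml
# Where the console fields above land in a Deployment manifest (placeholder values)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app # Application Name
  labels:
    team: frontend # Label
  annotations:
    description: demo # Annotation
spec:
  replicas: 2 # Number Of Replicas (default 2)
```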
Container configuration
Advanced configuration
| Configuration card | Configuration item | Description |
| --- | --- | --- |
| Access Settings | Service | A Service provides a fixed, unified Layer 4 (transport layer) entry for a group of pods and must be configured when you expose a workload externally. Services support multiple types, including Virtual Cluster IP, Node Port, and Load Balancer. Before configuring a Service, see Service Management for the basics. |
| | Ingress | An Ingress provides a Layer 7 (application layer) entry for multiple Services in the cluster and forwards requests to different Services based on domain name matching. Before using an Ingress, you must install an Ingress controller. ACK provides several options for different scenarios; see Comparison of Nginx Ingress, ALB Ingress, and MSE Ingress to choose one. |
| Scaling Configuration | Metric Scaling | Triggers automatic scaling based on the performance metrics of the container. Metric scaling automatically adjusts the total resources used by the workload as business load fluctuates: it scales out to relieve pressure during high load and scales in to save resources during low load. For more information, see Use Container Horizontal Pod Autoscaling (HPA). |
| | Scheduled Scaling | Triggers workload scaling at scheduled times. This suits workloads whose load changes periodically, such as the traffic peaks a social media service sees after lunch and dinner. For more information, see Use Container Cron Horizontal Pod Autoscaling (CronHPA). |
| Scheduling Settings | Upgrade Method | The mechanism the workload uses to replace old pods with new pods when the pod configuration changes. |
| | | Affinity, anti-affinity, and toleration configurations control scheduling, for example, whether pods run on specific nodes. Scheduling is relatively complex and requires planning in advance. For detailed operations, see Scheduling. |
| Labels And Annotations | Pod Labels | Adds labels to each pod that belongs to the workload. Various resources in the cluster, including workloads and Services, match pods through labels. ACK also adds a default label to the pods. |
| | Pod Annotations | Adds annotations to each pod that belongs to the workload. Some ACK features rely on annotations, and you can edit them when using those features. |
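The Metric Scaling option corresponds to a HorizontalPodAutoscaler object. A minimal sketch targeting the Deployment used in this topic follows; the HPA name and the 70% CPU threshold are assumptions for illustration.

```yaml
# Minimal HPA sketch for the nginx-test Deployment (name and threshold assumed)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-test-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-test
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # Scale out when average CPU usage exceeds 70%
```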
Workload YAML Example
apiVersion: apps/v1
kind: Deployment # Workload type
metadata:
  name: nginx-test
  namespace: default # Change the namespace as needed
  labels:
    app: nginx
spec:
  replicas: 2 # Number of pods
  selector:
    matchLabels:
      app: nginx
  template: # Pod configuration
    metadata:
      labels: # Pod labels
        app: nginx
      annotations: # Pod annotations
        description: "This is an application deployment"
    spec:
      containers:
      - name: nginx # Container name
        image: nginx:1.7.9 # Use a specific version of the Nginx image
        ports:
        - name: nginx # Port name
          containerPort: 80 # Port exposed by the container
          protocol: TCP # TCP or UDP; default is TCP
        command: ["/bin/sh"] # Container startup command
        args: ["-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY) && exec nginx -g 'daemon off;'"] # Print the variables, then start nginx
        stdin: true # Enable standard input
        tty: true # Allocate a virtual terminal
        env:
        - name: SPECIAL_LEVEL_KEY
          valueFrom:
            configMapKeyRef:
              name: special-config # Name of the ConfigMap
              key: SPECIAL_LEVEL # Key in the ConfigMap
        securityContext:
          privileged: true # true enables privileged mode, false disables it; default is false
        resources:
          limits:
            cpu: "500m" # Maximum CPU usage: 500 millicores
            memory: "256Mi" # Maximum memory usage: 256 MiB
            ephemeral-storage: "1Gi" # Maximum ephemeral storage usage: 1 GiB
          requests:
            cpu: "200m" # Requested CPU: 200 millicores
            memory: "128Mi" # Requested memory: 128 MiB
            ephemeral-storage: "500Mi" # Requested ephemeral storage: 500 MiB
        livenessProbe: # Liveness probe configuration
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe: # Readiness probe configuration
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: tz-config
        hostPath:
          path: /etc/localtime # Mount the host's /etc/localtime into the container at the same path via the volumeMounts and volumes fields
---
# Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-svc
  namespace: default # Change the namespace as needed
  labels:
    app: nginx
spec:
  selector:
    app: nginx # Must match the pod labels so that the Service points to the correct pods
  ports:
  - port: 80 # Port the Service exposes within the cluster
    targetPort: 80 # Port the application listens on (containerPort)
    protocol: TCP # Protocol; default is TCP
  type: ClusterIP # Service type; the default, ClusterIP, allows access only within the cluster
---
# Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default # Change the namespace as needed
  annotations:
    kubernetes.io/ingress.class: "nginx" # Specify the Ingress controller type; newer clusters can use spec.ingressClassName instead
    # If you use an Alibaba Cloud SLB Ingress controller, you can specify:
    # service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxxxx"
    # service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: "slb.spec.s1.small"
spec:
  rules:
  - host: foo.bar.com # Replace with your domain name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-test-svc # Backend Service name; must match the Service defined above
            port:
              number: 80 # Backend Service port
  tls: # Optional; enables HTTPS
  - hosts:
    - foo.bar.com # Replace with your domain name
    secretName: tls-secret # Name of the Secret that holds the TLS certificate
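The secretName field in the Ingress refers to a TLS Secret that must exist in the same namespace. A sketch of such a Secret follows; the certificate and key values are placeholders, and in practice you can also create it with kubectl create secret tls.

```yaml
# Sketch of the TLS Secret referenced by the Ingress (placeholder values)
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate> # Placeholder
  tls.key: <base64-encoded private key> # Placeholder
```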
References
- For applications that require stable persistent storage, such as databases, consider using a StatefulSet. For details, see Create a StatefulSet.
- If you encounter issues when creating workloads, see Workload FAQ.
- For troubleshooting abnormal pods, see Pod Abnormal Issue Troubleshooting.