
Container Service for Kubernetes: Create a stateless application by using a Deployment

Last Updated: Feb 11, 2025

A Deployment is a key workload type in Kubernetes, often referred to as a "stateless workload." It maintains a specified number of pods running in a desired state within the cluster. This topic guides you through creating stateless applications in an ACK cluster using the console and kubectl.

Reading tips

Before creating a workload, it is advisable to review Workloads to understand the fundamentals and considerations of workloads. This topic is divided into two main sections:

  • Create Deployment: Outlines a streamlined process for creating a deployment using both the console and kubectl.

  • Configuration Parameters: Details the console configuration options and provides YAML examples for kubectl usage.

Create Deployment

Create by using the console

Important

The steps below provide a simplified process for creating a workload. You can follow this guide for a quick deployment and verification. Once you're comfortable with the basics, you can customize your workload by referring to Configuration Parameters.

  1. Configure Basic Application Information

    1. Log on to the Container Service Management Console and, in the left-side navigation pane, select Cluster List. On the Cluster List page, click the name of the target cluster. Then, in the left-side navigation pane, choose Workloads > Stateless. On the Stateless page, click Create With Image.


    2. On the Basic Application Information configuration wizard page, enter the application's basic details. Then, click Next to proceed to the Container Configuration wizard page.


  2. Configure Container

    In the Container Configuration section, specify the Image Name and Port for the container. You may leave other settings at their default values. Then, click Next to move to the Advanced Configuration wizard page. The image address is provided below.

    Important

    Before pulling the image, ensure the cluster has public network access. If you chose the default Configure SNAT For VPC option when creating the cluster, it already has public network access. If not, refer to Enable public network access for an existing cluster.

    registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest


  3. Complete Advanced Configuration

    On the Advanced Configuration wizard page, configure access, scaling, scheduling, and label annotations. In the Access Settings section, set the method to expose the backend pods, click OK, and then click Create at the bottom.

    Important

    This step will create a Service of type LoadBalancer to expose the workload, which will incur costs associated with the CLB instance. For billing details, see Pay-as-you-go. If you do not intend to use the CLB instance later, please release it promptly.


  4. View Application

    On the Creation Completed wizard page, review the application task. In the Application Task Submitted panel, click View Application Details. Click the Access Method tab, locate the newly created Service (for example, nginx-test-svc), and click the link in the External Endpoint column to access the application.


    You can View, Edit, Redeploy, and perform other operations on the created workload in the console.

Create by using kubectl

Important

Before creating a workload, ensure you have connected to the cluster using kubectl. For instructions, see Obtain the cluster KubeConfig and connect to the cluster using the kubectl tool.

  1. Run the following command to create a workload. The command specifies only the container image; other configurations use default values.

    • For clusters running Kubernetes 1.18 or later, use the command below to start.

      kubectl create deployment nginx --image=registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    • For clusters running versions earlier than 1.18, use the command below to start.

      kubectl run -it nginx --image=registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
  2. Run the following command to create a Service of type LoadBalancer that exposes the workload through an SLB (CLB) instance.

    kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer
  3. Run the command below to view the public IP address of the Service.

    kubectl get svc

    Expected output:

    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
    kubernetes   ClusterIP      172.16.**.***    <none>          443/TCP        4h47m
    nginx        LoadBalancer   172.16.**.***    106.14.**.***   80:31130/TCP   1h10m
  4. Enter the public IP address of the nginx Service (106.14.**.***) in your browser to access the Nginx container that belongs to the workload.

Configuration parameters

Console configuration parameters

Basic application information


Configuration item

Description

Application Name

The name of the workload. The names of the pods that belong to the workload are generated from this name.

Number Of Replicas

The number of pods contained in the workload. The default number is 2.

Type

The type of workload. In this topic, select Stateless (deployment). For workload selection, see Create Workloads.

Label

The label of the workload.

Annotation

The annotation of the workload.

Time Zone Synchronization

Whether the container and the node it resides on use the same time zone.

Container configuration

Basic Configuration


Configuration item

Description

Image Name

  • Select Image

    You can click Select Image to choose the desired image. The following three types of images are supported.

    • Container Registry Enterprise Edition: You can select enterprise edition images hosted in Container Registry ACR. You need to select the region to which the image belongs and the Container Registry instance. For more information about ACR, see What is Container Registry ACR.

    • Container Registry Personal Edition: You can select personal edition images hosted in Container Registry ACR. You need to select the region to which the image belongs and the Container Registry instance.

    • Artifact Center: Some common images provided by Alibaba Cloud and the OpenAnolis community. When using the Artifact Center, you need to enable public network access for the cluster. For more information about the Artifact Center, see Artifact Center.

    When using images from other sources, you can directly enter the image address in the format of domainname/namespace/imagename:tag. If domainname is not specified, for example, if you enter nginx:1.7.9, the image will be pulled from DockerHub.

  • Select Image Pull Policy

    ACK supports the following three image pull policies (imagePullPolicy):

    • Use Local Image First (IfNotPresent) (default): If the image exists on the worker node, the local copy is used. Otherwise, the image is pulled from the registry.

    • Always Pull Image (Always): The image is pulled from the container registry every time the workload is deployed or scaled out, instead of using the local copy.

    • Use Local Image Only (Never): Only the local image is used. If no local image exists, the pull fails.

  • Set Image Key

    When using ACR or third-party repositories, you may need to configure a key to pull images.

    Note

    For Container Registry Enterprise Edition instances, you can pull images without using Secrets. For specific operations, see Install and use the non-managed version of the password-free component.
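As an illustration, the image pull policy and image pull Secret map to the following pod spec fields. This is a minimal sketch; the Secret name regcred is a hypothetical example created beforehand with `kubectl create secret docker-registry`.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-pull-demo
spec:
  containers:
  - name: app
    image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
    imagePullPolicy: IfNotPresent  # Always or Never are the other options
  imagePullSecrets:
  - name: regcred                  # hypothetical Secret holding registry credentials
```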

Resource Limits

The container's resources.limits field. For more information, see Requests and Limits.

Requested Resources

The container's resources.requests field. For more information, see Requests and Limits.

Container Startup Items

  • stdin: Indicates that standard input is enabled for the container.

  • tty: Indicates that a virtual terminal is allocated for the container to facilitate sending signals to the container.

These two options are usually used together, indicating that the terminal (tty) is bound to the standard input (stdin) of the container. For example, an interactive program receives the standard input from the user and displays it in the terminal.

Privileged Container

  • If the privileged container is checked, then privileged=true, and the privileged mode is enabled.

  • If the privileged container is not checked, then privileged=false, and the privileged mode is not used.

The privileged mode allows the container to have permissions similar to the operating system of the worker node it resides on, such as accessing hardware devices and mounting file systems.

Init Container

Selecting this option indicates that an init container is created.

Init containers provide a mechanism to block or delay the startup of application containers, for example, to wait until a dependent service becomes available. After all init containers complete successfully, the other containers in the pod start in parallel. Init containers can also include utilities and installation scripts that are not present in the application image to initialize the runtime environment for application containers, such as configuring kernel parameters and generating configuration files. For more information, see Init Containers.
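A minimal sketch of an init container that delays the application container until a hypothetical dependent service (my-db-svc) resolves in DNS:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  - name: wait-for-db   # must complete before the app container starts
    image: busybox:1.36
    command: ['sh', '-c', 'until nslookup my-db-svc; do sleep 2; done']  # my-db-svc is hypothetical
  containers:
  - name: app
    image: nginx:1.7.9
```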

Port Settings


Configuration item

Description

Name

The name of the container port. It is used only to distinguish between ports and has no other function.

Container Port

The port exposed by the container, which must be in the range of 1 to 65535. A container must expose a port before it can be accessed from outside the pod or by other containers in the same pod.

All containers in a pod share the pod's network stack. Therefore, when you configure multiple containers in a pod, their ports must not conflict.

Protocol

The Layer 4 (transport layer) protocol used by the container port, supporting TCP and UDP.
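The port settings above correspond to the ports field of the container spec. For example, a container that exposes two named ports (the second port is a hypothetical metrics endpoint) might look like:

```yaml
containers:
- name: web
  image: nginx:1.7.9
  ports:
  - name: http          # the name only distinguishes ports
    containerPort: 80
    protocol: TCP
  - name: metrics       # hypothetical second port; must not clash with other containers in the pod
    containerPort: 9113
    protocol: TCP
```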

Environment Variables


Configuration item

Description

Type

Specify the type of environment variable that you want to add. Valid values:

  • Custom

    Use env to hard code environment variables directly in the workload.

  • Configuration Item

    Use envFrom to obtain non-sensitive configuration data stored in ConfigMap.

  • Secret

    Use envFrom to obtain sensitive information stored in a Secret, such as passwords and API keys.

  • Variable/variable Reference

    Use value/valueFrom to obtain other environment variables or predefined values.

  • Resource Reference

    Use resourceFieldRef to obtain resource information of the node where the pod resides.

ConfigMaps and Secrets support referencing all keys. For example, if you select the Secret type and specify only the target Secret, all of its keys are referenced by default.

The corresponding YAML also references the entire Secret.

Select Resource Reference to use the resourceFieldRef parameter, which references resource values already declared by the container in the pod specification and passes them to the container as environment variables.
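A sketch combining two of the environment variable types above: envFrom referencing an entire Secret, and resourceFieldRef passing a declared resource value to the container. The Secret name app-secret is a hypothetical example.

```yaml
containers:
- name: app
  image: nginx:1.7.9
  resources:
    limits:
      cpu: "500m"
  envFrom:
  - secretRef:
      name: app-secret        # hypothetical; every key becomes an environment variable
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: app
        resource: limits.cpu  # passes the declared CPU limit into the container
```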

Variable Name

Set the name of the environment variable in the pod.

Variable/variable Reference

Specify the value of the environment variable or the value obtained from other sources.

Health Check


Configuration Item

Description

Liveness Probe: Checks if the container is functioning properly. If not, the container is restarted.

Request Type: HTTP Request

Sends an HTTP request to the container to periodically check its functionality.

  • Protocol: HTTP / HTTPS.

  • Path: The path for the HTTP server access.

  • Port: The access port or port name exposed by the container. Must be between 1 and 65535.

  • HTTP Header: Custom request headers for the HTTP request. Supports key-value pairs.

  • Initial Delay Time (seconds): This refers to initialDelaySeconds, which is the wait time in seconds before the first probe is conducted following the container's startup. By default, this is set to 3 seconds.

  • Probe Frequency (seconds): This refers to periodSeconds, the time interval between consecutive probe executions. By default, it is set to 10 seconds, with the minimum allowable interval being 1 second.

  • Timeout (seconds): Refers to timeoutSeconds, the duration before a probe times out. The default and minimum value is 1 second.

  • Healthy Threshold: The number of consecutive successful probes required after a failure. Default is 1, minimum is 1.

  • Unhealthy Threshold: The number of consecutive failed probes required after a success. Default is 3, minimum is 1.
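The HTTP probe parameters above map one-to-one to fields of the probe spec. A sketch, assuming a hypothetical /healthz endpoint:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # hypothetical health-check path
    port: 80
    scheme: HTTP
    httpHeaders:
    - name: X-Custom-Header # custom request header as a key-value pair
      value: Awesome
  initialDelaySeconds: 3    # wait before the first probe
  periodSeconds: 10         # probe frequency
  timeoutSeconds: 1
  successThreshold: 1       # Healthy Threshold
  failureThreshold: 3       # Unhealthy Threshold
```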

Request Type: TCP Connection

Attempts to open a TCP socket connection to the container. If the connection is established, the container is considered healthy.

  • Port: The access port or port name exposed by the container. Must be between 1 and 65535.

  • Initial Delay Time (seconds) refers to initialDelaySeconds, which is the wait time in seconds before the first probe is conducted after the container has started. By default, this is set to 15 seconds.

  • Probe Frequency (seconds): Refers to periodSeconds, which is the time interval between consecutive probe executions. The default interval is 10 seconds, with a minimum allowable interval of 1 second.

  • Timeout (seconds): This refers to timeoutSeconds, the duration allowed for the probe to time out. The default setting is 1 second, with the minimum also being 1 second.

  • Healthy Threshold: The number of consecutive successful probes required after a failure. Default is 1, minimum is 1.

  • Unhealthy Threshold: The number of consecutive failed probes required after a success. Default is 3, minimum is 1.

Request Type: Command Line

Runs a command in the container to check its health status.

  • Command Line: The command used to check the container's health.

  • Initial Delay Time (seconds): This refers to initialDelaySeconds, which is the wait time in seconds before the first probe is executed following the container's startup. By default, this is set to 5 seconds.

  • Probe Frequency (seconds): Refers to periodSeconds, the time interval between consecutive probe executions. By default, it is set to 10 seconds, with a minimum allowable interval of 1 second.

  • Timeout (seconds): Refers to timeoutSeconds, which is the duration allowed for the probe to time out. The default setting is 1 second, with the minimum also being 1 second.

  • Healthy Threshold: The number of consecutive successful probes required after a failure. Default is 1, minimum is 1.

  • Unhealthy Threshold: The number of consecutive failed probes required after a success. Default is 3, minimum is 1.

Readiness Probe: Determines if the container is ready to accept traffic.

Startup Probe: Checks if the application within the container has started.

Note

Startup probes are supported in Kubernetes 1.18 and later.
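A startup probe can protect a slow-starting container from being killed by the liveness probe. A sketch, again assuming a hypothetical /healthz endpoint:

```yaml
startupProbe:
  httpGet:
    path: /healthz       # hypothetical health-check path
    port: 80
  failureThreshold: 30   # allows up to 30 x 10 s = 300 s for startup
  periodSeconds: 10
```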

Lifecycle


Configuration item

Description

Start Execution

Specify a command and parameter that take effect before the container starts. The start command and parameters define the operations performed when the container starts to initialize the application service. This is suitable for application deployment scenarios that require specific environment variables, mount targets, or port mappings.

Post Start

Specify a command that takes effect after the container starts. The post-start command is used to perform specific tasks after the container starts, such as initializing configurations and running scripts. This is suitable for scenarios where preparation work needs to be completed before the main process.

Pre Stop

Specify a command that takes effect before the container stops. The pre-stop command is used to shut down the application process within the container, ensuring data consistency and normal termination of services. This is suitable for scenarios where safe shutdown is required to avoid data loss or service interruption.

You can specify the following parameters to configure the lifecycle of the container: Start, Post Start, and Pre Stop. For specific operations, see Configure Lifecycle.
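A sketch of the postStart and preStop hooks described above, using simple shell commands as stand-ins for real initialization and shutdown tasks:

```yaml
containers:
- name: app
  image: nginx:1.7.9
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo started > /tmp/started"]  # example initialization task
    preStop:
      exec:
        command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]       # graceful shutdown before termination
```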

Volume

Configuration item

Description

Add Local Storage

Mount the local storage volume of the node where the pod resides. The data of the local storage volume is stored on the node. When the node is shut down, the data is no longer available. Local storage also supports Secret, ConfigMap, and other temporary volume types. The storage function is relatively complex. Before using storage volumes, it is recommended to read Storage to understand the basic knowledge of storage in ACK.

Add Cloud Storage Declaration (persistentvolumeclaim)

Mount the cloud storage volume for the pod to persistently store important data within the container. The cloud storage volume is a remote storage service located outside the cluster, completely independent of the worker node, and not affected by node changes. When using ACK, cloud storage volumes are usually cloud disks, NAS, OSS, and other storage services provided by Alibaba Cloud. The storage function is relatively complex. Before using storage volumes, it is recommended to read Storage to understand the basic knowledge of storage in ACK.
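A sketch of mounting a cloud storage volume through a PersistentVolumeClaim. The claim name nginx-pvc is a hypothetical example that would be backed by a cloud disk, NAS, or OSS volume:

```yaml
spec:
  containers:
  - name: app
    image: nginx:1.7.9
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nginx-pvc   # hypothetical PVC bound to a cloud storage volume
```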

Log Configuration

Collection Configuration

  • Logstore: Generate a corresponding Logstore in the log service project associated with the cluster to store the collected logs. Before using logs, it is recommended to read Log Management to understand the basic knowledge of logs in ACK.

  • Container Log Path: The log path within the container that needs to be collected. When set to Stdout, it indicates that the standard output log of the container is collected.

Custom Tag

Add custom tags to the collected logs. The tags are attached to the logs as key-value pairs to facilitate filtering, statistics, and analysis.

Advanced configuration

Configuration Card

Configuration Item

Description

Access Settings

Service

A Service provides a fixed and unified Layer 4 (transport layer) entry for a group of pods. It is a resource that must be configured when exposing workloads externally. Service supports multiple types, including Virtual Cluster IP, Node Port, Load Balancer, and more. Before configuring a Service, see Service Management to understand the basic knowledge of Service.

Ingress

An Ingress provides a Layer 7 (application layer) entry for multiple services in the cluster and forwards requests to different services based on domain name matching. Before using Ingress, you need to install an Ingress Controller. ACK provides multiple options suitable for different scenarios. See Comparison of Nginx Ingress, ALB Ingress, and MSE Ingress for selection.

Scaling Configuration

Metric Scaling

Trigger automatic scaling by monitoring the performance metrics of the container. Metric scaling can help you automatically adjust the total resources used by the workload when business load fluctuates, scaling out to relieve pressure during high loads and scaling in to save resources during low loads. For more information, see Use Container Horizontal Pod Autoscaling (HPA).

Scheduled Scaling

Trigger workload scaling at regular intervals, suitable for scenarios where business load has periodic changes, such as social media's periodic traffic peaks after lunch and dinner. For more information, see Use Container Cron Horizontal Pod Autoscaling (CronHPA).

Scheduling Settings

Upgrade Method

The mechanism used by the workload to replace old pods with new pods when the pod configuration changes.

  • Rolling Upgrade: Replaces a portion of the pods at a time, and only performs the next replacement after the new pods are successfully running. This method ensures that the service is not interrupted, but users may access different versions of pods simultaneously.

  • Recreate Upgrade: Replaces all pods at once, which may cause service interruption but ensures consistency of all pod versions.
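In YAML, the upgrade method corresponds to the Deployment's strategy field. A sketch of a rolling upgrade that replaces at most one pod at a time:

```yaml
spec:
  strategy:
    type: RollingUpdate   # or Recreate to replace all pods at once
    rollingUpdate:
      maxUnavailable: 1   # at most one pod unavailable during the upgrade
      maxSurge: 1         # at most one extra pod created above the desired count
```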

  • Node Affinity

  • Pod Affinity

  • Pod Anti-affinity

  • Scheduling Toleration

Affinity, anti-affinity, and toleration settings control pod scheduling, for example, whether pods run on specific nodes. Scheduling is relatively complex and requires you to plan in advance based on your needs. For detailed operations, see Scheduling.
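A sketch of node affinity and a scheduling toleration. The node label workload-type and the taint dedicated=web are hypothetical examples:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: workload-type   # hypothetical node label
            operator: In
            values: ["web"]
  tolerations:
  - key: "dedicated"             # hypothetical taint applied to the target nodes
    operator: "Equal"
    value: "web"
    effect: "NoSchedule"
```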

Labels And Annotations

Pod Labels

Add labels (Label) to each pod belonging to the workload. Various resources in the cluster, including workloads and services, will match with pods through labels. ACK adds a label in the format of app: (application name) to the pod by default.

Pod Annotations

Add annotations (Annotation) to each pod belonging to the workload. Some features in ACK will use annotations, and you can edit them when using these features.

Workload YAML Example

apiVersion: apps/v1
kind: Deployment    # Workload type
metadata:
  name: nginx-test
  namespace: default  # Change the namespace as needed
  labels:
    app: nginx
spec:
  replicas: 2  # Specify the number of pods
  selector:
    matchLabels:
      app: nginx
  template: # Pod configuration
    metadata:
      labels: # Pod labels
        app: nginx 
      annotations: # Pod annotations
        description: "This is an application deployment"
    spec:
      containers:
      - name: nginx  # Image name
        image: nginx:1.7.9  # Use a specific version of the Nginx image
        ports:
        - name: nginx  # name
          containerPort: 80  # Port exposed by the container
          protocol: TCP  # Specify the protocol as TCP/UDP, default is TCP
        command: ["/bin/sh"]  # Container startup items
        args: [ "-c", "echo $(SPECIAL_LEVEL_KEY) $(SPECIAL_TYPE_KEY) && exec nginx -g 'daemon off;'"] # Output variables, add command to start nginx
        stdin: true  # Enable standard input
        tty: true    # Allocate a virtual terminal
        env:
          - name: SPECIAL_LEVEL_KEY
            valueFrom:
              configMapKeyRef:
                name: special-config  # Name of the configuration item
                key: SPECIAL_LEVEL    # Key name of the configuration item
        securityContext:
          privileged: true  # true to enable privileged mode, false to disable privileged mode, default is false
        resources:
          limits:
            cpu: "500m"               # Maximum CPU usage, 500 millicores
            memory: "256Mi"           # Maximum memory usage, 256 MiB
            ephemeral-storage: "1Gi"  # Maximum ephemeral storage usage, 1 GiB
          requests:
            cpu: "200m"               # Minimum requested CPU usage, 200 millicores
            memory: "128Mi"           # Minimum requested memory usage, 128 MiB
            ephemeral-storage: "500Mi" # Minimum requested ephemeral storage usage, 500 MiB
        livenessProbe:  # Liveness probe configuration
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:  # Readiness probe configuration
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: tz-config
        hostPath:
          path: /etc/localtime  # Mount the /etc/localtime file from the host to the same path inside the container through the volumeMounts and volumes fields.
---
# service
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-svc
  namespace: default  # Change the namespace as needed
  labels:
    app: nginx
spec:
  selector:
    app: nginx  # Match label to ensure the service points to the correct pods
  ports:
    - port: 80           # Port provided by the service within the cluster
      targetPort: 80     # Port that the internal application listens to (containerPort)
      protocol: TCP      # Protocol, default is TCP
  type: ClusterIP        # Service type, default is ClusterIP, internal access
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default  # Change the namespace as needed
  annotations:
    kubernetes.io/ingress.class: "nginx"  # Specify the type of Ingress controller
    # If using Alibaba Cloud SLB Ingress controller, you can specify as follows:
    # service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxxxx"
    # service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: "slb.spec.s1.small"
spec:
  rules:
    - host: foo.bar.com  # Replace with your domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-test-svc  # Backend service name, matching the Service above
                port:
                  number: 80         # Backend service port
  tls:  # Optional, used to enable HTTPS
    - hosts:
        - foo.bar.com  # Replace with your domain name
      secretName: tls-secret  # TLS certificate secret name

References