Create a stateless application by using a Deployment

Updated at: 2025-03-05 06:17

A Deployment is one of the most commonly used workload types in Kubernetes and is also known as a stateless workload. It ensures that a specified number of pod replicas keep running in the cluster in the state that you declare. This topic describes how to create a stateless application in a Container Service for Kubernetes (ACK) cluster by using the ACK console or kubectl.

Before you begin

Before you create a workload, we recommend that you familiarize yourself with the basic knowledge and usage notes of workloads. For more information, see Workloads. This topic is divided into the following two parts:

  • Create a Deployment: This part describes how to create a Deployment by using the ACK console or kubectl.

  • Parameters: This part describes the configuration items in the console and provides a YAML template for use with kubectl.

Create a Deployment

Use the ACK console

Important

The following section describes a simplified workload creation process. You can refer to this process to quickly deploy and validate workloads. After you are familiar with the basic operations, you can configure custom workloads. For more information, see Parameters.

  1. Configure basic information for the application

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters. On the Clusters page, click the name of the cluster that you want to manage. In the left-side navigation pane of the cluster details page, choose Workloads > Deployments. On the Deployments page, click Create from Image in the upper-right corner.


    2. On the Basic Information wizard page, configure the basic settings of the application. Click Next to go to the Container page.


  2. Configure a container

    In the Container section, configure the Image Name and Port parameters. The other parameters are optional and can keep their default settings. Click Next to go to the Advanced wizard page. The following sample container image is used in this example.

    Important

    Before pulling this image, you need to enable Internet access for the cluster. If you keep the default value for the Configure SNAT for VPC parameter when you create a cluster, the cluster can access the Internet. For more information about how to enable Internet access for an existing cluster, see Enable an existing ACK cluster to access the Internet.

    registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest


  3. Configure advanced settings

    On the Advanced wizard page, configure the following settings: access control, scaling, scheduling, labels, and annotations. In the Access Control section, configure the method to expose backend pods. In the dialog box that appears, configure the parameters, and click OK. Then, click Create.

    Important

    In this step, a LoadBalancer Service is created to expose the workload. You are charged for the CLB instance. For more information about the billing items, see Pay-as-you-go. If you no longer need the CLB instance, release it promptly to avoid additional fees.


  4. View the application

    On the Complete wizard page, you can view the created application. Click View Details below Creation Task Submitted. On the application details page, click the Access Method tab, find the nginx-test-svc Service, and click the hyperlink in the External Endpoint column to open the cube game page served by the sample image.


    You can view, edit, and redeploy the created workload in the console.

Use kubectl

Important

Before you create a workload, make sure that you have connected to the cluster by using kubectl. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

  1. Copy the following content to a file named deployment.yaml. The YAML file defines a Deployment and a Service of the LoadBalancer type that exposes the Deployment.

    apiVersion: apps/v1
    kind: Deployment    # The type of the workload.
    metadata:
      name: nginx-test
      namespace: default  # Change the namespace based on your business requirements.
      labels:
        app: nginx
    spec:
      replicas: 2  # Specify the number of pods.
      selector:
        matchLabels:
          app: nginx
      template: # Pod configurations.
        metadata:
          labels: # Pod labels.
            app: nginx 
        spec:
          containers:
          - name: nginx  # The name of the container.
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6  # Specify the version of the NGINX image.
            ports:
            - containerPort: 80  # The port exposed by the container.
              protocol: TCP  # Set the protocol to TCP or UDP. The default protocol is TCP.
    ---
    # service
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-test-svc
      namespace: default  # Change the namespace based on your business requirements.
      labels:
        app: nginx
    spec:
      selector:
        app: nginx  # Match the pod label to ensure that the Service routes traffic to the correct pods.
      ports:
        - port: 80           # The port provided by the Service in the cluster.
          targetPort: 80     # The port on which the application in the container listens (containerPort).
          protocol: TCP      # The protocol. Default value: TCP.
      type: LoadBalancer      # The type of the Service. LoadBalancer exposes the Service through a CLB instance so that the application can be accessed over the Internet.
  2. Run the following command to create the Deployment and Service:

    kubectl apply -f deployment.yaml

    Expected output:

    deployment.apps/nginx-test created
    service/nginx-test-svc created
  3. Run the following command to query the public IP address of the Service:

    kubectl get svc

    Expected output:

    NAME            TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
    kubernetes      ClusterIP      172.16.**.***    <none>          443/TCP        4h47m
    nginx-test-svc  LoadBalancer   172.16.**.***    106.14.**.***   80:31130/TCP   1h10m
  4. Enter the public IP address (106.14.**.***) of the Service in the address bar of your browser to access the NGINX application. You can also verify access from the command line, as shown in the following sketch.
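
    The following is a minimal verification sketch and is not part of this topic's expected output. Replace the placeholder address with the EXTERNAL-IP value returned by kubectl get svc:

    # Send a request for the response headers only.
    curl -I http://106.14.**.***

    If the Service is reachable, NGINX returns an HTTP 200 response header.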

Parameters

Parameters on the ACK console

Basic information


Parameter

Description


Name

The name of the workload. The name of the pod to which the workload belongs is generated based on this parameter.

Namespace

The namespace to which the workload belongs.

Replicas

The number of pods that are provisioned for the workload. Default value: 2.

Type

The type of the workload. For more information about how to select a workload, see Deploy a workload.

Label

The label of the workload.

Annotations

The annotations of the workload.

Synchronize Timezone

Specify whether to synchronize the time zone between nodes and containers.

Container

General


Parameter

Description

Image Name

  • Select images

    You can click Select images and select an image. The following types of images are supported:

    • Container Registry Enterprise Edition: Select an image stored on a Container Registry Enterprise Edition instance. You must select the region and the Container Registry instance to which the image belongs. For more information about Container Registry, see What is Container Registry?

    • Container Registry Personal Edition: Select an image stored in a Container Registry Personal Edition instance. You must select the region and the Container Registry instance to which the image belongs.

    • Artifact Center: Common images provided by Alibaba Cloud and the OpenAnolis community. To use images in the Artifact Center, the cluster must be able to access the Internet. For more information about the Artifact Center, see Overview of the artifact center.

    If you use an image from another image repository, you can enter the image address in the domainname/namespace/imagename:tag format. If you do not specify the domainname field, the image is pulled from Docker Hub. Example: nginx:1.7.9.

  • Image Pull Policy

    You can select the following image pulling policies:

    • IfNotPresent: If the image already exists on the node, the local image is used. Otherwise, ACK pulls the image from the corresponding repository.

    • Always: ACK pulls the image from Container Registry each time the application is deployed or scaled out.

    • Never: ACK uses only images that already exist on the node and never pulls images. If no matching image exists on the node, the container fails to start.

  • Set Image Pull Secret

    If you use Container Registry or a third-party image repository, you may need to configure a Secret to pull images. A minimal sketch is provided after the following note.

    Note

    You can pull images without using Secrets from Container Registry Enterprise Edition instances. For more information, see Use the aliyun-acr-credential-helper component to pull images without using a secret.
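
    The following is a minimal sketch of how an image pull Secret is typically created and referenced. The Secret name regcred, the repository address, and the credentials are placeholders, not values from this topic:

    # Create a docker-registry Secret that stores the repository credentials.
    kubectl create secret docker-registry regcred \
      --docker-server=registry.example.com \
      --docker-username=<username> \
      --docker-password=<password> \
      --namespace=default

    # Reference the Secret in the pod template of the workload.
    spec:
      imagePullSecrets:
      - name: regcred  # The name of the Secret created by the preceding command.
      containers:
      - name: nginx
        image: registry.example.com/<namespace>/nginx:1.7.9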

Resource Limit

The resources.limits of the container, which specify the maximum amount of resources that the container can use. For more information, see Requests and limits.

Required Resources

The resources.requests of the container, which specify the amount of resources reserved for the container. For more information, see Requests and limits.

Container Start Parameter

  • stdin: specifies that start parameters are sent to the container as standard input (stdin).

  • tty: specifies that start parameters defined in a virtual terminal are sent to the container.

The two options are usually used together. In this case, the virtual terminal (tty) is associated with the stdin of the container. For example, an interactive program receives the stdin from the user and displays the content in the terminal.

Privileged Container

  • If you select Privileged Container, privileged=true is set for the container and the privilege mode is enabled.

  • If you do not select Privileged Container, privileged=false is set for the container and the privilege mode is disabled.

The privilege mode allows a container to have permissions similar to those of the operating system of the worker node, such as the permissions to access hardware devices and mount file systems.

Init Containers

Select this option to create an init container.

Init containers can be used to block or postpone the startup of application containers. The application containers in a pod start concurrently only after all init containers have run to completion. For example, you can use init containers to verify the availability of a service on which the application depends. You can also run tools or scripts that are not provided by the application image in init containers to initialize the runtime environment for application containers, for example, to configure kernel parameters or generate configuration files. For more information, see Init containers. A minimal sketch follows this description.
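
The following is a minimal sketch of an init container that delays application startup until a dependent Service becomes resolvable. The Service name my-db and the busybox image are illustrative assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-with-init
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-with-init
  template:
    metadata:
      labels:
        app: app-with-init
    spec:
      initContainers:
      - name: wait-for-db  # Must run to completion before the application container starts.
        image: busybox:1.36
        command: ['sh', '-c', 'until nslookup my-db; do echo waiting for my-db; sleep 2; done']
      containers:
      - name: app
        image: nginx:1.7.9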

Ports

image

Parameter

Description

Name

The name of the container port. The name is used to distinguish ports and can be referenced by name elsewhere, for example, in the targetPort field of a Service.

Container Port

The number or name of the container port that you want to expose. Valid values of the port number: 1 to 65535. A container must expose its ports to communicate with other containers in the pod.

All containers in a pod share the network protocol stack of the pod. Therefore, when you configure multiple containers in a pod, the ports cannot be duplicated.

Protocol

Container ports use Layer 4 protocols. Valid values: TCP and UDP.

Environments

image

Parameter

Description

Type

The type of the environment variable that you want to add. Valid values:

  • Custom

    Use env to hardcode environment variables in your workload.

  • ConfigMaps

    Use envFrom to obtain non-sensitive configuration data stored in a ConfigMap.

  • Secrets

    Use envFrom to obtain sensitive information, such as passwords and API keys, stored in a Secret.

  • Value/ValueFrom

    Use value/valueFrom to get additional environment variables or predefined values.

  • ResourceFieldRef

    Use resourceFieldRef to reference the resource requests and limits of containers in the pod and pass them to the container as environment variables.

If you select ConfigMaps or Secrets, you can pass all data in the selected ConfigMap or Secret to the container as environment variables. In this example, Secrets is selected: select Secrets from the Type drop-down list and select a Secret from the Value/ValueFrom drop-down list. By default, all data in the selected Secret is passed to the container as environment variables.

In this case, the YAML file that is used to deploy the application contains settings that reference all data in the selected Secret.
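
The following is a minimal sketch of such an envFrom reference. The Secret name my-secret is an illustrative assumption:

containers:
- name: nginx
  image: nginx:1.7.9
  envFrom:
  - secretRef:
      name: my-secret  # All key-value pairs in the Secret are injected as environment variables.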

If you select ResourceFieldRef, the resourceFieldRef parameter references resource values from the pod specification and passes them to the container as environment variables.

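The following is a minimal sketch that uses the Kubernetes downward API. The container name and resource values are illustrative:

containers:
- name: nginx
  image: nginx:1.7.9
  resources:
    requests:
      cpu: 200m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 256Mi
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        containerName: nginx   # The container whose resource value is referenced.
        resource: limits.cpu   # Pass the CPU limit of the container to the environment variable.
  - name: MEMORY_REQUEST
    valueFrom:
      resourceFieldRef:
        containerName: nginx
        resource: requests.memory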

Variable Key

The name of the environment variable in the pod.

Value/ValueFrom

The value of an environment variable or a value obtained from another source.

Health Check


Parameter

Description

Liveness: Check whether the container is running as expected. If not, restart the container.

Request type: HTTP

Sends an HTTP request to the container to periodically check whether the container is running as expected.

  • Protocol: HTTP or HTTPS.

  • Path: the requested HTTP path on the server.

  • Port: the number or name of the port exposed by the container. The port number must be from 1 to 65535.

  • HTTP Header: the custom headers in the HTTP request. Duplicate headers are allowed. You can specify HTTP headers in key-value pairs.

  • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time before the first probe is performed after the container is started. Default value: 3. Unit: seconds.

  • Period (s): the periodSeconds field in the YAML file. This field specifies the interval at which probes are performed. Default value: 10. Minimum value: 1. Unit: seconds.

  • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time after which a probe times out. Default value: 1. Minimum value: 1.

  • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

  • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

Request type: TCP

Performs a TCP check on the container. The kubelet attempts to open a socket to the container on the specified port. If the connection can be established, the container is considered healthy. Otherwise, the container is considered unhealthy.

  • Port: the number or name of the port exposed by the container. The port number must be from 1 to 65535.

  • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time before the first probe is performed after the container is started. Default value: 15. Unit: seconds.

  • Period (s): the periodSeconds field in the YAML file. This field specifies the interval at which probes are performed. Default value: 10. Minimum value: 1. Unit: seconds.

  • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time after which a probe times out. Default value: 1. Minimum value: 1.

  • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

  • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

Request type: command

Runs a probe command in the container to check the health status of the container.

  • Command: the probe command that is run to check the health status of the container.

  • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time before the first probe is performed after the container is started. Default value: 5. Unit: seconds.

  • Period (s): the periodSeconds field in the YAML file. This field specifies the interval at which probes are performed. Default value: 10. Minimum value: 1. Unit: seconds.

  • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time after which a probe times out. Default value: 1. Minimum value: 1.

  • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

  • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

Readiness: Readiness probes are used to determine whether a container is ready to receive traffic.

Startup: Startup probes are used to check whether applications in a container are started.

Note

Startup probes are supported only in Kubernetes 1.18 and later.
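
For reference, the following is a minimal sketch that combines a startup probe with a command (exec) liveness probe. The probed path, the file name, and the thresholds are illustrative assumptions:

containers:
- name: nginx
  image: nginx:1.7.9
  startupProbe:            # Gives a slow-starting application up to 30 x 10 seconds to start.
    httpGet:
      path: /
      port: 80
    failureThreshold: 30
    periodSeconds: 10
  livenessProbe:
    exec:
      command: ["cat", "/tmp/healthy"]  # The probe succeeds if the command exits with code 0.
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3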

Lifecycle


Parameter

Description

Start

Specify the command and parameters that are run when the container starts. The startup command and parameters define the operations that are performed when a container starts and are used to initialize application services. This setting is suitable for application deployment scenarios that require specific environment variables, mount points, or port mappings.

Post Start

Specify a command that takes effect after the container starts. The post-start command is used to perform specific tasks after a container is started, such as initializing configurations and running scripts. This setting is suitable for scenarios where preparations must be completed before the application begins to serve requests.

Pre Stop

Specify a command that takes effect before the container stops. The pre-stop command is used to close application processes in containers to ensure data consistency and normal service termination. This parameter is suitable for scenarios where data loss or service exceptions need to be prevented.

You can specify the following parameters to configure the lifecycle of the container: Start, Post Start, and Pre Stop. For more information, see Configure the lifecycle of a container.
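
The following is a minimal sketch of the Post Start and Pre Stop hooks. The commands are illustrative assumptions:

containers:
- name: nginx
  image: nginx:1.7.9
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "echo started > /tmp/started"]  # Runs right after the container is created.
    preStop:
      exec:
        command: ["/bin/sh", "-c", "nginx -s quit; sleep 5"]  # Gracefully stops NGINX before the container is terminated.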

Volume

Parameter

Description

Add Local Storage

Mount a local storage volume of the node to the pod. Data in a local storage volume is stored on the node and becomes unavailable if the node is shut down. Secrets, ConfigMaps, and other temporary volume types are also supported as local storage. The storage feature is complex. Before you use volumes, we recommend that you read Storage to get a basic understanding of storage in ACK.

Add PVC

Mount cloud storage volumes to the pod for persistent storage of important data in containers. Cloud storage volumes are remote storage services located outside the cluster. They are completely independent of worker nodes and are not affected by node changes. In ACK, cloud storage volumes include disks, NAS, OSS, and other storage services provided by Alibaba Cloud. The storage feature is complex. Before you use volumes, we recommend that you read Storage to get a basic understanding of storage in ACK.
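
The following is a minimal sketch of mounting a PVC. It assumes that a PersistentVolumeClaim named my-pvc already exists in the same namespace:

spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    volumeMounts:
    - name: data
      mountPath: /data  # The path in the container where the volume is mounted.
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc  # The name of the existing PVC.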

Log

Collection Configuration

  • Logstore: creates a Logstore in the Simple Log Service (SLS) project associated with the cluster to store the collected logs. Before you use logs, we recommend that you get a basic understanding of logs in ACK. For more information, see Log management.

  • Log Path in Container: the log path in the container to be collected. If this parameter is set to Stdout, the standard output logs of the container are collected.

Custom Tag

Add custom tags to the logs collected from the container. Custom tags make it easier to filter, query, and analyze the collected logs. For more information, see Log management.

Advanced settings

Section

Parameter

Description


Access Control

Service

A Service provides a fixed and unified Layer 4 (transport layer) entry for a group of pods. It is a resource that must be configured when a workload is exposed. Multiple Service types are supported. Valid values: Cluster IP, Node Port, and SLB. Before you configure a Service, get a basic understanding of Services. For more information, see Service management.

Ingress

Ingresses provide Layer 7 (application layer) entry for multiple Services in a cluster and forward requests to different Services based on domain names. Before you use an Ingress, you must install an Ingress controller. ACK provides multiple options for different scenarios. For more information, see Comparison among Nginx Ingresses, ALB Ingresses, and MSE Ingresses.

Scaling

HPA

Auto scaling is triggered based on the performance metrics of containers, such as CPU and memory usage. Horizontal Pod Autoscaler (HPA) can help you automatically adjust the number of pods when your workloads fluctuate. You can scale out to relieve pressure during peak hours and scale in to save resources during off-peak hours. For more information, see Implement HPA. A minimal HPA sketch is provided at the end of this section.

CronHPA

You can configure this parameter to trigger workload scaling at a scheduled time. This method is suitable for scenarios where workloads change periodically. For example, social media has periodic traffic peaks after lunch and after dinner. For more information, see Implement CronHPA.

Scheduling

Update Method

The mechanism by which workloads replace old pods with new pods when the pod configuration changes.

  • Rolling Update: Only some pods are replaced at a time. The next batch is replaced after the new pods run successfully. This method ensures that the service is not interrupted, but users may access pods of different versions at the same time.

  • On-Delete: Replaces all pods at once. This may cause service interruptions, but ensures the consistency of all pods.

  • Node Affinity

  • Pod Affinity

  • Pod Anti-affinity

  • Toleration

Affinity, anti-affinity, and tolerations control how pods are scheduled, for example, whether a pod runs on a specific node or is placed together with or apart from specific pods. Scheduling is complex and needs to be planned in advance. For more information, see Scheduling.

Labels, Annotations

Pod Labels

Add a label to each pod that belongs to the workload. All resources in the cluster, including workloads and Services, are matched with pods by using labels. By default, ACK adds a label in the format of app: <application name> to pods.

Pod Annotations

Add an annotation to each pod to which the workload belongs. Some features in ACK use annotations. You can configure the annotations when you use these features.
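
For the Scaling section above, the following is a minimal HPA sketch that targets the nginx-test Deployment from the example below. The replica bounds and the CPU utilization target are illustrative assumptions:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-test-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-test  # The Deployment to scale.
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # Add pods when the average CPU utilization exceeds 70%.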

Workload YAML example

apiVersion: apps/v1
kind: Deployment    # The type of the workload.
metadata:
  name: nginx-test
  namespace: default  # Change the namespace based on your business requirements.
  labels:
    app: nginx
spec:
  replicas: 2  # Specify the number of pods.
  selector:
    matchLabels:
      app: nginx
  template: # Pod configurations.
    metadata:
      labels: # Pod labels.
        app: nginx 
      annotations: # Pod annotations.
        description: "This is an application deployment"
    spec:
      containers:
      - name: nginx  # The name of the container.
        image: nginx:1.7.9  # Specify the version of the NGINX image.
        ports:
        - name: nginx  # The name of the port.
          containerPort: 80  # The port exposed by the container.
          protocol: TCP  # Set the protocol to TCP or UDP. The default protocol is TCP.
        command: ["/bin/sh"]  # The container startup command.
        args: [ "-c", "echo $(SPECIAL_LEVEL_KEY) && exec nginx -g 'daemon off;'"]  # Print the environment variable and start NGINX.
        stdin: true  # Enable standard input.
        tty: true    # Assign a virtual terminal.
        env:
          - name: SPECIAL_LEVEL_KEY
            valueFrom:
              configMapKeyRef:
                name: special-config  # The name of the ConfigMap. 
                key: SPECIAL_LEVEL    # The key name of the ConfigMap.
        securityContext:
          privileged: true  # Set the parameter to true to enable privileged mode. Set the parameter to false to disable privileged mode. Default value: false.
        resources:
          limits:
            cpu: "500m"               # The maximum CPU usage, which is set to 500 millicores.
            memory: "256Mi"           # The maximum memory usage, which is set to 256 MiB.
            ephemeral-storage: "1Gi"  # The maximum temporary storage usage, which is set to 1 GiB.
          requests:
            cpu: "200m"               # The minimum requested CPU usage, which is set to 200 millicores.
            memory: "128Mi"           # The minimum requested memory usage, which is set to 128 MiB.
            ephemeral-storage: "500Mi" # The minimum temporary storage usage requested, which is set to 500 MiB.
        livenessProbe:  # Configure container liveness probes.
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:  # Configure container readiness probes.
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        volumeMounts:
        - name: tz-config
          mountPath: /etc/localtime
          readOnly: true
      volumes:
      - name: tz-config
        hostPath:
          path: /etc/localtime  # Mount the /etc/localtime file of the host to the same path in the container by using the volumeMounts and volumes fields.
---
# service
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-svc
  namespace: default  # Change the namespace based on your business requirements.
  labels:
    app: nginx
spec:
  selector:
    app: nginx  # Match the pod label to ensure that the Service routes traffic to the correct pods.
  ports:
    - port: 80           # The port provided by the Service in the cluster.
      targetPort: 80     # The port on which the application in the container listens (containerPort).
      protocol: TCP      # The protocol. Default value: TCP.
  type: ClusterIP        # The type of the Service. Default value: ClusterIP. This Service can only be accessed by other Services or pods within the cluster.
---
# ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default  # Change the namespace based on your business requirements.
  annotations:
    kubernetes.io/ingress.class: "nginx"  # Specify the Ingress controller type.
    # If you use the SLB Ingress controller, you can specify the following parameters:
    # service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxxxx"
    # service.beta.kubernetes.io/alibaba-cloud-loadbalancer-spec: "slb.spec.s1.small"
spec:
  rules:
    - host: foo.bar.com  # Replace this parameter with your domain name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-test-svc  # The name of the backend Service defined in this example.
                port:
                  number: 80         # The backend service port.
  tls:  # Optional. This parameter is used to enable HTTPS.
    - hosts:
        - foo.bar.com  # Replace this parameter with your domain name
      secretName: tls-secret  # The secret name of the TLS certificate.
