
Container Compute Service:Create a stateless application by using a Deployment

Last Updated:Oct 22, 2024

Container Compute Service (ACS) allows you to create stateless applications by using an image, a YAML template, or kubectl. This topic describes how to create a stateless NGINX application in an ACS cluster.

Create Deployments

Create a Deployment from an image

Step 1: Configure basic settings

  1. Log on to the ACS console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its ID. In the left-side pane, choose Workloads > Deployments.

  3. On the Deployments tab, click Create from Image.

  4. On the Basic Information wizard page, configure the basic settings of the application.

    Name: The name of the application.

    Replicas: The number of pods that you want to provision for the application. Default value: 2.

    Type: The type of the resource object. In this example, Deployment is selected.

    Label: Add labels to the application. The labels are used to identify the application.

    Annotations: Add annotations to the application.

    Instance Type: The instance type that you want to use. For more information, see ACS pod overview.

    QoS Type: The QoS class of the application. If you set Instance Type to general-purpose, you can set QoS Type to default or best-effort. If you set Instance Type to performance, you can set QoS Type only to default. For more information, see Compute QoS.
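In a YAML manifest, the Instance Type and QoS Type settings correspond to pod labels. A minimal sketch of the pod template metadata, using the label keys from the sample template later in this topic:

```yaml
# Pod template labels that select the instance type and QoS class.
# general-purpose with default QoS is one valid combination.
template:
  metadata:
    labels:
      app: nginx
      alibabacloud.com/compute-class: general-purpose  # Instance Type
      alibabacloud.com/compute-qos: default            # QoS Type
```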

  5. Click Next to go to the Container wizard page.

Step 2: Configure containers

On the Container wizard page, configure the container image, resources, ports, environment variables, health checks, lifecycle, volumes, and logs.

Note

Click Add Container to the right of the Container1 tab to add more containers.

  1. In the General section, configure the basic container settings.

    Image Name:

    • Click Select images and select a container image.

      • Container Registry Enterprise Edition: Select an image stored in a Container Registry Enterprise Edition instance. You must select the region and the Container Registry instance to which the image belongs. For more information about Container Registry, see What is Container Registry?

      • Container Registry Personal Edition: Select an image stored in a Container Registry Personal Edition instance. Make sure that Container Registry Personal Edition is already activated. You must select the region and the Container Registry instance to which the image belongs.

      • Artifact Center: The artifact center contains base operating system images, base language images, and AI- and big data-related images for application containerization. In this example, an NGINX image is selected. For more information, see Overview of the artifact center.

    • Image pull policy: ACS supports only the Always policy. The image is pulled from Container Registry each time you deploy an application or scale out the cluster, instead of from the local cache.

    • Click Set Image Pull Secret to set a Secret that is used to pull the private image.

    CPU: The CPU request and CPU limit of the container. By default, the CPU request equals the CPU limit. CPU resources are billed on a pay-as-you-go basis. If you use a YAML template to set a resource limit that differs from the resource request, the resource request is automatically overridden to the value of the resource limit. For more information, see Resource specifications.

    Memory: The memory request and memory limit of the container. By default, the memory request equals the memory limit. Memory resources are billed on a pay-as-you-go basis. The same override rule applies: if the limit set in a YAML template differs from the request, the request is set to the value of the limit. For more information, see Resource specifications.
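In YAML, these settings correspond to the container's resources block. A minimal sketch in which the request equals the limit, as described above (the specific values are illustrative):

```yaml
# Container resources: ACS keeps the request equal to the limit.
resources:
  requests:
    cpu: "1"
    memory: 2Gi
  limits:
    cpu: "1"      # A differing request would be overridden to this value.
    memory: 2Gi
```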

    Container Start Parameter:

    • stdin: specifies that start parameters are sent to the container as standard input (stdin).

    • tty: specifies that start parameters defined in a virtual terminal are sent to the container.

    The two options are usually used together. In this case, the virtual terminal (tty) is associated with the stdin of the container. For example, an interactive program receives the stdin from the user and displays the content in the terminal.

    Init Containers: If you select Init Containers, an init container is created. Init containers provide a mechanism to block or delay the startup of application containers: the application containers in a pod are started in parallel only after all init containers complete. Init containers can contain utilities or setup scripts that are not included in an application image. Therefore, init containers can be used to initialize the runtime environment of application containers. For example, you can use init containers to configure kernel parameters or generate configuration files. For more information, see Init Containers.
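For example, an init container that generates a configuration file before the application container starts can be declared as follows (a hypothetical sketch; the busybox image, the command, and the volume names are illustrative):

```yaml
spec:
  initContainers:
  - name: setup
    image: busybox            # Illustrative utility image.
    command: ["sh", "-c", "echo 'generated at startup' > /work/app.conf"]
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: nginx
    image: nginx:1.7.9        # Starts only after the init container completes.
    volumeMounts:
    - name: workdir
      mountPath: /etc/app
  volumes:
  - name: workdir
    emptyDir: {}
```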

  2. Optional: In the Ports section, you can click Add to add container ports.

    Name: Enter a name for the container port.

    Container Port: Specify the container port that you want to expose. The port number must be from 1 to 65535.

    Protocol: Valid values: TCP and UDP.

  3. Optional: In the Environments section, you can click Add to add environment variables.

    You can add environment variables in key-value pairs to a pod in order to add environment labels or pass configurations. For more information, see Expose Pod Information to Containers Through Environment Variables.

    Type: The type of the environment variable. Valid values:

    • Custom

    • ConfigMaps

    • Secrets

    • Value/ValueFrom

    • ResourceFieldRef

    If you select ConfigMaps or Secrets, you can pass all data in the selected ConfigMap or Secret to the container environment variables. In this example, Secrets is selected: select Secrets from the Type drop-down list and select a Secret from the Value/ValueFrom drop-down list. By default, all data in the selected Secret is passed to the environment variables, and the YAML file that is used to deploy the application contains the settings that reference all data in the selected Secret.

    Variable Key: The name of the environment variable.

    Value/ValueFrom: The value of the environment variable.
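The console setting that passes all data in a Secret to the container corresponds to an envFrom entry in the container spec. A minimal sketch (the Secret name my-secret and the custom variable are illustrative):

```yaml
containers:
- name: nginx
  image: nginx:1.7.9
  envFrom:
  - secretRef:
      name: my-secret   # All keys in this Secret become environment variables.
  env:
  - name: CUSTOM_VAR    # A Custom variable: Variable Key and Value.
    value: "hello"
```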

  4. Optional: In the Health Check section, you can enable liveness probes, readiness probes, and startup probes on demand.

    For more information, see Configure Liveness, Readiness and Startup Probes.

    You can configure the following probe types:

    • Liveness: Liveness probes are used to determine when to restart a container.

    • Readiness: Readiness probes are used to determine whether a container is ready to receive traffic.

    • Startup: Startup probes are used to determine whether the application in a container has started. Liveness and readiness probes are disabled until the startup probe succeeds.

    Each probe uses one of the following request types:

    HTTP: Sends an HTTP GET request to the container. You can set the following parameters:

    • Protocol: the protocol over which the request is sent. Valid values: HTTP and HTTPS.

    • Path: the requested HTTP path on the server.

    • Port: the number or name of the port exposed by the container. The port number must be from 1 to 65535.

    • HTTP Header: the custom headers in the HTTP request. Duplicate headers are allowed. You can specify HTTP headers in key-value pairs.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the waiting time (in seconds) before the first probe is performed after the container is started. Default value: 3.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

    TCP: Performs a TCP check. The kubelet attempts to open a socket to the container on the specified port. If the connection can be established, the container is considered healthy. Otherwise, the container is considered unhealthy. You can configure the following parameters:

    • Port: the number or name of the port exposed by the container. The port number must be from 1 to 65535.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time (in seconds) before the first probe is performed after the container is started. Default value: 15.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

    Command: Runs a probe command in the container to check the health status of the container. You can configure the following parameters:

    • Command: the probe command that is run to check the health status of the container.

    • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time (in seconds) before the first probe is performed after the container is started. Default value: 5.

    • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

    • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1.

    • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

    • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.
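The probe settings above map to the probe fields of a container spec. A minimal sketch that combines an HTTP liveness probe and a command readiness probe, using the default values described above (the readiness command is illustrative):

```yaml
containers:
- name: nginx
  image: nginx:1.7.9
  livenessProbe:
    httpGet:
      path: /             # Path
      port: 80            # Port
      scheme: HTTP        # Protocol
    initialDelaySeconds: 3
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1   # Must be 1 for liveness probes.
    failureThreshold: 3
  readinessProbe:
    exec:
      command: ["cat", "/tmp/ready"]  # Illustrative probe command.
    initialDelaySeconds: 5
    periodSeconds: 10
```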

  5. Optional: In the Lifecycle section, you can configure the lifecycle of the container.

    You can specify the following parameters to configure the lifecycle of the container: Start, Post Start, and Pre Stop. For more information, see Attach Handlers to Container Lifecycle Events.

    Start: Specify the command and parameters that are run when the container starts.

    Post Start: Specify a command that runs immediately after the container starts.

    Pre Stop: Specify a command that runs before the container is terminated, for example, to gracefully shut down the application.
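In YAML, these settings map to the container command and args fields and to the lifecycle hooks. A minimal sketch (the commands are illustrative):

```yaml
containers:
- name: nginx
  image: nginx:1.7.9
  command: ["nginx"]            # Start: command run when the container starts.
  args: ["-g", "daemon off;"]   # Start: its parameters.
  lifecycle:
    postStart:
      exec:
        command: ["sh", "-c", "echo started > /tmp/started"]  # Post Start
    preStop:
      exec:
        command: ["nginx", "-s", "quit"]  # Pre Stop: graceful shutdown.
```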

  6. Optional: In the Volume section, you can add local volumes or Persistent Volume Claims (PVCs).

    Add Local Storage: Local volumes include ConfigMaps, Secrets, and EmptyDirs. Local volumes mount the specified data sources to containers. For more information, see Volumes.

    Add PVC: You can mount persistent volumes (PVs) by using persistent volume claims (PVCs). You must create a PVC before you can select the PVC to mount a PV.

    Add NAS File System: You can add PVs that use NAS file systems. Before you start, you must log on to the NAS console and create a container mount target. For more information, see NAS volumes.

    In this example, a PVC named disk-ssd is mounted to the /tmp path of the container.
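The PVC mount described above corresponds to the following YAML (a sketch; the PVC disk-ssd must already exist in the cluster):

```yaml
containers:
- name: nginx
  image: nginx:1.7.9
  volumeMounts:
  - name: data
    mountPath: /tmp            # Mount path in the container.
volumes:
- name: data
  persistentVolumeClaim:
    claimName: disk-ssd        # The existing PVC.
```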

  7. In the Log section, you can specify logging configurations and add custom tags to the collected log.

    Collection Configuration:

    • Logstore: Create a Logstore in Simple Log Service to store the collected log data.

    • Log Path in Container: Specify stdout or a container path to collect log data. If you specify stdout, the standard output of the container is collected. If you specify a text log path, the logs in that path of the container are collected. In this example, /var/log/nginx is specified as the path. Wildcard characters can be used in the path.

    Custom Tag: You can add custom tags in key-value pairs. The tags are added to the logs of the container when the logs are collected, which facilitates log analysis and filtering.

  8. Click Next to go to the Advanced wizard page.

Step 3: Configure advanced settings

On the Advanced wizard page, configure the following settings: access control, scaling, scheduling, annotations, and labels.

  1. In the Access Control section, you can configure access control settings for exposing backend pods.

    You can also specify how backend pods are exposed to the Internet. In this example, a ClusterIP Service and an Ingress are created to expose the NGINX application to the Internet.

    • To create a Service, click Create on the right side of Services. In the Create dialog box, set the parameters.

      Name: The name of the Service. In this example, nginx-svc is used.

      Type: The type of the Service. This parameter specifies how the Service is accessed. Cluster IP is selected in this example.

      • Cluster IP: the ClusterIP Service. This type of Service exposes the Service through an internal IP address of the cluster. If you select this option, the Service is accessible only within the cluster. This is the default value.

        Note

        The Headless Service parameter is available only when you set Type to Cluster IP.

      • Server Load Balancer: the LoadBalancer Service. This type of Service exposes the Service by using a Server Load Balancer (SLB) instance. If you select this type, you can enable internal or external access to the Service. SLB instances can be used to route requests to ClusterIP Services.

        • Create SLB Instance: You can click Modify to change the specification of the SLB instance.

        • Use Existing SLB Instance: Select an SLB instance type from the list.

        Note

        You can create an SLB instance or use an existing SLB instance. You can also associate an SLB instance with multiple Services. However, you must take note of the following limits:

        • If you use an existing SLB instance, the listeners of the SLB instance overwrite the listeners of the Service.

        • If an SLB instance is created along with a Service, you cannot reuse this SLB instance when you create other Services. Otherwise, the SLB instance may be deleted. Only SLB instances that are manually created in the console or by calling the API can be used to expose multiple Services.

        • Kubernetes Services that share the same SLB instance must use different listening ports. Otherwise, port conflicts may occur.

        • If multiple Services share the same SLB instance, you must use the listener names and the vServer group names as unique identifiers in Kubernetes. Do not modify the names of listeners or vServer groups.

        • You cannot share SLB instances across clusters.

      Port Mapping: Specify a Service port and a container port. The container port must be the same as the one that is exposed in the backend pod.

      External Traffic Policy:

      • Local: Traffic is routed only to pods on the node that receives the traffic.

      • Cluster: Traffic can be routed to pods on other nodes.

      Note

      The External Traffic Policy parameter is available only if you set Type to Server Load Balancer.

      Annotations: Add one or more annotations to the SLB instance. For example, service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth:20 specifies that the maximum bandwidth of the Service is 20 Mbit/s. This limits the amount of traffic that flows through the Service.

      Label: The label to be added to the Service, which identifies the Service.
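The Service described above can be expressed in YAML as follows (a sketch of the nginx-svc ClusterIP Service used in this example; the app: nginx selector is assumed to match the Deployment's pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP        # Accessible only within the cluster.
  selector:
    app: nginx           # Must match the backend pod labels.
  ports:
  - name: http
    protocol: TCP
    port: 80             # Service port.
    targetPort: 80       # Container port exposed by the backend pod.
```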

    • To create an Ingress, click Create on the right side of Ingresses. In the Create dialog box, set the parameters.

      Note

      When you create an application from an image, you can create an Ingress for only one Service. In this example, the name of a virtual host is used as the test domain name. You must add the following mapping rule, in the format of <Ingress external endpoint> <Ingress domain name>, to the hosts file to map the domain name to the IP address of the Ingress. In actual scenarios, use a domain name that has an Internet Content Provider (ICP) filing.

      101.37.xx.xx   foo.bar.com    # The IP address of the Ingress.

      Name: Enter the name of the Ingress. In this example, alb-ingress is entered.

      Rules: Ingress rules are used to enable access to specific Services in a cluster. For more information, see Getting started with ALB Ingresses.

      • Domain: Enter the domain name of the Ingress.

      • Path: Enter the URL path of the Service. The default path is the root path /, which is used in this example. Each path is associated with a backend Service. SLB forwards traffic to a backend Service only when inbound requests match the domain name and path.

      • Service: Select a Service and a Service port.

      • TLS Settings: Select this check box to enable TLS.

      The test domain name foo.bar.com is used in this example. The nginx-svc Service is set as the backend of the Ingress.

      Canary Release: Enable or disable the canary release feature. We recommend that you select Open Source Solution because the canary release feature provided by Alibaba Cloud is discontinued.

      Ingress Class: Specify the class of the Ingress.

      Annotations: You can add custom annotations or select existing annotations. Click Add and enter a key and a value. For more information about Ingress annotations, see Annotations.

      Labels: Click +Add to add labels in key-value pairs that identify the Ingress.
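The Ingress settings above correspond to a manifest like the following (a sketch; the alb ingress class name is an assumption, so verify the class that is available in your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
spec:
  ingressClassName: alb          # Assumed Ingress class; verify in your cluster.
  rules:
  - host: foo.bar.com            # Test domain name.
    http:
      paths:
      - path: /                  # Default root path.
        pathType: Prefix
        backend:
          service:
            name: nginx-svc      # Backend Service.
            port:
              number: 80
```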

  2. Optional: In the Scaling section, you can enable HPA to handle fluctuating workloads.

    • HPA can automatically scale the number of pods in an ACS cluster based on the CPU and memory usage metrics.

      Note

      To enable HPA, you must configure the resources required by the container. Otherwise, HPA does not take effect.

      Metric: Select CPU Usage or Memory Usage. The selected resource type must be the same as that specified in the Required Resources field.

      Condition: Specify the resource usage threshold. HPA triggers scale-out events when the threshold is exceeded.

      Max. Replicas: The maximum number of replicated pods to which the application can be scaled.

      Min. Replicas: The minimum number of replicated pods that must run.

    • CronHPA can scale workloads in an ACS cluster on a schedule. Before you enable CronHPA, you must first install ack-kubernetes-cronhpa-controller. For more information about CronHPA, see CronHPA.
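The HPA settings above can also be declared in YAML. A minimal sketch that targets the Deployment from the sample template and scales on CPU usage (the 70% threshold and replica bounds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment-basic   # The Deployment to scale.
  minReplicas: 2                   # Min. Replicas
  maxReplicas: 10                  # Max. Replicas
  metrics:
  - type: Resource
    resource:
      name: cpu                    # Metric: CPU Usage
      target:
        type: Utilization
        averageUtilization: 70     # Condition: usage threshold.
```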

  3. Optional: In the Labels and Annotations section, you can click Add to add pod labels and annotations.

  4. After you complete the configuration, click Create.

Step 4: Check the application

On the Complete wizard page, you can view the application.

  1. Click View Details to go to the details page of the Deployment.

  2. In the left-side navigation pane, choose Network > Ingresses. In the Rules column of the Deployment, you can view the Ingress rules.

  3. Enter the test domain name into the address bar of your web browser to go to the NGINX welcome page.

Use a YAML template to create an application

In an ACS orchestration template, you must define the resource objects that are required for running an application and configure mechanisms such as label selectors to orchestrate the resource objects into an application.

This section describes how to use an orchestration template to create an NGINX application that consists of a Deployment and a Service. The Deployment provisions pods for the application and the Service manages access to the backend pods.

  1. Log on to the ACS console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its ID. In the left-side pane, choose Workloads > Deployments.

  3. On the Deployments page, click Create from YAML in the upper-right corner.

  4. On the Create page, configure the template and click Create.

    • Sample Template: ACS provides YAML templates for various Kubernetes resource objects. You can also create a custom template based on YAML syntax to define the resources that you want to create.

    • Create Workload: You can quickly define a YAML template.

    • Use Existing Template: You can import an existing template.

    • Save Template: You can save the template that you have configured.

    The following sample template is based on an orchestration template provided by ACS. You can use this template to create a Deployment to run an NGINX application. By default, a Classic Load Balancer (CLB) instance is created.

    Note
    • ACS supports Kubernetes YAML orchestration. You can use --- to separate resource objects. This allows you to define multiple resource objects in one YAML template.

    • Optional: By default, when you mount a volume to an application, the files in the mount target are overwritten. To avoid overwriting the existing files, you can add the subPath parameter.


    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment-basic
      labels:
        app: nginx
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
            alibabacloud.com/compute-class: general-purpose
            alibabacloud.com/compute-qos: default
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9 # Replace with your <image_name:tag>.
            ports:
            - containerPort: 80
            volumeMounts:
            - name: nginx-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf # Set the subPath parameter.
          volumes:
          - name: nginx-config
            configMap:
              name: nginx-conf
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service1 # Specify the name of your Service.
      labels:
        app: nginx
    spec:
      selector:
        app: nginx # Change the label selector to match your backend pods.
      ports:
      - protocol: TCP
        name: http
        port: 30080
        targetPort: 80
      type: LoadBalancer
    ---
    # The ConfigMap of the mounted volume.
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-conf
      namespace: default
    data:
      nginx.conf: |-
        user  nginx;
        worker_processes  1;
        error_log  /var/log/nginx/error.log warn;
        pid        /var/run/nginx.pid;
        events {
            worker_connections  1024;
        }
        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;
            log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent "$http_referer" '
                              '"$http_user_agent" "$http_x_forwarded_for"';
            access_log  /var/log/nginx/access.log  main;
            sendfile        on;
            #tcp_nopush     on;
            keepalive_timeout  65;
            #gzip  on;
            include /etc/nginx/conf.d/*.conf;
        }
  5. After you click Create, a message that indicates the deployment status appears.

Use kubectl to manage applications

You can use kubectl to create applications or view application pods.

Use kubectl to create an application

  1. Run the following command to create a Deployment that runs NGINX:

     kubectl create deployment nginx --image=registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest

  2. Run the following command to create a Service for the Deployment and specify --type=LoadBalancer to use a load balancer provided by Alibaba Cloud:

     kubectl expose deployment nginx --port=80 --target-port=80 --type=LoadBalancer

Use kubectl to view pods

Run the following command to query the pods of the NGINX application:

kubectl get pod | grep nginx

Expected output:

NAME                                   READY     STATUS    RESTARTS   AGE
nginx-2721357637-d****                 1/1       Running   1          9h