
Container Compute Service:Use a StatefulSet to create a stateful application

Last Updated:Aug 30, 2024

You can use a StatefulSet to quickly create a stateful application in the Alibaba Cloud Container Compute Service (ACS) console. This topic describes how to create a stateful NGINX application and describes the features of StatefulSets.

Prerequisites

A kubectl client is connected to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.

Background information

StatefulSets provide the following features:

Feature

Description

Pod consistency

Pod consistency ensures that pods are started and terminated in the specified order and ensures network consistency. Pod consistency is determined by pod configurations, regardless of the node to which a pod is scheduled.

Stable and persistent storage

The volumeClaimTemplates field allows you to mount a persistent volume (PV) to each pod. The PVs mounted to replicated pods are not deleted when you delete the pods or scale in the StatefulSet.

Stable network identifiers

Each pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the pod. The hostname follows the pattern <StatefulSet name>-<pod ordinal>.

Stable orders

For a StatefulSet with N replicated pods, each pod is assigned an integer ordinal from 0 to N-1. The ordinals assigned to pods within the StatefulSet are unique.
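The features above map directly to fields in a StatefulSet manifest. The following minimal sketch (the names nginx and disk-ssd, the image, and the storage size are illustrative) pairs a headless Service for stable network identifiers with volumeClaimTemplates for per-pod persistent storage; the resulting pods are named nginx-0, nginx-1, and nginx-2.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None         # Headless Service: gives each pod a stable DNS name.
  selector:
    app: nginx
  ports:
    - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  serviceName: nginx      # Must match the headless Service above.
  replicas: 3             # Pods are created in order: nginx-0, nginx-1, nginx-2.
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          volumeMounts:
            - name: disk-ssd
              mountPath: /tmp
  volumeClaimTemplates:   # A separate PVC (and PV) is created for each pod.
    - metadata:
        name: disk-ssd
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 20Gi
```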

Procedure

  1. Log on to the ACS console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its ID. In the left-side pane, choose Workloads > StatefulSets.

  3. In the upper-right corner of the StatefulSets page, click Create from Image.

  4. On the Basic Information wizard page, configure the basic settings.

    Parameter

    Description

    Name

    Enter a name for the application.

    Replicas

    Specify the number of pods that are provisioned for the application.

    Workload

    The type of the resource object. StatefulSet is selected.

    Label

    Add labels to the application. The labels are used to identify the application.

    Annotations

    Add annotations to the application.

    Instance Type

    Select an instance type. For more information, see ACS pod overview.

    QoS Type

    Select a QoS class. General-purpose instances support the default (Guaranteed) and Best-effort QoS classes. Performance-enhanced instances support only the default (Guaranteed) QoS class. For more information, see Compute quality.

  5. Click Next to proceed to the Container wizard page.

  6. Configure containers.

    Note

    In the upper part of the Container wizard page, click Add Container to add more containers for the application.

    The following table describes the parameters that are used to configure the containers.

    • Basic Configurations

      Parameter

      Description

      Image Name

      • Click Select images and select a container image.

        • Container Registry Enterprise Edition: Select an image stored in a Container Registry Enterprise Edition instance. You must select the region and the Container Registry instance to which the image belongs. For more information about Container Registry, see What is Container Registry?

        • Container Registry Personal Edition: Select an image stored in a Container Registry Personal Edition instance. Make sure that Container Registry Personal Edition is already activated. You must select the region and the Container Registry instance to which the image belongs.

        • Artifact Center: The artifact center contains base operating system images, base language images, and AI- and big data-related images for application containerization. In this example, an NGINX image is selected. For more information, see Overview of the artifact center.

      • Image Pull Policy: ACS supports only the Always policy. This means that the image is pulled from Container Registry each time you deploy an application or scale out the cluster. The image is not pulled from the local environment.

      • Click Set Image Pull Secret to set a Secret used to pull the private image.

      CPU

      You can configure the CPU request and CPU limit of the container. By default, the CPU request equals the CPU limit. CPU resources are billed on a pay-as-you-go basis. If you use a YAML template to set a resource limit that differs from the resource request, the resource request is automatically overridden to the value of the resource limit. For more information, see Resource specifications.

      Memory

      You can configure the memory request and memory limit of the container. By default, the memory request equals the memory limit. Memory resources are billed on a pay-as-you-go basis. If you use a YAML template to set a resource limit that differs from the resource request, the resource request is automatically overridden to the value of the resource limit. For more information, see Resource specifications.

      Container Start Parameter

      • stdin: specifies that start parameters are sent to the container as standard input (stdin).

      • tty: specifies that start parameters defined in a virtual terminal are sent to the container.

      The two options are usually used together. In this case, the virtual terminal (tty) is associated with the stdin of the container. For example, an interactive program receives the stdin from the user and displays the content in the terminal.

      Init Container

      If you select Init Containers, an init container is created.

      Init containers provide a mechanism to block or delay the startup of application containers. The application containers in a pod are started only after all init containers have run to completion. Init containers can contain utilities or setup scripts that are not included in an application image. Therefore, init containers can be used to initialize the runtime environment of application containers. For example, you can use init containers to configure kernel parameters or generate configuration files. For more information, see Init Containers.
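      As a sketch of the init container mechanism, the pod template below runs a setup command to completion before the application container starts. The busybox image and the sysctl command are illustrative assumptions:

      ```yaml
      spec:
        initContainers:
          - name: init-sysctl            # Runs to completion before nginx starts.
            image: busybox
            command: ["sh", "-c", "sysctl -w net.core.somaxconn=1024"]
            securityContext:
              privileged: true           # Required to change kernel parameters.
        containers:
          - name: nginx
            image: nginx:latest
      ```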

    • Optional: In the Ports section, you can click Add to add container ports.

      Parameter

      Description

      Name

      Enter a name for the container port.

      Container Port

      Specify the container port that you want to expose. The port number must be from 1 to 65535.

      Protocol

      Valid values: TCP and UDP.

    • Optional: In the Environments section, you can click Add to add environment variables.

      You can add environment variables in key-value pairs to a pod in order to add environment labels or pass configurations. For more information, see Pod variables.

      Parameter

      Description

      Type

      Select the type of environment variable. Valid values:

      • Custom

      • ConfigMaps

      • Secrets

      • Value/ValueFrom

      • ResourceFieldRef

      If you select ConfigMaps or Secrets, you can pass all data in the selected ConfigMap or Secret to the container environment variables.

      In this example, Secrets is selected. Select Secrets from the Type drop-down list and select a Secret from the Value/ValueFrom drop-down list. By default, all data in the selected Secret is passed to the environment variables.

      In this case, the YAML file that is used to deploy the application contains the settings that reference all data in the selected Secret.

      Variable Key

      The name of the environment variable.

      Value/ValueFrom

      The value of the environment variable.
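      For reference, the Secret-based configuration described above corresponds to an envFrom entry in the generated container spec. The Secret name nginx-secret and the custom variable are hypothetical:

      ```yaml
      containers:
        - name: nginx
          image: nginx:latest
          envFrom:
            - secretRef:
                name: nginx-secret   # All key-value pairs in this Secret become environment variables.
          env:
            - name: LOG_LEVEL        # A custom variable specified as a key-value pair.
              value: "info"
      ```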

    • Optional: In the Health Check section, you can enable liveness probes, readiness probes, and startup probes on demand.

      For more information, see Configure Liveness, Readiness and Startup Probes.

      The following probe types are supported:

      • Liveness: Liveness probes are used to determine when to restart a container.

      • Readiness: Readiness probes are used to determine whether a container is ready to receive traffic.

      • Startup: Startup probes are used to determine when the application in a container has started.

      Request type

      Description

      HTTP

      Sends an HTTP GET request to the container. You can set the following parameters:

      • Protocol: the protocol over which the request is sent. Valid values: HTTP and HTTPS.

      • Path: the requested HTTP path on the server.

      • Port: the number or name of the port exposed by the container. The port number must be from 1 to 65535.

      • HTTP Header: the custom headers in the HTTP request. Duplicate headers are allowed. You can specify HTTP headers in key-value pairs.

      • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time (in seconds) before the first probe is performed after the container is started. Default value: 3.

      • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

      • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1. Unit: seconds.

      • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

      • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

      TCP

      Performs a TCP socket check on the container. kubelet attempts to open a socket on the specified port. If the connection can be established, the container is considered healthy. Otherwise, the container is considered unhealthy. You can configure the following parameters:

      • Port: the number or name of the port exposed by the container. The port number must be from 1 to 65535.

      • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time (in seconds) before the first probe is performed after the container is started. Default value: 15.

      • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

      • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1. Unit: seconds.

      • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

      • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.

      Command

      Runs a probe command in the container to check the health status of the container. You can configure the following parameters:

      • Command: the probe command that is run to check the health status of the container.

      • Initial Delay (s): the initialDelaySeconds field in the YAML file. This field specifies the wait time (in seconds) before the first probe is performed after the container is started. Default value: 5.

      • Period (s): the periodSeconds field in the YAML file. This field specifies the time interval (in seconds) at which probes are performed. Default value: 10. Minimum value: 1.

      • Timeout (s): the timeoutSeconds field in the YAML file. This field specifies the time (in seconds) after which a probe times out. Default value: 1. Minimum value: 1. Unit: seconds.

      • Healthy Threshold: the minimum number of consecutive successes that must occur before a container is considered healthy after a failed probe. Default value: 1. Minimum value: 1. For liveness probes, this parameter must be set to 1.

      • Unhealthy Threshold: the minimum number of consecutive failures that must occur before a container is considered unhealthy after a success. Default value: 3. Minimum value: 1.
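      As an illustrative sketch, the probe parameters described above map to the following fields in the container spec (the HTTP path and ports are assumptions):

      ```yaml
      containers:
        - name: nginx
          image: nginx:latest
          livenessProbe:
            httpGet:
              path: /             # Requested HTTP path on the server.
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1   # Must be 1 for liveness probes.
            failureThreshold: 3
          readinessProbe:
            tcpSocket:
              port: 80            # TCP check on the exposed container port.
            initialDelaySeconds: 15
            periodSeconds: 10
      ```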

    • Optional: In the Lifecycle section, you can configure the lifecycle of the container.

      You can specify the following parameters to configure the lifecycle of the container: Start, Post Start, and Pre Stop. For more information, see Configure the lifecycle of a container.

      Parameter

      Description

      Start

      Specify the command and parameters that are run when the container starts.

      Post Start

      Specify a command that is run immediately after the container starts.

      Pre Stop

      Specify a command that is run before the container is stopped.
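      These three settings correspond to the following fields in the container spec. The commands shown are illustrative:

      ```yaml
      containers:
        - name: nginx
          image: nginx:latest
          command: ["nginx"]                    # Start: the container entrypoint.
          args: ["-g", "daemon off;"]           # Start: its parameters.
          lifecycle:
            postStart:
              exec:
                command: ["sh", "-c", "echo started > /tmp/ready"]  # Runs right after start.
            preStop:
              exec:
                command: ["nginx", "-s", "quit"]  # Graceful shutdown before termination.
      ```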

    • Optional: In the Volume section, you can add local volumes or Persistent Volume Claims (PVCs).

      Parameter

      Description

      Add Local Storage

      Local volumes include ConfigMaps, Secrets, and EmptyDirs. Local volumes mount the specified data sources to containers. For more information, see Volumes.

      Add PVC

      You can mount PVs by using PVCs. You must create a PVC before you can select the PVC to mount a PV.

      Add NAS File System

      You can add PVs that use NAS file systems. Before you start, you must log on to the NAS console and create a container mount target. For more information, see Mount a file system on a container.

      In this example, a PVC named disk-ssd is mounted to the /tmp path of the container.
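      For the disk-ssd PVC in this example, the mount corresponds to fields similar to the following sketch (for a StatefulSet, per-pod claims are typically declared through volumeClaimTemplates instead of an existing PVC):

      ```yaml
      containers:
        - name: nginx
          volumeMounts:
            - name: disk-ssd
              mountPath: /tmp        # The PV is mounted to /tmp in the container.
      volumes:
        - name: disk-ssd
          persistentVolumeClaim:
            claimName: disk-ssd      # An existing PVC created beforehand.
      ```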

  7. In the Log section, you can specify logging configurations and add custom tags to the collected log.

    Parameter

    Description

    Collection Configuration

    • Logstore: Create a Logstore in Simple Log Service to store the collected log data.

    • Log Path in Container: Specify stdout or a container path to collect log data.

      • Collect stdout files: If you specify stdout, the stdout files are collected.

      • Text Logs: If you specify a container path, the logs in that path are collected. In this example, /var/log/nginx is specified as the path. Wildcard characters can be used in the path.

    Custom Tag

    You can also add custom tags. The tags are added to the log of the container when the log is collected. You can add custom tags to container logs for log analysis and filtering.

  8. Click Next to go to the Advanced wizard page.

  9. Optional: On the Advanced wizard page, you can configure access control, scaling, scheduling, labels, and annotations.

    • In the Access Control section, you can configure access control settings for exposing backend pods.

      You can also specify how backend pods are exposed to the Internet. In this example, a ClusterIP Service and an Ingress are created to expose the NGINX application to the Internet.

    • To create a Service, click Create on the right side of Services. In the Create dialog box, set the parameters.

      View Service parameters

      Parameter

      Description

      Name

      The name of the Service. In this example, nginx-svc is used.

      Type

      The type of Service. This parameter specifies how the Service is accessed. Cluster IP is selected in this example.

      • Cluster IP: the ClusterIP Service. This type of Service exposes the Service through an internal IP address of the cluster. If you select this option, the Service is accessible only within the cluster. This is the default value.

        Note

        The Headless Service parameter is available only when you set Type to Cluster IP.

      • Server Load Balancer: The LoadBalancer type of Service. This type of Service exposes the Service by using a Server Load Balancer (SLB) instance. If you select this type, you can enable internal or external access to the Service. SLB instances can be used to route requests to ClusterIP Services.

        • Create SLB Instance: You can click Modify to change the specification of the SLB instance.

        • Use Existing SLB Instance: Select an SLB instance type from the list.

        Note

        You can create an SLB instance or use an existing SLB instance. You can also associate an SLB instance with multiple Services. However, you must take note of the following limits:

        • If you use an existing SLB instance, the listeners of the SLB instance overwrite the listeners of the Service.

        • If an SLB instance is created along with a Service, you cannot reuse this SLB instance when you create other Services. Otherwise, the SLB instance may be deleted. Only SLB instances that are manually created in the console or by calling the API can be used to expose multiple Services.

        • Kubernetes Services that share the same SLB instance must use different listening ports. Otherwise, port conflicts may occur.

        • If multiple Services share the same SLB instance, you must use the listener names and the vServer group names as unique identifiers in Kubernetes. Do not modify the names of listeners or vServer groups.

        • You cannot share SLB instances across clusters.

      Port Mapping

      Specify a Service port and a container port. The container port must be the same as the one that is exposed in the backend pod.

      External Traffic Policy

      • Local: Traffic is routed only to pods on the node where the ingress gateway is deployed.

      • Cluster: This policy can route traffic to pods on other nodes.

      Note

      The External Traffic Policy parameter is available only if you set Type to Server Load Balancer.

      Annotations

      The annotations to be added to the Service to configure the SLB instance. For example, service.beta.kubernetes.io/alicloud-loadbalancer-bandwidth:20 specifies that the maximum bandwidth of the Service is 20 Mbit/s. This limits the amount of traffic that flows through the Service.

      Labels

      The label to be added to the Service, which identifies the Service.
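      For reference, the nginx-svc ClusterIP Service configured in this example would generate a manifest similar to the following sketch (the selector label is an assumption):

      ```yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: nginx-svc
      spec:
        type: ClusterIP          # Accessible only within the cluster.
        selector:
          app: nginx             # Must match the pod labels of the StatefulSet.
        ports:
          - port: 80             # Service port.
            targetPort: 80       # Container port exposed by the backend pod.
            protocol: TCP
      ```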

    • To create an Ingress, click Create on the right side of Ingresses. In the Create dialog box, set the parameters.

      View Ingress parameters

      Note

      When you create an application from an image, you can create an Ingress only for one Service. In this example, the name of a virtual host is used as the test domain name. You must add the following mapping rule to the hosts file to map the domain name to the IP address of the Ingress. The entry is in the format of <Ingress external endpoint> + <Ingress domain name>. In actual scenarios, use a domain name that has an Internet Content Provider (ICP) number.

      101.37.xx.xx   foo.bar.com    # The IP address of the Ingress.

      Parameter

      Description

      Name

      The name of the route. In this example, nginx-ingress is used.

      Rules

      Ingress rules are used to enable access to specific Services in a cluster.

      • Domain: Enter the domain name of the Ingress.

      • Path: Enter the Service URL. The default path is the root path /. The default path is used in this example. Each path is associated with a backend Service. SLB forwards traffic to a backend Service only when inbound requests match the domain name and path.

      • Service: Select a Service and a Service port.

      • TLS Settings: Select this check box to enable TLS.

      The test domain name foo.bar.com is used in this example. The nginx-svc Service is set as the backend of the Ingress.

      Canary Release

      Enable or disable the canary release feature. We recommend that you select Open Source Solution because the canary release feature provided by Alibaba Cloud is discontinued.

      Ingress Class

      Specify the class of the Ingress.

      Annotations

      You can add custom annotations or select existing annotations. Click Add and enter a key and a value. For more information about Ingress annotations, see Annotations.

      Labels

      Click +Add to add labels in key-value pairs to the Ingress in order to identify the Ingress.
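      The nginx-ingress route in this example corresponds to a manifest similar to the following sketch; the ingressClassName value is an assumption:

      ```yaml
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: nginx-ingress
      spec:
        ingressClassName: nginx          # Assumed class; use the class of your Ingress controller.
        rules:
          - host: foo.bar.com            # Test domain name mapped in the hosts file.
            http:
              paths:
                - path: /                # Root path; requests matching host and path are forwarded.
                  pathType: Prefix
                  backend:
                    service:
                      name: nginx-svc    # The backend Service created earlier.
                      port:
                        number: 80
      ```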

    • Optional: In the Scaling section, you can enable HPA to handle fluctuating workloads.

      • HPA can automatically scale the number of pods for the application based on the CPU and memory usage metrics.

        Note

        To enable HPA, you must configure the resources required by the container. Otherwise, HPA does not take effect.

        Parameter

        Description

        Metrics

        Select CPU Usage or Memory Usage. The selected resource type must be the same as that specified in the Required Resources field.

        Condition

        Specify the resource usage threshold. HPA triggers scale-out events when the threshold is exceeded.

        Max. Replicas

        The maximum number of replicated pods to which the application can be scaled.

        Min. Replicas

        The minimum number of replicated pods that must run.

      • CronHPA can scale the number of pods at a scheduled time. Before you enable CronHPA, you must first install ack-kubernetes-cronhpa-controller. For more information about CronHPA, see CronHPA.
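      For reference, a minimal HPA manifest equivalent to this configuration might look as follows. The threshold and replica counts are illustrative:

      ```yaml
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: nginx-hpa
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: StatefulSet
          name: nginx
        minReplicas: 2                   # Minimum number of replicated pods.
        maxReplicas: 10                  # Maximum number of replicated pods.
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70   # Scale out when average CPU usage exceeds 70%.
      ```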

    • Optional: In the Labels, Annotations section, you can click Add to add pod labels and annotations.

    • After you complete the configuration, click Create.

  10. On the Complete wizard page, you can view the application.

    1. Click View Details to go to the details page of the StatefulSet.

    2. In the left-side navigation pane, choose Network > Ingresses. In the Rules column of the Ingress, you can view the Ingress rules.

    3. Enter the test domain name into the address bar of your web browser to go to the NGINX welcome page.

What to do next

Use the kubectl client to run the following commands to test persistent storage.

  1. Run the following command to create a temporary file in the disk:

    kubectl exec nginx-1 -- ls /tmp                        # Query files (including lost+found) in the /tmp directory. 
    kubectl exec nginx-1 -- touch /tmp/statefulset         # Create a file named statefulset. 
    kubectl exec nginx-1 -- ls /tmp

    Expected output:

    lost+found
    statefulset
  2. Run the following command to delete the pod to verify data persistence:

    kubectl delete pod nginx-1

    Expected output:

    pod "nginx-1" deleted
  3. After the system recreates and starts a new pod, query the files in the /tmp directory. The following result shows that the statefulset file still exists, which demonstrates the data persistence of the stateful application.

    kubectl exec nginx-1 -- ls /tmp   # Query files (including lost+found) in the /tmp directory to verify data persistence.

    Expected output:

    statefulset