
Container Service for Kubernetes: Use Kruise Rollout to perform canary releases and A/B testing

Last Updated: Jul 10, 2024

Kruise Rollout is a Kubernetes-based standard extension that you can use to perform canary releases, A/B testing, and blue-green deployments for Kubernetes workloads such as Deployments and StatefulSets. You can also use it for OpenKruise workloads such as CloneSets and Advanced StatefulSets. This topic describes how to use Kruise Rollout to perform canary releases.


Prerequisites

  • A Container Service for Kubernetes (ACK) cluster is created. For more information, see Create an ACK managed cluster.

    • To perform A/B testing or canary releases, you must create a cluster that runs Kubernetes 1.19 or later.

    • To perform phased releases, you must create a cluster that runs Kubernetes 1.16 or later.

  • kubectl-kruise is installed. For more information about how to install kubectl-kruise, see kubectl-kruise. You can verify the installation as shown below.
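    The following check is a minimal sketch for confirming that the plug-in is on your PATH; the exact help output depends on the kubectl-kruise version that you installed.

      # Confirm that kubectl-kruise is available and that the rollout subcommand exists.
      kubectl-kruise --help
      kubectl-kruise rollout --help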

Introduction to Kruise Rollout

Kruise Rollout is an open source progressive delivery framework provided by OpenKruise. You can use Kruise Rollout to perform canary releases, blue-green deployments, and A/B testing, and to control both the canary traffic and the canary pods. The release process can be automated in batches and paused based on Prometheus metrics. Kruise Rollout works as a non-intrusive bypass component and is compatible with various workloads such as Deployments, CloneSets, and StatefulSets. For more information, see Kruise Rollout.

Kruise Rollout is a bypass component. You only need to create a Rollout in your cluster to automate application releases and updates. Kruise Rollout can be integrated with Helm and PaaS platforms at low cost. The following figure shows how canary releases are performed by using Kruise Rollout.

(Figure: canary release process implemented with Kruise Rollout)
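For reference, the following is a minimal sketch of a Rollout object. It only references an existing workload and defines the release steps; the workload itself is not modified. The metadata and workload names are illustrative, and complete, runnable examples are provided later in this topic.

  apiVersion: rollouts.kruise.io/v1alpha1
  kind: Rollout
  metadata:
    name: rollout-sketch        # Illustrative name.
  spec:
    objectRef:
      workloadRef:              # The existing workload that the Rollout controls.
        apiVersion: apps/v1
        kind: Deployment
        name: my-app            # Illustrative Deployment name.
    strategy:
      canary:
        steps:                  # Release batches. Each batch can pause, set replicas, and route traffic.
        - replicas: 1
          pause: {}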

Before you begin

Install Kruise Rollout.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Operations > Add-ons.

  3. On the Add-ons page, click the Manage Applications tab. In the lower-right corner of the ack-kruise card, click Install.

  4. In the message that appears, confirm the information and click OK.
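  After the installation is complete, you can optionally verify that the Kruise Rollout CRDs and controller are available. The following check is only a sketch: the controller namespace depends on how the component is installed (in the open source Helm installation it is kruise-rollout).

    # Check that the Rollout CRD is registered in the cluster.
    kubectl get crd rollouts.rollouts.kruise.io

    # Check that the Kruise Rollout controller pods are running. The namespace may differ in your cluster.
    kubectl get pods -A | grep kruise-rollout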

The following examples show how to use Kruise Rollout to perform canary releases, A/B testing, and blue-green deployments.

Example 1: Use Kruise Rollout to perform canary releases or A/B testing based on NGINX Ingresses

NGINX Ingresses are commonly used to expose applications. By default, ACK supports NGINX Ingresses. The following example shows how to use Kruise Rollout and an NGINX Ingress to perform canary releases.

  1. Install the NGINX Ingress controller.
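    If you are not sure whether the controller is already installed, the following lookup is a rough sketch; the workload name and namespace depend on how the controller was installed (in ACK clusters it typically runs in the kube-system namespace).

      # Look for the NGINX Ingress controller pods. The exact name and namespace may differ in your cluster.
      kubectl get pods -A | grep -i nginx-ingress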

  2. Create a Deployment, a Service, and an Ingress for an echoserver service.

    In this example, a Deployment is created to run an echoserver service and an NGINX Ingress is created to expose the service.

    1. Create a file named echoserver.yaml and copy the following content to the file:


      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echoserver
        labels:
          app: echoserver
      spec:
        replicas: 6
        selector:
          matchLabels:
            app: echoserver
        template:
          metadata:
            labels:
              app: echoserver
          spec:
            containers:
            - name: echoserver
              # On Apple silicon (M1), use an image that supports arm64, such as e2eteam/echoserver:2.2-linux-arm64.
              image: openkruise-registry.cn-shanghai.cr.aliyuncs.com/openkruise/demo:1.10.2
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 8080
              env:
              - name: NODE_NAME
                value: version1
              - name: PORT
                value: '8080'
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
              - name: POD_IP
                valueFrom:
                  fieldRef:
                    fieldPath: status.podIP
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: echoserver
        labels:
          app: echoserver
      spec:
        ports:
        - port: 80
          targetPort: 8080
          protocol: TCP
          name: http
        selector:
          app: echoserver
      ---
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: echoserver
        annotations:
          kubernetes.io/ingress.class: nginx
      spec:
        rules:
        - http:
            paths:
            - backend:
                service:
                  name: echoserver
                  port:
                    number: 80
              path: /apis/echo
              pathType: Exact
    2. Run the following command to deploy the application:

      kubectl apply -f echoserver.yaml
    3. Run the following command to query the IP address of the Ingress for external access:

      kubectl get ingress

      Expected output:

      NAME         CLASS    HOSTS   ADDRESS        PORTS   AGE
      echoserver   <none>   *       EXTERNAL_IP     80     12m
    4. Run the following command to access the Ingress:

      Replace <EXTERNAL_IP> with the IP address that you obtained in the previous step.

      curl http://<EXTERNAL_IP>/apis/echo

      Expected output:

      Hostname: echoserver-75d49c475c-ls2bs
      
      Pod Information:
          node name:    version1
          pod name:    echoserver-75d49c475c-ls2bs
          pod namespace:    default
      
      Server values:
          server_version=nginx: 1.13.3 - lua: 10008
  3. Create a Rollout to define the canary release rules of Kruise Rollout.

    In this example, the release is performed based on three batches of pods:

    • The first batch: A/B testing is performed. Requests that contain header[User-Agent]=Android are forwarded to the canary version. Other requests are forwarded to the earlier version.

    • The second batch: 50% of the replicated pods run the canary version and 50% of requests are forwarded to the canary version.

    • The third batch: 100% of the replicated pods run the canary version and 100% of requests are forwarded to the canary version.

    1. Create a file named rollout.yaml and copy the following content to the file:


      # Copy the following content to the rollout.yaml file. 
      apiVersion: rollouts.kruise.io/v1alpha1
      kind: Rollout
      metadata:
        name: rollouts-demo
        annotations:
          rollouts.kruise.io/rolling-style: partition
      spec:
        objectRef:
          workloadRef:
            apiVersion: apps/v1
            kind: Deployment
            # Deployment Name
            name: echoserver
        strategy:
          canary:
            steps:
            # If you want to perform canary releases, you can delete the matches field and set only the weight field. 
            - matches:
              # The first batch of pods is used to perform A/B testing. Traffic forwarding is based on request headers. 
              - headers:
                - type: Exact
                  name: User-Agent
                  value: Android
              # You need to manually confirm whether to release the second batch of pods. 
              pause: {}
              # Specifies that one pod runs the canary version. You can also specify the percentage of pods that run the canary version. 
              replicas: 1
              # Specifies that 50% of requests are forwarded to the canary version. 
            - weight: 50
              replicas: 50%
              # The next batch of pods is automatically released after a 60-second pause. If you want to manually release the next batch of pods, specify pause: {}. 
              pause: {duration: 60}
            - weight: 100
              replicas: 100%
              pause: {duration: 60}
            trafficRoutings:
              # Service Name
            - service: echoserver
              ingress:
              # Ingress Name
                name: echoserver
    2. Run the following command to deploy the Rollout in your cluster:

      kubectl apply -f rollout.yaml
    3. Run the following command to query the status of the Rollout:

      kubectl get rollout

      Expected output:

      NAME            STATUS    CANARY_STEP   CANARY_STATE   MESSAGE                            AGE
      rollouts-demo   Healthy   3             Completed      workload deployment is completed   7s

      If STATUS=Healthy is returned, the Rollout runs as expected.
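      To view more details about the configured steps and recent events, you can also describe the Rollout. This is an optional check; the exact output depends on the Kruise Rollout version.

      kubectl describe rollout rollouts-demo -n default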

  4. Update the application.

    The Rollout is a one-time configuration. After you deploy it in your cluster, you only need to update the Deployment when you release a new application version; you do not need to configure Kruise Rollout again. For example, you can update the image version of the echoserver service to 1.10.3 and then run the kubectl apply -f echoserver.yaml command to deploy the updated Deployment to your cluster. In addition to kubectl, you can also use Helm or Vela to deploy the Deployment to your cluster.

    1. Modify the echoserver.yaml file by updating the image version of the echoserver service to 1.10.3.

      # echoserver.yaml
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echoserver
      ...
      spec:
        ...
        containers:
        - name: echoserver
          # On Apple silicon (M1), you can use the e2eteam/echoserver:2.2-linux-arm image.
          image: openkruise-registry.cn-shanghai.cr.aliyuncs.com/openkruise/demo:1.10.3
          imagePullPolicy: IfNotPresent
          env:
          - name: NODE_NAME
            # This field is optional. In this example, the value is set to version2 to identify the new version in the echoserver response.
            value: version2
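
      After you save the changes, deploy the updated Deployment to your cluster. Kruise Rollout then starts the canary release according to the Rollout that you created in the previous step.

      kubectl apply -f echoserver.yaml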
                                      
    2. Run the following command to query the status of the Rollout:

      kubectl get rollouts rollouts-demo -n default

      Expected output:

      NAME            STATUS        CANARY_STEP   CANARY_STATE   MESSAGE                                                                         AGE
      rollouts-demo   Progressing   1             StepPaused     Rollout is in step(1/3), and you need manually confirm to enter the next step   41m

      You can check the STATUS, CANARY_STEP, and CANARY_STATE columns to view the status of the Rollout and the current release stage.

      • If STATUS=Progressing is returned, the canary release stage is in progress.

      • If CANARY_STEP=1 is returned, the first batch of pods is being released.

      • If CANARY_STATE=StepPaused is returned, the current batch of pods is released. Manual confirmation is required before the next batch of pods is released.

    3. Run the following commands to verify that requests that contain header[User-Agent]=Android are forwarded to the canary version and that other requests are forwarded to the earlier version.

      # Requests that contain the specified header are forwarded to the canary version (NODE_NAME=version2). 
      curl -H "User-Agent: Android" http://<EXTERNAL_IP>/apis/echo
      
      Hostname: echoserver-869877fc87-6bb5h
      
      Pod Information:
          node name: version2
          pod name: echoserver-869877fc87-6bb5h
          pod namespace: default
      
      Server values:
          server_version=nginx: 1.13.3 - lua: 10008
      
      # Other requests are forwarded to the earlier version (NODE_NAME=version1). 
      curl http://<EXTERNAL_IP>/apis/echo
      
      Hostname: echoserver-869877fc87-6bb5h
      
      Pod Information:
          node name: version1
          pod name: echoserver-869877fc87-6bb5h
          pod namespace: default
      
      Server values:
          server_version=nginx: 1.13.3 - lua: 10008
  5. You can release the remaining pods after you verify that the canary version runs as expected.

    The preceding steps release only some pods of the canary version and forward a proportion of requests to the canary version. After you verify that the canary version runs as expected based on application logs and monitoring data, you can run the kubectl-kruise rollout approve rollouts/rollouts-demo command to release the remaining pods of the canary version and forward all requests to the canary version. In this command, rollouts-demo is the name of the Rollout.

    kubectl get rollouts rollouts-demo -n default
    NAME            STATUS        CANARY_STEP   CANARY_STATE   MESSAGE                                                                         AGE
    rollouts-demo   Progressing   1             StepPaused     Rollout is in step(1/3), and you need manually confirm to enter the next step   41m
    
    # Use kubectl-kruise to release the remaining pods of the canary version. 
    kubectl-kruise rollout approve rollouts/rollouts-demo -n default
    rollout.rollouts.kruise.io/rollouts-demo approved
    
    # If CANARY_STEP=2 is returned, the second batch of pods is being released. 
    kubectl get rollout
    NAME            STATUS        CANARY_STEP   CANARY_STATE   MESSAGE                                                        AGE
    rollouts-demo   Progressing   2             StepUpgrade    Rollout is in step(2/3), and upgrade workload to new version   141m
    
    # If STATUS=Healthy and CANARY_STATE=Completed are returned, the application is updated by performing canary releases. 
    kubectl get rollout
    NAME            STATUS    CANARY_STEP   CANARY_STATE   MESSAGE                                  AGE
    rollouts-demo   Healthy   3             Completed      Rollout progressing has been completed   144m
  6. Optional: If the new application version does not run as expected, you can roll back the application version.

    If the canary version does not run as expected during the release process, modify the Deployment configurations and then run the kubectl apply -f echoserver.yaml command to roll back the application version.

    # echoserver.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: echoserver
    ...
    spec:
      ...
      containers:
      - name: echoserver
        # On Apple silicon (M1), you can use the e2eteam/echoserver:2.2-linux-arm image.
        image: openkruise-registry.cn-shanghai.cr.aliyuncs.com/openkruise/demo:1.10.2
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE_NAME
          value: version1
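
    After you revert the configuration, apply the manifest again and optionally watch the Rollout until the rollback is complete. The -w flag keeps the command running and prints status changes.

    kubectl apply -f echoserver.yaml
    kubectl get rollout rollouts-demo -w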

Example 2: Use Kruise Rollout to perform canary releases or A/B testing based on Microservices Engine (MSE) Ingresses

  1. Install the MSE Ingress controller. For more information, see Manage the MSE Ingress controller.

  2. Grant permissions to the MSE Ingress controller. For more information, see Authorize the MSE Ingress controller to access MSE.

  3. Create an MseIngressConfig and an IngressClass. For more information, see Use MSE Ingresses to access applications in ACK clusters.

  4. Create a Deployment, a Service, and an Ingress for an echoserver service.

    Perform steps similar to Step 2 in Example 1 to configure the application. In this example, specify ingressClassName: mse in the Ingress configuration.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: echoserver
    spec:
      # Set ingressClassName to mse. 
      ingressClassName: mse
      rules:
        - http:
            paths:
              - backend:
                  service:
                    name: echoserver
                    port:
                      number: 80
                path: /apis/echo
                pathType: Exact
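
    As in Example 1, you can then obtain the address of the MSE Ingress and verify that the application is reachable. Replace <EXTERNAL_IP> with the address in the command output.

    kubectl get ingress echoserver
    curl http://<EXTERNAL_IP>/apis/echo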
  5. The canary release steps and rollback operations are the same as those described in Example 1: Use Kruise Rollout to perform canary releases or A/B testing based on NGINX Ingresses. For more information, see Step 3 to Step 6 in Example 1.

Example 3: Use Kruise Rollout to perform phased releases for applications that use a microservices architecture such as Nacos

When you deploy applications that use a microservices architecture such as Nacos, you do not need to configure Services or Ingresses for the applications, because the microservices framework already provides traffic scheduling capabilities. You only need to apply the phased release feature of Kruise Rollout to such applications; traffic scheduling is handled by the microservices framework.

  1. Deploy a Deployment for an echoserver service.

    In this example, only a Deployment is deployed; no Service or Ingress is required. The following steps show only how to configure phased releases based on Kruise Rollout.

    1. Create a file named echoserver.yaml and copy the following content to the file:


      # Copy the following content to the echoserver.yaml file. 
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: echoserver
        labels:
          app: echoserver
      spec:
        replicas: 6
        selector:
          matchLabels:
            app: echoserver
        template:
          metadata:
            labels:
              app: echoserver
          spec:
            containers:
            - name: echoserver
              # On Apple silicon (M1), use an image that supports arm64, such as e2eteam/echoserver:2.2-linux-arm64.
              image: openkruise-registry.cn-shanghai.cr.aliyuncs.com/openkruise/demo:1.10.2
              imagePullPolicy: IfNotPresent
              ports:
              - containerPort: 8080
              env:
              - name: NODE_NAME
                value: version1
              - name: PORT
                value: '8080'
              - name: POD_NAME
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.name
              - name: POD_NAMESPACE
                valueFrom:
                  fieldRef:
                    fieldPath: metadata.namespace
              - name: POD_IP
                valueFrom:
                  fieldRef:
                    fieldPath: status.podIP
    2. Run the following command to deploy the application:

      kubectl apply -f echoserver.yaml
  2. Create a Rollout YAML file that defines the canary release rules of Kruise Rollout and run the kubectl apply -f rollout.yaml command to deploy the Rollout in the cluster.

    In this example, the release is performed based on three batches of pods. You do not need to set the trafficRoutings field.

    • The first batch: Only one replicated pod runs the canary version.

    • The second batch: 50% of the replicated pods run the canary version.

    • The third batch: 100% of the replicated pods run the canary version.

    # Copy the following content to the rollout.yaml file. 
    apiVersion: rollouts.kruise.io/v1alpha1
    kind: Rollout
    metadata:
      name: rollouts-demo
      annotations:
        rollouts.kruise.io/rolling-style: partition
    spec:
      objectRef:
        workloadRef:
          apiVersion: apps/v1
          kind: Deployment
          # Deployment Name
          name: echoserver
      strategy:
        canary:
          steps:
          # You need to manually confirm whether to release the second batch of pods. 
          - pause: {}
            # Specifies that one pod runs the canary version. You can also specify the percentage of the pods that run the canary version. 
            replicas: 1
            # In this batch, 50% of the replicated pods run the canary version. 
          - replicas: 50%
            # The next batch of pods is automatically released after a 60-second pause. If you want to manually release the next batch of pods, specify pause: {}. 
            pause: {duration: 60}
          - weight: 100
            replicas: 100%
            pause: {duration: 60}
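
    After you create the file, deploy the Rollout and confirm that it is healthy, as mentioned above. The following commands are the same as in Example 1; only the release steps differ.

    kubectl apply -f rollout.yaml
    kubectl get rollout rollouts-demo -n default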
  3. The canary release steps and rollback operations are the same as those described in Example 1: Use Kruise Rollout to perform canary releases or A/B testing based on NGINX Ingresses. For more information, see Step 4 to Step 6 in Example 1.