
Container Compute Service: Mount a statically provisioned CPFS for LINGJUN volume

Last Updated: Jan 26, 2025

CPFS is a fully managed, scalable parallel file system service provided by Alibaba Cloud for high-performance computing scenarios. CPFS allows concurrent access from thousands of servers and delivers tens of GB/s of throughput and millions of IOPS at sub-millisecond latency. This topic describes how to mount a statically provisioned CPFS for LINGJUN volume in an ACS cluster and how to verify that the volume can be used to share and persist data.

Background information

CPFS for LINGJUN is suitable for intelligent computing scenarios such as AI-generated content (AIGC) and autonomous driving. For more information, see What is CPFS for Lingjun (invitational preview)?

Prerequisites

The managed-csiprovisioner component is installed in the ACS cluster.

Note

Go to the ACS cluster management page in the ACS console. In the left-side navigation pane of the cluster management page, choose Operations > Add-ons. On the Storage tab, you can check whether managed-csiprovisioner is installed.
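
You can also run a quick command-line check. This is only a sketch, not the official verification procedure: if the component is installed, the CSI driver that mounts CPFS volumes is normally registered in the cluster and appears when you list CSI drivers.

    kubectl get csidriver

If nasplugin.csi.alibabacloud.com appears in the output, the driver referenced later in this topic is available. The exact set of drivers varies by cluster configuration.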

Limits

  • You cannot mount CPFS for LINGJUN volumes to GPU-accelerated ACS pods.

    The following table describes the compute classes of ACS pods that support CPFS for LINGJUN volumes.

    Edition             CPU-accelerated                 GPU-accelerated
    CPFS for LINGJUN    Supported (VPC mount target)    Not supported

  • Only some regions and zones support CPFS for LINGJUN. For more information, see CPFS for LINGJUN.

Usage notes

  • CPFS is a shared storage file system. A CPFS volume can be mounted to multiple pods.

  • CPFS for LINGJUN uses the pay-as-you-go billing method. For more information, see CPFS for LINGJUN.

Create a CPFS file system

CPFS for LINGJUN

  1. Create a CPFS for Lingjun file system. For more information, see the "Create a CPFS for Lingjun file system" section of the Create a file system topic.

    CPFS for LINGJUN is in invitational preview. To use it, submit a ticket.

  2. Create a VPC mount target based on the VPC and vSwitch of the ACS cluster and generate a mount path. For more information, see Manage VPC mount targets.

    Note

    You can mount CPFS for LINGJUN volumes only to CPU-accelerated ACS pods.

Mount CPFS volumes

Step 1: Create a PV and a PVC

kubectl

  1. Connect to your ACS cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster and Use kubectl on Cloud Shell to manage ACS clusters.

  2. Create a file named pv-pvc.yaml based on the following content.

    CPFS for LINGJUN (VPC mount target)

    You can mount CPFS for LINGJUN volumes only to CPU-accelerated ACS pods.

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: cpfs-test
      labels:
        alicloud-pvname: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      capacity:
        storage: 10Ti
      csi:
        driver: nasplugin.csi.alibabacloud.com
        volumeAttributes:
          mountProtocol: efc
          server: cpfs-***-vpc-***.cn-wulanchabu.cpfs.aliyuncs.com
          path: /
        volumeHandle: bmcpfs-*****
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cpfs-test
    spec:
      accessModes:
      - ReadWriteMany
      selector:
        matchLabels:
          alicloud-pvname: cpfs-test
      resources:
        requests:
          storage: 10Ti

    The following list describes the parameters.

    • labels: Add the alicloud-pvname: cpfs-test label so that the selector of the PVC can bind the PVC to this PV.

    • accessModes: The access mode of the volume.

    • capacity: The declared size of the volume.

    • csi.driver: The driver type. Set the value to nasplugin.csi.alibabacloud.com.

    • csi.volumeAttributes:

      • Set mountProtocol to efc.

      • Set server to the VPC mount target of the CPFS for LINGJUN file system. Example: cpfs-***-vpc-***.cn-wulanchabu.cpfs.aliyuncs.com.

      • path defaults to /, which indicates the root directory of the CPFS file system. You can also specify a subdirectory, such as /dir.

    • csi.volumeHandle: The ID of the CPFS for LINGJUN file system.

    • selector: Use the PV label to bind the PVC to the PV.

    • resources.requests: The storage capacity requested by the PVC. The value must not be greater than the capacity of the PV.

  3. Create a PV and a PVC.

    kubectl create -f pv-pvc.yaml
  4. Check whether the PVC is bound to the PV.

    kubectl get pvc cpfs-test

    Expected output:

    NAME        STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS    VOLUMEATTRIBUTESCLASS   AGE
    cpfs-test   Bound    cpfs-test        10Ti       RWX            <unset>         <unset>                 10s
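
    You can also confirm that the PV reports the Bound status. This is a standard Kubernetes check and is not specific to CPFS:

    kubectl get pv cpfs-test

    In the output, the STATUS column is Bound and the CLAIM column references the cpfs-test PVC.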

Console

  1. Log on to the ACS console.

  2. On the Clusters page, click the name of the cluster to go to the cluster management page.

  3. In the left-side navigation pane of the cluster management page, choose Volumes > Persistent Volume Claims.

  4. On the Persistent Volume Claims page, click Create.

  5. In the Create PVC dialog box, configure the parameters and click Create.

    The following parameters are used to create the PVC and the PV at the same time. You can also create the PVC after the PV is created.

    • PVC Type: Select CPFS. Example: CPFS.

    • Application Name: The custom name of the PVC. The name must follow the format requirements displayed in the console. Example: cpfs-test.

    • Allocation Mode: Select Existing Volumes or Create Volume as needed. Example: Create Volume.

    • CPFS Type: Select CPFS General-purpose or CPFS for LINGJUN as needed. Example: CPFS General-purpose.

    • Capacity: The storage space allocated to the pod, which is the declared size of the CPFS volume. Example: 20Gi.

      Note: The declared capacity does not limit the actual capacity of a statically provisioned CPFS volume. The actual usage shown in the CPFS console prevails.

    • Access Mode: Select ReadWriteMany or ReadWriteOnce. Example: ReadWriteMany.

    • Mount Target Domain Name: The directory of the CPFS file system to mount. Example: cpfs-***-***.cn-wulanchabu.cpfs.aliyuncs.com.

      • If you specify only the domain name of a mount target, such as cpfs-***-***.cn-wulanchabu.cpfs.aliyuncs.com, the root directory (/) of the CPFS file system is mounted.

      • If you specify the domain name of a mount target followed by a subdirectory, such as cpfs-***-***.cn-wulanchabu.cpfs.aliyuncs.com:/dir, the /dir directory of the CPFS file system is mounted. If /dir does not exist, the system creates it.

    • File System ID: The ID of the CPFS file system. This parameter is available only when CPFS Type is set to CPFS for LINGJUN. Example: bmcpfs-0115******13q5.

  6. View the PVC and PV.

    • You can find the PVC on the Persistent Volume Claims page. The PVC is bound to the PV.

    • You can find the PV on the Persistent Volumes page. The PV is bound to the PVC.

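    If you want to double-check from the command line, you can also query the PVC created in the console with kubectl. The command below assumes that you named the PVC cpfs-test as in the example:

    kubectl get pvc cpfs-test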

Step 2: Create an application and mount the CPFS volume

  1. Create a file named cpfs-test.yaml based on the following content.

    The YAML file creates a Deployment of two pods. The pods use alibabacloud.com/compute-class: general-purpose to request CPU compute power, because CPFS for LINGJUN volumes cannot be mounted to GPU-accelerated ACS pods, and use the cpfs-test PVC to request storage resources. The mount path is /data for both pods.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cpfs-test
      labels:
        app: cpfs-test
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: cpfs-test
      template:
        metadata:
          labels:
            app: cpfs-test
            # CPU-accelerated compute class. CPFS for LINGJUN volumes cannot be mounted to GPU-accelerated pods.
            alibabacloud.com/compute-class: general-purpose
            alibabacloud.com/compute-qos: default
        spec:
          containers:
          - name: nginx
            image: registry.cn-hangzhou.aliyuncs.com/acs-sample/nginx:latest
            ports:
            - containerPort: 80
            volumeMounts:
              - name: pvc-cpfs
                mountPath: /data
          volumes:
            - name: pvc-cpfs
              persistentVolumeClaim:
                claimName: cpfs-test
  2. Create a Deployment and mount the CPFS volume.

    kubectl create -f cpfs-test.yaml
  3. View the status of the pods created by the Deployment.

    kubectl get pod | grep cpfs-test

    If output similar to the following is returned, the pods are created and running.

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  4. View the mount path.

    Run the following command to list the data in the mount path of the CPFS for LINGJUN file system. By default, the file system is empty, so no data is returned.

    kubectl exec cpfs-test-****-***a -- ls /data
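
    If the pods instead remain in the ContainerCreating state, the volume mount most likely failed. Describing a pod shows the volume-related events, which usually reveal an incorrect mount target address or file system ID. This is a general Kubernetes troubleshooting step, not specific to CPFS:

    kubectl describe pod cpfs-test-****-***a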

Verify shared storage and persistent storage

The CPFS for LINGJUN file system is mounted to the pods created by the Deployment. You can use the following methods to verify shared storage and persistent storage:

  • Create a file in one pod and view the file from the other pod to verify shared storage.

  • Recreate the Deployment and then check from a newly created pod whether the data stored in the file system still exists, to verify persistent storage.

  1. View the pod information.

    kubectl get pod | grep cpfs-test

    Expected output:

    cpfs-test-****-***a   1/1     Running   0          45s
    cpfs-test-****-***b   1/1     Running   0          45s
  2. Verify shared storage.

    1. Create a file in the pod.

      The pod cpfs-test-****-***a is used as an example.

      kubectl exec cpfs-test-****-***a -- touch /data/test.txt
    2. View the file from the other pod.

      The pod cpfs-test-****-***b is used as an example.

      kubectl exec cpfs-test-****-***b -- ls /data

      The expected output is as follows. The file test.txt is shared.

      test.txt
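
      As an additional, optional check that is not part of the original procedure, you can write content to the file from one pod and read it from the other pod to confirm that the file content, not only the file name, is shared:

      kubectl exec cpfs-test-****-***a -- sh -c 'echo hello > /data/test.txt'
      kubectl exec cpfs-test-****-***b -- cat /data/test.txt

      If the second command prints hello, the two pods share the same CPFS directory.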
  3. Verify persistent storage.

    1. Recreate the Deployment.

      kubectl rollout restart deploy cpfs-test
    2. Wait until the pods are recreated.

      kubectl get pod | grep cpfs-test

      Expected output:

      cpfs-test-****-***c   1/1     Running   0          78s
      cpfs-test-****-***d   1/1     Running   0          52s
    3. Check from a newly created pod whether the data stored in the file system still exists.

      The pod cpfs-test-****-***c is used as an example.

      kubectl exec cpfs-test-****-***c -- ls /data

      The expected output is as follows. Data stored in the CPFS for LINGJUN file system can be obtained from the mount path of the pod.

      test.txt
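
After you finish the verification, you can delete the test resources if you no longer need them. Deleting the Deployment, PVC, and PV of a statically provisioned volume does not delete the data in the CPFS for LINGJUN file system, because manually created PVs use the Retain reclaim policy by default. The following commands assume the YAML files created earlier in this topic:

    kubectl delete -f cpfs-test.yaml
    kubectl delete -f pv-pvc.yaml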