
Container Service for Kubernetes:Mount a statically provisioned OSS volume

Last Updated:Dec 20, 2024

Object Storage Service (OSS) is a secure, cost-effective, high-capacity, and high-durability cloud storage service provided by Alibaba Cloud. An OSS bucket can be mounted to multiple pods. OSS is suitable for data that is frequently read but does not require high disk IOPS, such as configuration files, video files, and images. You can use RAM Roles for Service Accounts (RRSA) authentication or AccessKey pair authentication to mount statically provisioned OSS volumes to your applications. If you need to regularly rotate AccessKey pairs in clusters that run Kubernetes 1.26 or later, we recommend that you use RRSA authentication. This prevents ossfs from remounting OSS volumes and your business from being restarted when AccessKey pairs are rotated.

Prerequisites

Usage notes

  • We recommend that you do not use OSS buckets across accounts.

  • We recommend that you add health check settings to the YAML file of the pod to which the OSS bucket is mounted to restart the pod and remount the OSS volume when the OSS directory becomes unavailable.

    Note

    When the system updates a Container Service for Kubernetes (ACK) cluster, kubelet is restarted. In this case, ossfs is also restarted, which causes the OSS directory to be unavailable. The preceding issue is fixed in csi-plugin 1.18.8.45 and later and csi-provisioner 1.18.8.45 and later. We recommend that you update csi-plugin and csi-provisioner to 1.18.8.45 or later at the earliest opportunity.
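    The health check recommended above can be sketched as an exec-type liveness probe in the pod template. This sketch assumes the OSS volume is mounted at /data (matching the application example later in this topic); if the mounted directory becomes unavailable, the probe fails and the kubelet restarts the container, which remounts the volume:

    ```yaml
    livenessProbe:
      exec:
        command:
        - sh
        - -c
        - cd /data   # /data is an assumed mount path; the command fails if the OSS directory is unavailable
      initialDelaySeconds: 30
      periodSeconds: 30
    ```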

  • If the securityContext.fsGroup parameter is specified in the application template, kubelet performs chmod or chown operations after the volume is mounted, which increases the time required to mount the volume. For more information about how to speed up the mounting process when the securityContext.fsGroup parameter is specified, see Why does it require a long period of time to mount an OSS volume?

  • When you use ossfs to perform List operations, HTTP requests are sent to OSS to retrieve the metadata of the requested files. If the listed directory contains large numbers of files, such as more than 100,000 files (the actual number depends on the memory of the node), ossfs will occupy large amounts of system memory. As a result, Out of Memory (OOM) errors may occur in pods. To resolve this issue, divide the directory or mount a subdirectory in the OSS bucket.
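    To limit the scope of List operations, you can mount a subdirectory of the bucket instead of the bucket root by using the path parameter of the PV. The following volumeAttributes sketch uses a hypothetical subdirectory named /subdir; replace the bucket name and path with your own values:

    ```yaml
    volumeAttributes:
      bucket: "mybucket"   # replace with the name of your OSS bucket
      path: "/subdir"      # mount only this subdirectory instead of the bucket root
      url: "oss-cn-hangzhou.aliyuncs.com"
    ```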

  • ossfs is applicable to concurrent read scenarios. When you use persistent volume claims (PVCs) and persistent volumes (PVs) to mount OSS volumes in concurrent read scenarios, we recommend that you set the access modes of the PVCs and PVs to ReadOnlyMany. To handle write operations, we recommend that you use the OSS SDKs or ossutil to split reads and writes and set the access mode of the OSS volume to ReadWriteMany. For more information, see Best practice for OSS read/write splitting.

    Important
    • ossfs does not guarantee the consistency of data written by concurrent write operations.

    • When the OSS volume is mounted to a pod, if you log on to the pod or the host of the pod and delete or modify a file in the mounted path, the source file in the OSS bucket is also deleted or modified. To avoid accidentally deleting important data, you can enable version control for the OSS bucket. For more information, see Versioning.

  • Files larger than 10 MB are uploaded to OSS as multiple parts by using multipart upload. If a multipart upload task is interrupted, the parts that were already uploaded remain in the OSS bucket. For more information about how to delete parts that are no longer needed, see Manually delete parts or Configure lifecycle rules to delete parts.

Use RRSA authentication to mount a statically provisioned OSS volume

You can use the RRSA feature to enforce access control on different PVs that are deployed in an ACK cluster. This implements fine-grained API permission control on PVs and reduces security risks. For more information, see Use RRSA to authorize different pods to access different cloud services.

Important

This mount method supports only ACK managed clusters and ACK Serverless clusters that run Kubernetes 1.26 or later and have Container Storage Interface (CSI) 1.30.4 or later installed. If RRSA is enabled for your cluster and a CSI version earlier than 1.30.4 is installed in the cluster, refer to [Product Changes] ossfs version upgrade and mounting process optimization in CSI to grant permissions to the RAM role used by your cluster.

Step 1: Create a RAM role

This step is required if this is the first time you enable RRSA for your cluster. Skip this step if you have previously used RRSA authentication to mount OSS volumes in the cluster.

  1. Log on to the ACK console and enable RRSA. For more information, see Enable RRSA.

  2. Create a RAM role for mounting OSS volumes by using RRSA authentication. The RAM role is assumed by your cluster when it uses RRSA to mount OSS volumes.

    When you create the RAM role, select IdP for the Select Trusted Entity parameter. In the following example, a RAM role named demo-role-for-rrsa is created.

    1. Log on to the RAM console with your Alibaba Cloud account.

    2. In the left-side navigation pane, choose Identities > Roles. On the Roles page, click Create Role.

    3. In the Create Role panel, select IdP for Select Trusted Entity and click Next.

    4. On the Configure Role wizard page, set the following parameters and click OK.

      The following table describes the parameters.

      Parameter

      Description

      RAM Role Name

      Set the value to demo-role-for-rrsa.

      Note

      Enter the description of the RAM role. This parameter is optional.

      IdP Type

      Select OIDC.

      Select IdP

      Select an identity provider (IdP). The IdP is named in the ack-rrsa-<cluster_id> format. <cluster_id> indicates the ID of your cluster.

      Conditions

      • oidc:iss: Use the default value.

      • oidc:aud: Select sts.aliyuncs.com.

      • oidc:sub: Select StringEquals and enter system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs.
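      For reference, the trust policy that results from these conditions roughly takes the following form. The console generates this policy for you; <account_id> and <cluster_id> are placeholders for your Alibaba Cloud account ID and cluster ID:

      ```json
      {
          "Statement": [
              {
                  "Action": "sts:AssumeRole",
                  "Condition": {
                      "StringEquals": {
                          "oidc:aud": "sts.aliyuncs.com",
                          "oidc:sub": "system:serviceaccount:ack-csi-fuse:csi-fuse-ossfs"
                      }
                  },
                  "Effect": "Allow",
                  "Principal": {
                      "Federated": [
                          "acs:ram::<account_id>:oidc-provider/ack-rrsa-<cluster_id>"
                      ]
                  }
              }
          ],
          "Version": "1"
      }
      ```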

Step 2: Grant permissions to the demo-role-for-rrsa role

  1. Create a custom policy to grant OSS access permissions to the RAM role. For more information, see Create custom policies.

    Select the read-only policy or read-write policy based on your business requirements. Replace mybucket with the name of the bucket you created.

    • Policy that provides read-only permissions on OSS


      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Policy that provides read-write permissions on OSS


      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  2. Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you need to grant KMS access permissions to the RAM role. For more information, see Encrypt an OSS volume.

  3. Grant the required permissions to the demo-role-for-rrsa role. For more information, see Grant permissions to a RAM role.

    Note

    If you want to use an existing RAM role that has OSS access permissions, you can modify the trust policy of the RAM role. For more information, see Use an existing RAM role and grant the required permissions to the RAM role.

Step 3: Create a PV and a PVC

  1. Create a PV that uses RRSA authentication.

    1. Create a file named pv-rrsa.yaml. In this file, RRSA authentication is enabled.

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv-oss
        labels:    
          alicloud-pvname: pv-oss
      spec:
        capacity:
          storage: 5Gi
        accessModes:
          - ReadOnlyMany
        persistentVolumeReclaimPolicy: Retain
        csi:
          driver: ossplugin.csi.alibabacloud.com
          volumeHandle: pv-oss # Specify the name of the PV. 
          volumeAttributes:
            bucket: "oss" # Replace the value with the name of the OSS bucket you created. 
            path: /
            url: "oss-cn-hangzhou.aliyuncs.com"
            otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
            authType: "rrsa"
            roleName: "demo-role-for-rrsa"

      Parameter

      Description

      name

      The name of the PV.

      labels

      The labels that are added to the PV.

      storage

      The available storage of the OSS bucket.

      accessModes

      The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

      If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

      persistentVolumeReclaimPolicy

      The reclaim policy of the PV. OSS volumes support only the Retain policy, which indicates that the PV and data in the bucket are not deleted when you delete the PVC.

      driver

      The type of volume driver. In this example, the value is set to ossplugin.csi.alibabacloud.com. This indicates that the OSS CSI plug-in provided by Alibaba Cloud is used.

      volumeHandle

      The name of the PV.

      bucket

      The OSS bucket that you want to mount.

      path

      The path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by CSI 1.14.8.32-c77e277b-aliyun and later.

      If you use an ossfs version earlier than 1.91, you must create the path in the OSS bucket in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

      url

      The endpoint of the OSS bucket you want to mount. You can retrieve the endpoint from the Overview page of the bucket in the OSS console.

      • If the bucket is mounted to a node in the same region as the bucket or the bucket can be connected to the node through a virtual private cloud (VPC), use the internal endpoint of the bucket.

      • If the bucket is mounted to a node in a different region, use the public endpoint of the bucket.

      Public endpoints and internal endpoints have different formats:

      • Format of internal endpoints: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com.

      • Format of public endpoints: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

      Important

      The vpc100-oss-{{regionName}}.aliyuncs.com format for internal endpoints is deprecated.

      otherOpts

      You can configure custom parameters in the -o *** -o *** format for the OSS volume. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

      umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using the OSS SDK or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

      max_stat_cache_size: The maximum number of files whose metadata can be cached in the metadata cache. Metadata caching can accelerate List operations. However, if you modify files by using methods other than ossfs, such as the OSS console, OSS SDK, or ossutil, the metadata of the files may not be updated in real time.

      allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

      For more information, see Options supported by ossfs.

      authType

      The authentication method. Set this parameter to rrsa, which indicates that RRSA authentication is used.

      roleName

      Set the value to the name of the RAM role you created or modified in Step 1: Create a RAM role. If you need to configure different permissions for different PVs, you can create multiple RAM roles and specify the RAM roles in the roleName parameter.

      Note

      For more information about how to use specified Alibaba Cloud Resource Names (ARNs) or ServiceAccounts in RRSA authentication, see How do I use the specified ARNs or ServiceAccount in RRSA authentication?

    2. Run the following command to create a PV that uses RRSA authentication:

      kubectl create -f pv-rrsa.yaml
  2. Run the following command to create a PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml file is used to create the PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    Parameter

    Description

    name

    The name of the PVC.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    storage

    The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    alicloud-pvname

    The labels that are used to select and bind a PV to the PVC. The labels must be the same as those of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.

Step 4: Create an application

Create an application named oss-static and mount the PV to the application.

Run the following command to create a file named oss-static.yaml:

kubectl create -f oss-static.yaml

The following oss-static.yaml file is used to create the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
  • livenessProbe: Configure health checks. For more information, see OSS volumes.

  • mountPath: the path where the OSS bucket is mounted in the container.

  • claimName: the name of the PVC that the application uses.

Use AccessKey pair authentication to mount a statically provisioned OSS volume

Important
  • If the AccessKey pair referenced by the OSS volume is revoked or the required permissions are revoked, the application to which the volume is mounted fails to access OSS and a permission error is reported. To resolve the issue, you need to modify the Secret that stores the AccessKey pair and mount the OSS volume to the application again. In this case, the application is restarted. For more information about how to remount an OSS volume by using ossfs after the AccessKey pair is revoked, see the solution for Scenario 4 in How do I manage the permissions related to OSS volume mounting?

  • If you need to regularly rotate AccessKey pairs, we recommend that you use RRSA authentication.

Step 1: Create a RAM user that has OSS access permissions and obtain the AccessKey pair of the RAM user

Create a RAM user and obtain its AccessKey pair. Then, grant the RAM user the permissions to access the OSS bucket you created.

  1. Create a RAM user. You can skip this step if you have an existing RAM user. For more information about how to create a RAM user, see Create a RAM user.

  2. Create a custom policy to grant OSS access permissions to the RAM user. For more information, see Create custom policies.

    Select the read-only policy or read-write policy based on your business requirements. Replace mybucket with the name of the bucket you created.

    • Policy that provides read-only permissions on OSS


      {
          "Statement": [
              {
                  "Action": [
                      "oss:Get*",
                      "oss:List*"
                  ],
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
    • Policy that provides read-write permissions on OSS


      {
          "Statement": [
              {
                  "Action": "oss:*",
                  "Effect": "Allow",
                  "Resource": [
                      "acs:oss:*:*:mybucket",
                      "acs:oss:*:*:mybucket/*"
                  ]
              }
          ],
          "Version": "1"
      }
  3. Optional. If the objects in the OSS bucket are encrypted by using a specified customer master key (CMK) in Key Management Service (KMS), you need to grant KMS access permissions to the RAM user. For more information, see Encrypt an OSS volume.

  4. Grant OSS access permissions to the RAM user. For more information, see Grant permissions to a RAM user.

  5. Create an AccessKey pair for the RAM user. For more information, see Create an AccessKey pair.

Step 2: Create a PV and a PVC and use the PV and PVC to mount the OSS bucket to an application

You can create a PV and a PVC, and then use the PV and PVC to mount the OSS bucket to an application in the ACK console or by using kubectl.

Console

1. Create a PV

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volumes.

  3. In the upper-right corner of the Persistent Volumes page, click Create.

  4. In the Create PV dialog box, configure the parameters that are described in the following table.

    Parameter

    Description

    Example

    PV Type

    Select OSS.

    OSS

    Volume Name

    The name of the PV. The name must be unique in the cluster.

    pv-oss

    Capacity

    The capacity of the PV that you created.

    20GiB

    Access Mode

    The access mode of the PV. Valid values: ReadOnlyMany and ReadWriteMany. Default value: ReadOnlyMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    ReadOnlyMany

    Access Certificate

    The Secret that is used to access the OSS bucket. In this example, the AccessKey pair that you obtained in Step 1 is used. Valid values:

    • Select Existing Secret: If you select this option, you must also configure the Namespace and Secret parameters.

    • Create Secret: If you select this option, you must also configure the Namespace, Name, AccessKey ID, and AccessKey Secret parameters.

    Select Existing Secret

    Optional Parameters

    You can configure custom parameters in the -o *** -o *** format for the OSS volume. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

    umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using the OSS SDK or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

    max_stat_cache_size: The maximum number of files whose metadata can be cached in the metadata cache. Metadata caching can accelerate List operations. However, if you modify files by using methods other than ossfs, such as the OSS console, OSS SDK, or ossutil, the metadata of the files may not be updated in real time.

    allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

    For more information, see Options supported by ossfs.

    -o umask=022 -o max_stat_cache_size=0 -o allow_other

    Bucket ID

    The name of the OSS bucket that you want to mount. Click Select Bucket. In the dialog box that appears, select the OSS bucket that you want to mount and click Select.

    Note

    The buckets that are retrieved by using the AccessKey pair you specified are displayed.

    Select Bucket

    OSS Path

    The path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by CSI 1.14.8.32-c77e277b-aliyun and later.

    If you use an ossfs version earlier than 1.91, you must create the path in the OSS bucket in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

    /

    Endpoint

    The endpoint of the OSS bucket that you want to mount.

    • If the OSS bucket and ECS instance reside in different regions, select Public Endpoint.

    • If the OSS bucket and ECS instance reside in the same region, select Internal Endpoint.

      Note

      By default, HTTP is used when you access the OSS bucket over an internal network. If you want to use HTTPS, use kubectl to create a statically provisioned PV.

    Public Endpoint

    Label

    The labels that you want to add to the PV.

    pv-oss

  5. After you configure the parameters, click Create.
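As noted in the Endpoint row above, HTTPS over the internal network requires a PV that is created by using kubectl. The following volumeAttributes sketch shows the relevant fields; the cn-hangzhou region and bucket name are placeholders:

```yaml
volumeAttributes:
  bucket: "mybucket"  # replace with the name of your OSS bucket
  url: "https://oss-cn-hangzhou-internal.aliyuncs.com"  # HTTPS internal endpoint of the bucket's region
```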

2. Create a PVC

  1. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volume Claims.

  2. In the upper-right corner of the Persistent Volume Claims page, click Create.

  3. In the Create PVC dialog box, configure the parameters that are described in the following table.

    Parameter

    Description

    Example

    PVC Type

    Select OSS.

    OSS

    Name

    The name of the PVC. The name must be unique in the cluster.

    pvc-oss

    Allocation Mode

    The allocation mode of the PVC. In this example, Existing Volumes is selected.

    Note

    If no PV is created, you can set Allocation Mode to Create Volume and configure the required parameters to create a PV.

    Existing Volumes

    Existing Volumes

    Click Select PV. Find the PV that you want to use and click Select in the Actions column.

    pv-oss

    Capacity

    The capacity of the PV that you created.

    Note

    The capacity of the PV cannot exceed the capacity of the OSS bucket that is associated with the PV.

    20GiB

  4. Click Create.

    After the PVC is created, you can find the PVC named pvc-oss in the PVC list. A PV is bound to the PVC.

3. Create an application

  1. On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > Deployments.

  2. In the upper-right corner of the Deployments page, click Create from Image.

  3. Configure the parameters of the application. Then, click Create.

    The following table describes the key parameters. Use default settings for other parameters. For more information, see Create a stateless application by using a Deployment.

    Wizard page

    Parameter

    Description

    Example

    Basic Information

    Name

    Enter a custom name for the Deployment. The name must meet the format requirements displayed in the console.

    test-oss

    Replicas

    The number of pod replicas provisioned by the Deployment.

    2

    Container

    Image Name

    The address of the image used to deploy the application.

    anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6

    Required Resources

    Specify the number of vCores and the amount of memory required by the application.

    0.25 vCores and 0.5 GiB of memory

    Volume

    Click Add PVC and configure the following parameters:

    • Mount Source: Select the PVC you created.

    • Container Path: Specify the container path to which you want to mount the OSS bucket.

    • Mount Source: pvc-oss.

    • Container Path: /data.


  4. Check the deployment progress of the application:

    1. On the Deployments page, click the name of the application you created.

    2. On the Pods tab, check whether the pod is in the Running state.

kubectl

1. Create a PV and a PVC

You can use one of the following methods to create a PV and a PVC:

  • Method 1: Create a PV and a PVC by using a Secret

    Use a Secret to provide your AccessKey pair to the CSI component.

    Important
    • If the AccessKey pair referenced by the OSS volume is revoked or the required permissions are revoked, the application to which the volume is mounted fails to access OSS and a permission error is reported. To resolve the issue, you need to modify the Secret that stores the AccessKey pair and mount the OSS volume to the application again. In this case, the application is restarted. For more information about how to remount an OSS volume by using ossfs after the AccessKey pair is revoked, see the solution for Scenario 4 in How do I manage the permissions related to OSS volume mounting?

    • If you need to regularly rotate AccessKey pairs, we recommend that you use RRSA authentication.

  • Method 2: Specify your AccessKey pair in the PV configuration

    Important
    • If the AccessKey pair referenced by the OSS volume is revoked or the required permissions are revoked, the application to which the volume is mounted fails to access OSS and a permission error is reported. To resolve this issue, you need to recreate the specified AccessKey and redeploy the application.

    • If you need to regularly rotate AccessKey pairs, we recommend that you use RRSA authentication.

Use a Secret

  1. Create a Secret.

    The following YAML template provides an example on how to specify your AccessKey pair in a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: oss-secret
      namespace: default
    stringData:
      akId: <your AccessKey ID>
      akSecret: <your AccessKey secret>
    Note

    The Secret must be created in the namespace in which the application that uses the PV is deployed.

    Replace the values of akId and akSecret with the AccessKey ID and AccessKey secret that you obtained in Step 1.

  2. Run the following command to create a statically provisioned PV:

    kubectl create -f pv-oss.yaml

    The following pv-oss.yaml file is used to create the statically provisioned PV:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # Specify the name of the PV. 
        nodePublishSecretRef:
          name: oss-secret
          namespace: default
        volumeAttributes:
          bucket: "oss" # Replace the value with the name of the OSS bucket you want to mount. 
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
          path: "/"

    Parameter

    Description

    name

    The name of the PV.

    labels

    The labels that are added to the PV.

    storage

    The available storage of the OSS bucket.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    persistentVolumeReclaimPolicy

    The reclaim policy of the PV. OSS volumes support only the Retain policy, which indicates that the PV and data in the bucket are not deleted when you delete the PVC.

    driver

    The type of volume driver. In this example, the value is set to ossplugin.csi.alibabacloud.com. This indicates that the OSS CSI plug-in provided by Alibaba Cloud is used.

    nodePublishSecretRef

    The Secret from which the system retrieves the AccessKey pair that is used to mount the PV.

    volumeHandle

    The name of the PV.

    bucket

    The OSS bucket that you want to mount.

    url

    The endpoint of the OSS bucket you want to mount. You can retrieve the endpoint from the Overview page of the bucket in the OSS console.

    • If the bucket is mounted to a node in the same region as the bucket or the bucket can be connected to the node through a virtual private cloud (VPC), use the internal endpoint of the bucket.

    • If the bucket is mounted to a node in a different region, use the public endpoint of the bucket.

    Public endpoints and internal endpoints have different formats:

    • Format of internal endpoints: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com.

    • Format of public endpoints: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

    Important

    The vpc100-oss-{{regionName}}.aliyuncs.com format for internal endpoints is deprecated.

    otherOpts

    You can configure custom parameters in the -o *** -o *** format for the OSS volume. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

    umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using the OSS SDK or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

    max_stat_cache_size: The maximum number of files whose metadata can be cached in the metadata cache. Metadata caching can accelerate List operations. However, if you modify files by using methods other than ossfs, such as the OSS console, OSS SDK, or ossutil, the metadata of the files may not be updated in real time.

    allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

    For more information, see Options supported by ossfs.

    path

    The path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by CSI 1.14.8.32-c77e277b-aliyun and later.

    If you use an ossfs version earlier than 1.91, you must create the path in the OSS bucket in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

    1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

    2. On the Clusters page, click the name of the cluster that you want to manage and choose Volumes > Persistent Volumes in the left-side navigation pane.

      On the Persistent Volumes page, you can find the PV you created.

  3. Run the following command to create a PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml file is used to create the PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    Parameter

    Description

    name

    The name of the PVC.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    storage

    The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    alicloud-pvname

    The labels that are used to select and bind a PV to the PVC. The labels must be the same as those of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.

Specify an AccessKey pair in the PV configuration

  1. Run the following command to specify your AccessKey pair in the PV configuration:

    kubectl create -f pv-accesskey.yaml

    The following pv-accesskey.yaml sample file shows how to specify an AccessKey pair in a PV configuration file:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-oss
      labels:
        alicloud-pvname: pv-oss
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadOnlyMany
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: ossplugin.csi.alibabacloud.com
        volumeHandle: pv-oss # Specify the name of the PV. 
        volumeAttributes:
          bucket: "oss" # Replace the value with the name of the OSS bucket you want to mount. 
          url: "oss-cn-hangzhou.aliyuncs.com"
          otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
          akId: "***"
          akSecret: "***"

    Parameter

    Description

    name

    The name of the PV.

    labels

    The labels that are added to the PV.

    storage

    The available storage of the OSS bucket.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    persistentVolumeReclaimPolicy

    The reclaim policy of the PV. OSS volumes support only the Retain policy, which indicates that the PV and data in the bucket are not deleted when you delete the PVC.

    driver

    The type of volume driver. In this example, the value is set to ossplugin.csi.alibabacloud.com. This indicates that the OSS CSI plug-in provided by Alibaba Cloud is used.

    volumeHandle

    The name of the PV.

    bucket

    The OSS bucket that you want to mount.

    url

    The endpoint of the OSS bucket you want to mount. You can retrieve the endpoint from the Overview page of the bucket in the OSS console.

    • If the bucket is mounted to a node in the same region as the bucket or the bucket can be connected to the node through a virtual private cloud (VPC), use the internal endpoint of the bucket.

    • If the bucket is mounted to a node in a different region, use the public endpoint of the bucket.

    Public endpoints and internal endpoints have different formats:

    • Format of internal endpoints: http://oss-{{regionName}}-internal.aliyuncs.com or https://oss-{{regionName}}-internal.aliyuncs.com.

    • Format of public endpoints: http://oss-{{regionName}}.aliyuncs.com or https://oss-{{regionName}}.aliyuncs.com.

    Important

    The vpc100-oss-{{regionName}}.aliyuncs.com format for internal endpoints is deprecated.
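    As a quick illustration, both endpoint strings can be assembled from the region name. cn-hangzhou below is only an example region; substitute the region of your bucket.

    ```shell
    region="cn-hangzhou"   # example region; replace with your bucket's region
    echo "https://oss-${region}-internal.aliyuncs.com"   # internal (VPC) endpoint
    echo "https://oss-${region}.aliyuncs.com"            # public endpoint
    ```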

    otherOpts

    You can configure custom parameters in the -o *** -o *** format for the OSS volume. Example: -o umask=022 -o max_stat_cache_size=0 -o allow_other.

    umask: modifies the permission mask of files in ossfs. For example, if you specify umask=022, the permission mask of files in ossfs changes to 755. By default, the permission mask of files uploaded by using the OSS SDK or the OSS console is 640 in ossfs. Therefore, we recommend that you specify the umask parameter if you want to split reads and writes.

    max_stat_cache_size: The maximum number of files whose metadata can be cached in the metadata cache. Metadata caching can accelerate List operations. However, if you modify files by using methods other than ossfs, such as the OSS console, OSS SDK, or ossutil, the metadata of the files may not be updated in real time.

    allow_other: allows other users to access the mounted directory. However, these users cannot access the files in the directory.

    For more information, see Options supported by ossfs.

    path

    The path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by CSI 1.14.8.32-c77e277b-aliyun and later.

    If you use an ossfs version earlier than 1.91, you must create the path in the OSS bucket in advance. For more information, see Features of ossfs 1.91 and later and ossfs performance benchmarking.

    akId

    The AccessKey ID that you obtained in the previous step.

    akSecret

    The AccessKey secret that you obtained in the previous step.

  2. Run the following command to create a PVC:

    kubectl create -f pvc-oss.yaml

    The following pvc-oss.yaml file is used to create the PVC:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-oss
    spec:
      accessModes:
        - ReadOnlyMany
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          alicloud-pvname: pv-oss

    Parameter

    Description

    name

    The name of the PVC.

    accessModes

    The access mode. Valid values: ReadOnlyMany and ReadWriteMany.

    If you select ReadOnlyMany, ossfs mounts the OSS bucket in read-only mode.

    storage

    The capacity claimed by the PVC. The claimed capacity cannot exceed the capacity of the PV that is bound to the PVC.

    alicloud-pvname

    The labels that are used to select and bind a PV to the PVC. The labels must be the same as those of the PV to be bound to the PVC.

    In the left-side navigation pane, choose Volumes > Persistent Volume Claims. On the Persistent Volume Claims page, you can find the created PVC.

2. Create an application.

Create an application named oss-static and mount the PV to the application.

Create a file named oss-static.yaml, then run the following command to create the application:

kubectl create -f oss-static.yaml

The following oss-static.yaml file is used to create the application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
  • livenessProbe: Configure health checks. For more information, see OSS volumes.

  • mountPath: the path where the OSS bucket is mounted in the container.

  • claimName: the name of the PVC that the application uses to mount the OSS volume.
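The liveness probe in the template runs `cd /data` inside the container; a non-zero exit code marks the check as failed, and kubelet restarts the container after repeated failures. The logic can be sketched locally, using /tmp as a stand-in for the mount path:

```shell
mount_path=/tmp   # stands in for /data; on a node this would be the ossfs mount
if sh -c "cd ${mount_path}"; then
  echo healthy      # the directory is accessible, so the probe passes
else
  echo unhealthy    # the mount is broken, so the probe fails
fi
```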

Check whether the OSS volume can persist and share data

  1. Run the following command to query the pods provisioned by the oss-static application:

    kubectl get pod

    Expected output:

    NAME                             READY   STATUS    RESTARTS   AGE
    oss-static-66fbb85b67-d****      1/1     Running   0          1h
    oss-static-66fbb85b67-l****      1/1     Running   0          1h
  2. Select a pod and create a file named tmpfile in the pod. In this example, the oss-static-66fbb85b67-d**** pod is used.

    • If the OSS volume is mounted in ReadWriteMany mode, run the following command to create a file named tmpfile in the /data path:

      kubectl exec oss-static-66fbb85b67-d**** -- touch /data/tmpfile
    • If the OSS volume is mounted in ReadOnlyMany mode, use the OSS console or the ossutil cp command (upload objects) to upload a file named tmpfile to the corresponding path in the OSS bucket.

  3. Log on to each pod and access the file in the mount path.

    In this example, the mount path in the oss-static-66fbb85b67-d**** pod is /data and the mount path in the oss-static-66fbb85b67-l**** pod is /data1. Run the following commands:

    kubectl exec oss-static-66fbb85b67-d**** -- ls /data | grep tmpfile
    kubectl exec oss-static-66fbb85b67-l**** -- ls /data1 | grep tmpfile

    Expected output:

    tmpfile

    The output indicates that the file can be accessed from both pods. This means that the pods share the data stored in the OSS volume.

    Note

    If no output is returned, check whether the version of the CSI plug-in is 1.20.7 or later. For more information, see csi-plugin.

  4. Run the following command to delete a pod. The Deployment then automatically recreates a pod.

    kubectl delete pod oss-static-66fbb85b67-d****

    Expected output:

    pod "oss-static-66fbb85b67-d****" deleted
  5. Verify that the file still exists in the OSS bucket after the pod is deleted.

    1. Run the following command to query the pod that is recreated:

      kubectl get pod

      Expected output:

      NAME                             READY   STATUS    RESTARTS   AGE
      oss-static-66fbb85b67-l****      1/1     Running   0          1h
      oss-static-66fbb85b67-z****      1/1     Running   0          40s
    2. Run the following command to query files in the /data path of the pod:

      kubectl exec oss-static-66fbb85b67-z**** -- ls /data | grep tmpfile

      Expected output:

      tmpfile

      The output indicates that the tmpfile file still exists. This means that the OSS volume can persist data.

References