Disks are suitable for applications that do not require data sharing but require high IOPS and low latency. You can mount existing disks to pods as statically provisioned disk volumes to meet persistent storage requirements. This topic describes how to use a statically provisioned disk volume and verify that the volume can be used to persist data.
Scenarios
Disks are suitable for the following scenarios:
You want to create applications that require high disk I/O throughput and do not require data sharing. Examples include storage services such as MySQL and Redis.
You want to write logs at high speeds.
You want to persist data in a way that is independent of the pod lifecycle.
You can mount existing disks to pods as statically provisioned disk volumes. In this mode, you must manually create a persistent volume (PV) and a persistent volume claim (PVC). This ensures that the PV is ready before the container is started. For more information, see Disk volumes.
Prerequisites
The Container Storage Interface (CSI) plug-in is installed in the cluster.
In the left-side navigation pane of the cluster management page, open the add-on management page. On the Storage tab, you can check whether csi-plugin and csi-provisioner are installed. For more information about how to update the CSI plug-ins to use specific capabilities, see Update csi-plugin and csi-provisioner. If your cluster uses FlexVolume, you must migrate the cluster to the CSI plug-in because FlexVolume is no longer available. For more information, see Upgrade from FlexVolume to CSI.
The disk that you want to mount meets the following requirements:
The billing method of the disk is pay-as-you-go, and the disk is not attached to any instance.
The disk is in the same zone as the Elastic Compute Service (ECS) instance, and the disk type is compatible with the ECS instance type.
Disks cannot be mounted across zones, and some disk categories cannot be attached to some ECS instance types. Make sure that the zone and the instance type of the ECS instance match the existing disk. Otherwise, the disk cannot be mounted. For more information about the matching rules between disk categories and ECS instance types, see Overview of instance families.
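If the CSI plug-in is installed, a quick optional check from inside the cluster is to list the nodes that report support for a given disk category. This assumes that your nodes carry the disk type label shown below, which is the same key that the PV annotation used later in this topic matches on; cloud_essd is only an example category.
kubectl get nodes -l node.csi.alibabacloud.com/disktype.cloud_essd=available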
Usage notes
Disks cannot be shared. If multi-attach is not enabled for disks, each disk can be mounted to only one pod. For information about the multi-attach feature, see Use the multi-attach and NVMe reservation features of NVMe disks.
You can mount a disk only to a pod that resides in the same zone as the disk.
When a pod is rebuilt, the original disk is mounted to the new pod. If the pod cannot be scheduled to the original zone, for example, because node resources in that zone are insufficient, the pod remains in the Pending state because the disk cannot be attached across zones.
We recommend that you mount disks to pods or StatefulSets instead of Deployments.
If multi-attach is disabled, a disk can be mounted to only one pod. If you want to mount a disk to a Deployment, you must set the number of pod replicas to one. If the Deployment runs multiple pods, you cannot mount a separate disk volume to each pod, and you cannot control the order in which the volume is mounted to and unmounted from the pods. In addition, when a pod in a Deployment is recreated, the default rolling update policy starts the new pod before the old pod is deleted, so the disk may still be attached to the old pod and fail to be mounted to the new pod. We recommend that you do not mount disks to Deployments. If you must do so, see the sketch that follows.
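The following minimal sketch shows one way to reduce the risk described above when a Deployment is unavoidable: it assumes a single replica and uses the Recreate update strategy so that the old pod is deleted, and the disk is detached, before the new pod starts. The Deployment name is a hypothetical example, and this is a general Kubernetes pattern rather than a setting required by ACK.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: disk-deploy-example        # hypothetical name for illustration
spec:
  replicas: 1                      # a disk without multi-attach can serve only one pod
  strategy:
    type: Recreate                 # delete the old pod first so that the disk can be detached and reattached
  selector:
    matchLabels:
      app: disk-deploy-example
  template:
    metadata:
      labels:
        app: disk-deploy-example
    spec:
      containers:
        - name: nginx
          image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
          volumeMounts:
            - name: pvc-disk
              mountPath: /data
      volumes:
        - name: pvc-disk
          persistentVolumeClaim:
            claimName: disk-pvc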
If the application configuration includes the securityContext.fsGroup parameter when you use a disk volume, kubelet automatically runs the chmod and chown commands after the volume is mounted, which may slow down the volume mount process. After you add the securityContext.fsGroup parameter to the application configuration, Container Service for Kubernetes (ACK) automatically modifies the ownership of files in the volume when the disk is mounted to the application. The time required for ownership modification depends on the number of files in the volume. If the volume contains a large number of files, the modification process requires a long period of time. For clusters that run Kubernetes 1.20 or later, you can set the fsGroupChangePolicy parameter in the pod configuration to OnRootMismatch. This way, file ownership is modified only when the pod is started for the first time. Subsequent pod updates or recreations do not involve ownership modification during the volume mount process. If the preceding settings do not meet your business requirements, we recommend that you create init containers and grant the init containers the permissions to perform the relevant operations.
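The following is a minimal sketch of the settings described above, applied to a pod that mounts the disk-pvc volume. The pod name and the fsGroup value 1000 are hypothetical examples; fsGroupChangePolicy: OnRootMismatch requires Kubernetes 1.20 or later.
apiVersion: v1
kind: Pod
metadata:
  name: disk-fsgroup-example       # hypothetical name for illustration
spec:
  securityContext:
    fsGroup: 1000                  # example GID; files in the volume are made accessible to this group
    fsGroupChangePolicy: "OnRootMismatch"   # change ownership only if the volume root does not already match
  containers:
    - name: nginx
      image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
      volumeMounts:
        - name: pvc-disk
          mountPath: /data
  volumes:
    - name: pvc-disk
      persistentVolumeClaim:
        claimName: disk-pvc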
Mount a statically provisioned disk volume by using kubectl
Step 1: Create a PV
Connect to the cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster or Manage Kubernetes clusters via kubectl in Cloud Shell.
Modify the following YAML file and save the file as disk-pv.yaml:
Replace the following parameters in the YAML file:
<YOUR-DISK-ID>: The ID of the existing disk. Example: d-uf628m33r5rsbi******.
<YOUR-DISK-SIZE>: The size of the existing disk. Example: 20Gi.
<YOUR-DISK-ZONE-ID>: The ID of the zone where the existing disk is located. Example: cn-shanghai-f.
<YOUR-DISK-CATEGORY>: The category of the existing disk. Example: cloud_essd. The following values are supported:
cloud_essd_entry: Enterprise SSD (ESSD) Entry disk.
cloud_auto: ESSD AutoPL disk.
cloud_essd: ESSD.
cloud_ssd: standard SSD.
cloud_efficiency: ultra disk.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "<YOUR-DISK-ID>"
  annotations:
    csi.alibabacloud.com/volume-topology: '{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node.csi.alibabacloud.com/disktype.<YOUR-DISK-CATEGORY>","operator":"In","values":["available"]}]}]}'
spec:
  capacity:
    storage: "<YOUR-DISK-SIZE>"
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    namespace: default
    name: disk-pvc
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: diskplugin.csi.alibabacloud.com
    volumeHandle: "<YOUR-DISK-ID>"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.diskplugin.csi.alibabacloud.com/zone
              operator: In
              values:
                - "<YOUR-DISK-ZONE-ID>"
  storageClassName: alicloud-disk-topology-alltype
  volumeMode: Filesystem
The parameters in the YAML file are described as follows:
csi.alibabacloud.com/volume-topology: An annotation that configures the additional node constraints required to mount the disk. To ensure that the pod can be scheduled to an ECS node that supports the disk, we recommend that you specify the disk category.
claimRef: Specifies the PVC to which the PV is bound. If you want to allow the PV to be bound to any PVC, delete this parameter.
accessModes: The access mode of the volume. You must select ReadWriteOnce, which specifies that the volume can be mounted to only one pod in read-write mode.
persistentVolumeReclaimPolicy: The reclaim policy of the PV. Delete: When the PVC is deleted, the related PV and disk are also deleted. Retain: When the PVC is deleted, the related PV and disk are retained and can only be deleted manually.
driver: In this example, this parameter is set to diskplugin.csi.alibabacloud.com, which specifies that the Alibaba Cloud CSI plug-in is used.
nodeAffinity: The node affinity configuration. Disks cannot be mounted across zones. This parameter ensures that pods that use the PV are scheduled to ECS nodes in the zone where the disk is located.
storageClassName: This parameter does not take effect for statically provisioned volumes, and you do not need to create a StorageClass in advance. However, make sure that the value of this parameter is the same in the PV and the PVC.
Create a PV.
kubectl create -f disk-pv.yaml
Check the PV.
kubectl get pv
Expected output:
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
d-uf628m33r5rsbi******   20Gi       RWO            Retain           Available   default/disk-pvc   disk           <unset>                          1m36s
Step 2: Create a PVC
Create a file named disk-pvc.yaml and copy the following content to the file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: disk-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "<YOUR-DISK-SIZE>"
  storageClassName: alicloud-disk-topology-alltype
  volumeName: "<YOUR-DISK-ID>"
The parameters in the YAML file are described as follows:
accessModes: The access mode of the PVC. You must select ReadWriteOnce, which specifies that the volume can be mounted to only one pod in read-write mode.
storage: The storage capacity allocated to the pod. The value cannot exceed the capacity of the disk.
storageClassName: This parameter does not take effect for statically provisioned volumes, and you do not need to create a StorageClass in advance. However, make sure that the value of this parameter is the same in the PV and the PVC.
volumeName: Specifies the PV to which the PVC is bound. If you want to allow the PVC to be bound to any PV, delete this parameter.
Create a PVC.
kubectl create -f disk-pvc.yaml
Check the PVC.
kubectl get pvc
The following output shows that a PV is bound to the PVC.
NAME       STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
disk-pvc   Bound    d-uf628m33r5rsbi******   20Gi       RWO            disk           <unset>                 64s
Step 3: Create an application and mount a disk to the application
Create a file named disk-test.yaml and copy the following content to the file.
The following code block specifies the configuration of a StatefulSet that provisions one pod. The pod requests storage resources by using the disk-pvc PVC, which is mounted to the /data path of the pod.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: disk-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
          ports:
            - containerPort: 80
          volumeMounts:
            - name: pvc-disk
              mountPath: /data
      volumes:
        - name: pvc-disk
          persistentVolumeClaim:
            claimName: disk-pvc
Create a StatefulSet and mount a disk to the StatefulSet.
kubectl create -f disk-test.yaml
Check whether the pod provisioned by the StatefulSet is deployed.
kubectl get pod -l app=nginx
The following output shows that one pod is deployed for the StatefulSet.
NAME          READY   STATUS    RESTARTS   AGE
disk-test-0   1/1     Running   0          14s
View files in the mount path to check whether the disk is mounted.
kubectl exec disk-test-0 -- df -h /data
Expected output:
Filesystem      Size  Used Avail Use% Mounted on
/dev/vdb         20G   24K   20G   1% /data
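If the pod instead stays in the Pending or ContainerCreating state, the events of the pod and the PVC usually reveal the cause, such as a zone mismatch or a disk that is still attached elsewhere. The following commands are a general troubleshooting sketch and are not specific to this example.
kubectl describe pod disk-test-0    # check the Events section for scheduling or attach errors
kubectl describe pvc disk-pvc       # check whether the PVC is bound to the expected PV
kubectl get pv "<YOUR-DISK-ID>"     # confirm the status and reclaim policy of the PV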
Mount a statically provisioned disk volume in the ACK console
Step 1: Create a PV
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, go to the Persistent Volumes page.
On the Persistent Volumes page, click Create.
In the Create dialog box, configure the parameters and click Create.
PV Type: Select Cloud Disk. Example: Cloud Disk.
Access Mode: Only ReadWriteOnce is supported. Example: ReadWriteOnce.
Disk ID: Click Select Disk and select a disk that is in the same region as the node. Example: d-uf628m33r5rsbi******.
File System Type: Select the file system of the disk. Valid values: ext4, ext3, xfs, and vfat. Default value: ext4. Example: ext4.
After you create the PV, you can view the PV on the Persistent Volumes page.
Step 2: Create a PVC
In the left-side navigation pane of the details page, go to the Persistent Volume Claims page.
In the upper-right corner of the Persistent Volume Claims page, click Create.
In the Create dialog box, configure the parameters and click Create.
PVC Type: Select Cloud Disk. Example: Cloud Disk.
Name: Enter a custom name for the PVC. The name must follow the format requirements displayed on the UI. Example: disk-pvc.
Allocation Mode: Select Existing Volumes. Example: Existing Volumes.
Existing Storage Class: Select the volume that you created in Step 1. Example: d-uf690053kttkprgx****, 20GiB.
Capacity: The storage capacity allocated to the pod. The value cannot exceed the capacity of the disk. Example: 20Gi.
After you create the PVC, you can view the PVC on the Persistent Volume Claims page. The PV you created is bound to the PVC.
Step 3: Create an application and mount a disk to the application
In the left-side navigation pane of the details page, go to the StatefulSets page.
In the upper-right corner of the StatefulSets page, click Create from Image.
Configure the parameters of the StatefulSet and click Create.
The following table describes some of the parameters. Configure other parameters based on your business requirements. For more information, see Use a StatefulSet to create a stateful application.
Basic Information
  Name: Enter a custom name for the StatefulSet. The name must follow the format requirements displayed on the UI. Example: disk-test.
  Replicas: The number of pod replicas provisioned by the StatefulSet. Example: 1.
Container
  Image Name: The address of the image used to deploy the application. Example: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6.
  Required Resources: Specify the number of vCores, the amount of memory, and the amount of ephemeral storage required by the application. Example: CPU: 0.25 vCore, Memory: 512 MiB, Ephemeral-Storage: skip this parameter.
  Volume: Click Add PVC and configure the parameters. Mount Source: Select the PVC that you created in Step 2. Container Path: Specify the container path to which you want to mount the disk. Example: Mount Source: disk-pvc, Container Path: /data.
Check whether the application is deployed.
On the StatefulSets page, click the name of the application that you created.
On the Pods tab, check whether the pod is in the Running state.
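If you also have kubectl access to the cluster, you can run the same check that is used in the kubectl procedure to confirm that the disk is mounted to the /data path. The pod name disk-test-0 assumes that you named the StatefulSet disk-test as in the example above.
kubectl exec disk-test-0 -- df -h /data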
Verify data persistence on the disk by using kubectl
The StatefulSet created in the preceding example provisions one pod, and a disk is mounted to the pod. If you delete the pod, the system automatically recreates the pod. The original disk is mounted to the new pod and data is retained on the disk. To test whether data is persisted to the disk, perform the following steps:
View the files in the mount path of the disk.
kubectl exec disk-test-0 -- ls /data
Expected output:
lost+found
Create a file on the disk.
kubectl exec disk-test-0 -- touch /data/test
Delete the pod.
kubectl delete pod disk-test-0
After you delete the pod, the system automatically recreates the pod.
Check the new pod.
kubectl get pod -l app=nginx
The following output shows that the new pod has the same name as the pod you deleted.
NAME          READY   STATUS    RESTARTS   AGE
disk-test-0   1/1     Running   0          27s
Check whether the original disk is mounted to the pod and the file is retained on the disk.
kubectl exec disk-test-0 -- ls /data
The following output shows that the test file is retained on the disk.
lost+found
test
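To further confirm that the recreated pod is backed by the same disk, you can check which PV the PVC is bound to; for a disk volume, the volume name is the disk ID. This is an optional check.
kubectl get pvc disk-pvc -o jsonpath='{.spec.volumeName}'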
References
If errors occur when you use disk volumes, refer to FAQ about disk volumes.
For more information about how to resize a disk when the disk size does not meet your business requirements or the disk is full, see Expand disk volumes.
For more information about how to monitor real-time disk usage, see Overview of container storage monitoring.