Disk volumes are ideal for scenarios involving non-shared data and applications that demand high I/O performance and low latency. You can mount existing disks to a pod using a statically provisioned volume to fulfill persistent storage requirements. This topic explains how to use a statically provisioned disk volume and verify the persistence of the disk storage.
Scenarios
Disks are optimal for storage scenarios such as:

- Applications with intensive disk I/O and no shared data requirements, including MySQL, Redis, and other data storage services.
- Rapid log writing.
- Data persistence beyond the lifecycle of a pod.
If you already have a disk, you can attach it to a pod by using a statically provisioned volume. This process involves manually creating a PersistentVolume (PV) and a PersistentVolumeClaim (PVC), and ensuring the PV is prepared before the container launches. For more information, see disk volumes.
Prerequisites
- The CSI component is installed in the cluster.

  Note:
  - You can check the installation status of the csi-plugin and csi-provisioner components on the Storage tab in the left-side navigation pane of the cluster management page. If you need to upgrade the CSI component to access certain capabilities, see upgrade csi-plugin and csi-provisioner.
  - If your cluster currently uses the FlexVolume component, migrate to the CSI component because FlexVolume is deprecated. For specific operations, see migrate FlexVolume to CSI.
- The disk to be mounted must meet the following requirements:

  - The billing method of the disk is pay-as-you-go, and its status is Pending Mount.
  - The disk and the ECS node are in the same zone, and the disk type is compatible with the ECS instance type.

    Important: Disks do not support cross-zone mounting, and certain disk types are incompatible with some ECS instance types. Ensure that the zone and the instance type of the ECS node to which the pod is scheduled match the existing disk. Otherwise, the disk will fail to mount. For more information about the compatibility between disk types and ECS instance types, see instance family.
Considerations
- Disks are non-shared storage. A disk that does not support multi-attach can be mounted by only one pod at a time. For more information about multi-attach, see use NVMe disk multi-attach and reservation.
- Disks can be mounted only by pods in the same zone and do not support cross-zone mounting.
- When a pod is rebuilt, the original disk is remounted. If the pod cannot be scheduled to the original zone due to other constraints, the pod remains in the Pending state because it cannot mount the disk.
- We recommend that you use a StatefulSet to mount disks, or mount a disk to an individual pod, rather than use a Deployment.
  Note: When multi-attach is not enabled, a disk can be mounted to only one pod. When you mount a disk to a Deployment, the number of replicas must be set to 1, which means that an independent volume cannot be configured for each pod and the order of mount and unmount operations cannot be guaranteed. In addition, due to the update policy of Deployments, the new pod may fail to mount the disk when a pod is restarted. Therefore, we recommend that you do not mount disks to Deployments.
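If a Deployment cannot be avoided, one common mitigation is to keep the replica count at 1 and set the update strategy to Recreate, so that the old pod terminates and detaches the disk before the new pod starts. This is standard Kubernetes behavior rather than a step from this topic; the workload name below is illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: disk-deploy            # hypothetical name for illustration
spec:
  replicas: 1                  # must remain 1: the disk allows only one attachment
  strategy:
    type: Recreate             # old pod is terminated (detaching the disk) before the new pod is created
  selector:
    matchLabels:
      app: disk-deploy
  template:
    metadata:
      labels:
        app: disk-deploy
    spec:
      containers:
      - name: app
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        volumeMounts:
        - name: pvc-disk
          mountPath: /data
      volumes:
      - name: pvc-disk
        persistentVolumeClaim:
          claimName: disk-pvc
```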
- When you use disk volumes, if securityContext.fsGroup is configured in the application's YAML, kubelet performs chmod and chown operations after mounting, which extends the mounting time.

  Note: After securityContext.fsGroup is configured, the owner of the files in the volume is automatically adjusted when the disk is mounted. Depending on the number of files, this may result in a longer preparation time. For Kubernetes clusters of version 1.20 and later, you can set fsGroupChangePolicy to OnRootMismatch to adjust the file owner only when the container is started for the first time. In subsequent scenarios such as pod upgrades or rebuilds, the mounting time returns to normal. If this still does not meet your requirements, we recommend that you use an initContainer to adjust permissions.
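As a sketch, the fsGroupChangePolicy setting described above is configured at the pod level. The field names are standard Kubernetes; the pod name and group ID below are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                      # hypothetical name for illustration
spec:
  securityContext:
    fsGroup: 1000                         # illustrative group ID
    fsGroupChangePolicy: OnRootMismatch   # change ownership only if the volume root does not already match
  containers:
  - name: app
    image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
    volumeMounts:
    - name: pvc-disk
      mountPath: /data
  volumes:
  - name: pvc-disk
    persistentVolumeClaim:
      claimName: disk-pvc
```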
Mount a statically provisioned disk volume (kubectl)
Step 1: Create a PV
- Connect to the cluster. For specific operations, see obtain the cluster KubeConfig and connect to the cluster using the kubectl tool or manage Kubernetes clusters on CloudShell using kubectl.
- Modify the following YAML content and save it as disk-pv.yaml.

  Replace the following content in the YAML:
  - <YOUR-DISK-ID>: the ID of the existing disk, for example, d-uf628m33r5rsbi******.
  - <YOUR-DISK-SIZE>: the size of the existing disk, for example, 20Gi.
  - <YOUR-DISK-ZONE-ID>: the zone where the existing disk is located, for example, cn-shanghai-f.
  - <YOUR-DISK-CATEGORY>: the type of the existing disk, for example, cloud_essd. The values for each type of disk are as follows:
    - ESSD Entry disk: cloud_essd_entry
    - ESSD AutoPL disk: cloud_auto
    - ESSD disk: cloud_essd
    - SSD disk: cloud_ssd
    - Ultra disk: cloud_efficiency
  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: "<YOUR-DISK-ID>"
    annotations:
      csi.alibabacloud.com/volume-topology: '{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node.csi.alibabacloud.com/disktype.<YOUR-DISK-CATEGORY>","operator":"In","values":["available"]}]}]}'
  spec:
    capacity:
      storage: "<YOUR-DISK-SIZE>"
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      namespace: default
      name: disk-pvc
    accessModes:
      - ReadWriteOnce
    persistentVolumeReclaimPolicy: Retain
    csi:
      driver: diskplugin.csi.alibabacloud.com
      volumeHandle: "<YOUR-DISK-ID>"
    nodeAffinity:
      required:
        nodeSelectorTerms:
          - matchExpressions:
              - key: topology.diskplugin.csi.alibabacloud.com/zone
                operator: In
                values:
                  - "<YOUR-DISK-ZONE-ID>"
    storageClassName: alicloud-disk-topology-alltype
    volumeMode: Filesystem
  ```
  The following table describes the parameters.

  | Parameter | Description |
  | --- | --- |
  | csi.alibabacloud.com/volume-topology | Annotation used to configure additional node constraints required to mount the disk. We recommend that you specify the disk type to ensure that the pod is scheduled to an ECS node that supports this disk type. |
  | claimRef | Specifies the PVC that the PV can bind to. To allow the PV to be bound by any PVC, delete this configuration. |
  | accessModes | The access mode. Only ReadWriteOnce is supported, which indicates that the volume can be mounted by only one pod in read-write mode. |
  | persistentVolumeReclaimPolicy | The reclaim policy of the PV. Delete: when the PVC is deleted, the PV and the disk are deleted together. Retain: when the PVC is deleted, the PV and the disk are not deleted; you must delete them manually. |
  | driver | The value is diskplugin.csi.alibabacloud.com, which indicates that the Alibaba Cloud disk CSI plug-in is used. |
  | nodeAffinity | The node affinity configuration. Because disks do not support cross-zone mounting, this configuration ensures that the pod is scheduled to an ECS node in the zone where the disk is located. |
  | storageClassName | This configuration has no effect for statically provisioned volumes. You do not need to create the corresponding StorageClass in advance, but the value must be the same in the PV and the PVC. |
- Create the PV.

  ```shell
  kubectl create -f disk-pv.yaml
  ```

- Check the PV.

  ```shell
  kubectl get pv
  ```

  Expected output:

  ```
  NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
  d-uf628m33r5rsbi******   20Gi       RWO            Retain           Available   default/disk-pvc   disk           <unset>                          1m36s
  ```
Step 2: Create a PVC
- Create a file named disk-pvc.yaml and copy the following content to the file:

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: disk-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "<YOUR-DISK-SIZE>"
    storageClassName: alicloud-disk-topology-alltype
    volumeName: "<YOUR-DISK-ID>"
  ```
  The following table describes the parameters.

  | Parameter | Description |
  | --- | --- |
  | accessModes | The access mode. Only ReadWriteOnce is supported, which indicates that the volume can be mounted by only one pod in read-write mode. |
  | storage | The storage capacity allocated to the pod. The value cannot exceed the capacity of the disk itself. |
  | storageClassName | This configuration has no effect for statically provisioned volumes. You do not need to create the corresponding StorageClass in advance, but the value must be the same in the PV and the PVC. |
  | volumeName | Specifies the PV that the PVC can bind to. To allow the PVC to bind to any PV, delete this parameter. |
- Create the PVC.

  ```shell
  kubectl create -f disk-pvc.yaml
  ```

- Check the PVC.

  ```shell
  kubectl get pvc
  ```

  The following output indicates that the PVC is bound to the disk PV.

  ```
  NAME       STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
  disk-pvc   Bound    d-uf628m33r5rsbi******   20Gi       RWO            disk           <unset>                 64s
  ```
Step 3: Create an application and mount the disk
- Create a file named disk-test.yaml and copy the following content to the file:

  The following YAML example creates a StatefulSet with one pod. The pod requests storage resources through a PVC named disk-pvc, and the mount path is /data.

  ```yaml
  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: disk-test
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: nginx
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
          - name: nginx
            image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
            ports:
              - containerPort: 80
            volumeMounts:
              - name: pvc-disk
                mountPath: /data
        volumes:
          - name: pvc-disk
            persistentVolumeClaim:
              claimName: disk-pvc
  ```
- Create the StatefulSet and mount the disk.

  ```shell
  kubectl create -f disk-test.yaml
  ```

- Check whether the pod provisioned by the StatefulSet is deployed.

  ```shell
  kubectl get pod -l app=nginx
  ```

  The following output shows that one pod is deployed for the StatefulSet.

  ```
  NAME          READY   STATUS    RESTARTS   AGE
  disk-test-0   1/1     Running   0          14s
  ```

- View the mount path to check whether the disk is mounted.

  ```shell
  kubectl exec disk-test-0 -- df -h /data
  ```

  Expected output:

  ```
  Filesystem      Size  Used Avail Use% Mounted on
  /dev/vdb         20G   24K   20G   1% /data
  ```
Mount a statically provisioned disk volume (console)
Step 1: Create a volume (PV)
- Log on to the ACK console. In the left-side navigation pane, click Clusters.
- On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose .
- On the Volumes page, click Create.
- In the dialog box that appears, complete the parameter configuration and then click Create.
  | Parameter | Description | Example |
  | --- | --- | --- |
  | Volume Type | Select Disk. | Disk |
  | Access Mode | Only ReadWriteOnce is supported. | ReadWriteOnce |
  | Disk ID | Click Select Disk and select a disk in the same region and zone as the node. | d-uf628m33r5rsbi****** |
  | File System Type | Select the file system type used to store data on the disk. Supported types include ext4, ext3, xfs, and vfat. The default is ext4. | ext4 |
After the creation is complete, you can view the newly created PV on the Volumes page.
Step 2: Create a volume claim (PVC)
In the left-side navigation pane of the details page, choose .
- On the Volume Claims page, click Create.
- In the dialog box that appears, complete the parameter configuration and then click Create.
  | Parameter | Description | Example |
  | --- | --- | --- |
  | Volume Claim Type | Select Disk. | Disk |
  | Name | Specify a custom name for the PVC. The name must meet the format requirements displayed on the UI. | disk-pvc |
  | Allocation Mode | Select Existing Volumes. | Existing Volumes |
  | Existing Volumes | Select the volume created in Step 1. | d-uf690053kttkprgx****, 20Gi |
  | Total | The storage capacity allocated to the pod. The value cannot exceed the capacity of the disk itself. | 20Gi |
After the creation is complete, you can view the newly created PVC on the Volume Claims page. The PVC is bound to the PV (that is, the disk volume).
Step 3: Create an application and mount the disk
- In the left-side navigation pane of the details page, choose .
- In the upper-right corner of the StatefulSets page, click Create from Image.
- Complete the parameter configuration for the StatefulSet and click Create.

  Note the following parameters and set other parameters as needed. For more information, see create a stateful workload StatefulSet.
  | Configuration Page | Parameter | Description | Example |
  | --- | --- | --- | --- |
  | Basic Application Information | Application Name | Enter a custom name for the StatefulSet. The name must meet the format requirements displayed on the UI. | disk-test |
  | Basic Application Information | Number Of Replicas | Configure the number of pod replicas provisioned by the StatefulSet. | 1 |
  | Container Configuration | Image Name | Enter the address of the image used to deploy the application. | anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6 |
  | Container Configuration | Required Resources | Set the required vCPU, memory, and ephemeral storage resources. | CPU: 0.25 Core; Memory: 512 MiB; Ephemeral-Storage: not set |
  | Container Configuration | Data Volume | Click Add Cloud Storage Claim and complete the parameter configuration. Mount Source: select the PVC created in Step 2. Container Path: enter the container path to which the disk is to be mounted. | Mount Source: disk-pvc; Container Path: /data |
- Verify the application deployment.
  - On the Stateful page, click the application name.
  - On the Pods tab, confirm that the pod is running (status is Running).
Verify the persistence of the disk storage (kubectl)
The StatefulSet created in the preceding example provisions one pod and a disk is mounted to the pod. If you delete the pod, the system automatically recreates the pod. The original disk is mounted to the new pod and data still exists on the disk. You can perform the following steps to test whether data is persisted to the disk:
- View the mount path to check whether files on the disk can be viewed.

  ```shell
  kubectl exec disk-test-0 -- ls /data
  ```

  Expected output:

  ```
  lost+found
  ```

- Create a file on the disk.

  ```shell
  kubectl exec disk-test-0 -- touch /data/test
  ```

- Delete the pod.

  ```shell
  kubectl delete pod disk-test-0
  ```

  Note: After you delete the pod, the system automatically recreates it.

- Check the new pod.

  ```shell
  kubectl get pod -l app=nginx
  ```

  The following output shows that the new pod has the same name as the pod you deleted.

  ```
  NAME          READY   STATUS    RESTARTS   AGE
  disk-test-0   1/1     Running   0          27s
  ```

- Check whether the original disk is mounted to the pod and whether the file still exists on the disk.

  ```shell
  kubectl exec disk-test-0 -- ls /data
  ```

  The following output shows that the test file previously written to the disk still exists.

  ```
  lost+found
  test
  ```
References
- If you encounter problems while using disk volumes, see disk volume FAQ for troubleshooting.
- If the disk size does not meet your requirements or the disk is full, see expand disk volumes.
- To monitor disk usage in real time, see container storage monitoring overview.