When a node that hosts running containers fails, stateful applications may lose the business data stored in the containers. This issue can be resolved by using persistent storage. This topic describes how to use an Object Storage Service (OSS) volume to persist data.
Background information
OSS is a secure, cost-effective, and highly reliable cloud storage service provided by Alibaba Cloud. You can mount an OSS volume on multiple pods of a Container Service for Kubernetes (ACK) cluster.
Scenarios
Moderate disk I/O performance requirements
Sharing of data, including configuration files, images, and small video files
Procedure
Create an OSS bucket.
Obtain an AccessKey ID and AccessKey secret.
Create a persistent volume (PV) and a persistent volume claim (PVC).
Prerequisites
The kubeconfig file of the cluster is obtained, and kubectl is configured to connect to the cluster.
An OSS bucket is created in the OSS console. For more information, see Create buckets.
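Before you proceed, you can optionally confirm that kubectl can reach the cluster. The following is a minimal sanity check; the nodes shown in its output depend on your cluster.
kubectl get nodes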
Precautions
kubelet and the OSSFS driver may be restarted when the ACK cluster is upgraded. As a result, the mounted OSS directory becomes unavailable. In this case, you must recreate the pods that mount the OSS volume. You can add health check settings to the YAML file so that the pods are restarted and the OSS volume is remounted when the OSS directory becomes unavailable. A sketch of such a health check follows these precautions.
If your ACK cluster runs the latest Kubernetes version, the preceding issue is fixed.
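The following snippet is a minimal sketch of such a health check. It assumes that the OSS volume is mounted at /data in the container, as in the application example later in this topic. The probe only checks that the mount path is still accessible so that kubelet can restart the container if it is not.
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - cd /data        # fails if the OSS mount at /data is no longer accessible
  initialDelaySeconds: 30
  periodSeconds: 30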
Create a PV
Create a file named pv-oss.yaml.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: oss
  flexVolume:
    driver: "alicloud/oss"
    options:
      bucket: "docker"                      # Replace the value with the bucket name.
      path: /path                           # Replace the value with the relative path of the subdirectory.
      url: "oss-cn-hangzhou.aliyuncs.com"   # Replace the value with the endpoint of the OSS bucket.
      akId: "***"                           # Replace the value with the AccessKey ID.
      akSecret: "***"                       # Replace the value with the AccessKey secret.
      otherOpts: "-o max_stat_cache_size=0 -o allow_other"   # Replace the value with custom mount parameters.
Parameter description
- alicloud-pvname: the name of the PV. This value must be used in the selector field of the PVC that is associated with the PV.
- bucket: the name of the OSS bucket. Only OSS buckets can be mounted to the ACK cluster. You cannot mount the subdirectories or files in an OSS bucket to the ACK cluster.
- path: the path relative to the root directory of the OSS bucket to be mounted. The default value is /. This parameter is supported by csi-plugin 1.14.8.32-c77e277b-aliyun and later.
- url: the endpoint of the OSS bucket. For more information, see Regions and endpoints. To obtain the endpoint, log on to the OSS console. In the left-side navigation pane, find the OSS bucket that you want to manage. On the Overview page, find the Domain Names section and view the endpoint of the OSS bucket in the Endpoint column.
- akId: the AccessKey ID. Log on to the ACK console, move the pointer over the icon in the upper-right corner of the page, and select AccessKey Management from the shortcut menu. On the page that appears, create an AccessKey ID and an AccessKey secret.
- akSecret: the AccessKey secret. To obtain the AccessKey secret, perform the steps described in akId.
- otherOpts: the custom parameters that are used to mount the OSS bucket. The parameters must be in the following format: -o *** -o ***.
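As an illustration of the otherOpts format, the following fragment combines several common ossfs mount options. The specific options shown here are examples rather than required values, so verify them against the ossfs documentation before use.
# Example only: disables the metadata stat cache, allows non-root users to
# access the mount, and sets the default permission mask for mounted files.
otherOpts: "-o max_stat_cache_size=0 -o allow_other -o umask=0022"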
Run the following command to create a PV:
kubectl create -f pv-oss.yaml
Expected result
- Log on to the ACK console.
- In the left-side navigation pane, click Clusters.
- On the Clusters page, find the cluster that you want to manage, and click the name of the cluster or click Details in the Actions column.
- In the left-side navigation pane of the details page, choose Volumes > Persistent Volumes. Verify that the newly created PV is displayed.
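If you prefer to verify from the command line instead of the console, a quick check such as the following should also work. It assumes the PV name pv-oss from the manifest above.
kubectl get pv pv-oss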
Create a PVC
Create a PVC of the OSS type. Set the selector parameter to configure how to select the PV to which the PVC is bound. Set the storageClassName parameter to bind the PVC to a PV of the OSS type.
Create a file named pvc-oss.yaml.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-oss
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: oss
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-oss
Run the following command to create a PVC:
kubectl create -f pvc-oss.yaml
Expected result
- Log on to the ACK console.
- In the left-side navigation pane, click Clusters.
- On the Clusters page, find the cluster that you want to manage, and click the name of the cluster or click Details in the Actions column.
- In the left-side navigation pane of the details page, choose Volumes > Persistent Volume Claims. Verify that the newly created PVC is displayed.
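Alternatively, you can check the PVC from the command line. The following command assumes the PVC name pvc-oss from the manifest above; the PVC is expected to reach the Bound status after it is matched with the PV.
kubectl get pvc pvc-oss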
Create an application
Create a file named oss-static.yaml.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - name: pvc-oss
          mountPath: "/data"
        - name: pvc-oss
          mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
      - name: pvc-oss
        persistentVolumeClaim:
          claimName: pvc-oss
Note: For more information about how to set livenessProbe to configure health checks, see OSS volume overview.
Run the following command to deploy the application:
kubectl create -f oss-static.yaml
Expected result
- Log on to the ACK console.
- In the left-side navigation pane, click Clusters.
- On the Clusters page, find the cluster that you want to manage, and click the name of the cluster or click Applications in the Actions column.
- In the left-side navigation pane of the cluster details page, choose Workloads > Deployments. Verify that the newly created application is displayed.
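You can also confirm the Deployment from the command line. The following command assumes the Deployment name oss-static from the manifest above.
kubectl get deployment oss-static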
Verify data persistence
Run the following command to query the pods that run the application:
kubectl get pod
Expected output:
NAME                          READY   STATUS    RESTARTS   AGE
oss-static-66fbb85b67-dqbl2   1/1     Running   0          1h
Run the following command to query the files in the /data path:
kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
Note: The command returns no output, which indicates that no file named tmpfile exists in the /data path.
Run the following command to create a file named tmpfile in the /data path:
kubectl exec oss-static-66fbb85b67-dqbl2 -- touch /data/tmpfile
Run the following command to query the files in the /data path:
kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
Expected output:
tmpfile
Run the following command to delete the pod named oss-static-66fbb85b67-dqbl2:
kubectl delete pod oss-static-66fbb85b67-dqbl2
Expected output:
pod "oss-static-66fbb85b67-dqbl2" deleted
Open another kubectl command-line interface (CLI) and run the following command to view how the pod is deleted and recreated:
kubectl get pod -w -l app=nginx
Expected output:
NAME                          READY   STATUS              RESTARTS   AGE
oss-static-66fbb85b67-dqbl2   1/1     Running             0          78m
oss-static-66fbb85b67-dqbl2   1/1     Terminating         0          78m
oss-static-66fbb85b67-zlvmw   0/1     Pending             0          <invalid>
oss-static-66fbb85b67-zlvmw   0/1     Pending             0          <invalid>
oss-static-66fbb85b67-zlvmw   0/1     ContainerCreating   0          <invalid>
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-zlvmw   1/1     Running             0          <invalid>
Run the following command to query the recreated pod:
kubectl get pod
Expected output:
NAME                          READY   STATUS    RESTARTS   AGE
oss-static-66fbb85b67-zlvmw   1/1     Running   0          40s
Run the following command to verify that the tmpfile file still exists in the /data path. If the file still exists, data is persisted to the OSS volume.
kubectl exec oss-static-66fbb85b67-zlvmw -- ls /data | grep tmpfile
Expected output:
tmpfile
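Because the Deployment mounts the same PVC at both /data and /data1, the same file should also be visible under /data1. The following optional check assumes the name of the recreated pod from the previous step.
kubectl exec oss-static-66fbb85b67-zlvmw -- ls /data1 | grep tmpfile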