When a container breaks down, the business data stored by a stateful service container risks being lost or becoming unreliable. Persistent storage mitigates this risk. This topic describes how to use OSS for persistent storage.
Background information
Alibaba Cloud Object Storage Service (OSS) is a secure, cost-effective, high-capacity, and highly-reliable cloud storage service. OSS supports mounting by multiple pods simultaneously.
Scenarios for OSS:
Average disk I/O requirements.
Data sharing, including configuration files, images, and small video files.
To use OSS:
Manually create a bucket.
Obtain the AccessKey ID and AccessKey Secret.
Create a PV by using the Secret method, and then create a PVC.
Prerequisites
You have obtained the cluster kubeconfig and connected to the cluster by using the kubectl tool.
You have created a bucket in the OSS management console. For more information, see how to create a bucket in the console.
Notes
Upgrading the Kubernetes cluster in the Container Service will restart the kubelet, and the ossfs driver will also restart, causing the OSS directory to become unavailable. In this case, you must recreate the pods to which the OSS volume is mounted. You can add health check settings in the YAML file to restart the pods and remount the OSS volume when the OSS directory becomes unavailable.
The latest version of the OSS mount has resolved this issue.
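The health check mentioned in the note above can be sketched as the following probe fragment (assuming the OSS volume is mounted at /data; the same probe appears in the example deployment later in this topic):

```yaml
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - cd /data        # fails once the ossfs mount is no longer reachable
  initialDelaySeconds: 30
  periodSeconds: 30
```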
Create a PV
Execute the following command to create a Secret. Replace <your AccessKey ID> and <your AccessKey Secret> with your actual AccessKey ID and AccessKey Secret. To obtain them, log on to the Container Service Management Console, move the pointer over the upper-right corner, and select AccessKey.

```shell
kubectl create secret generic osssecret --from-literal=akId='<your AccessKey ID>' --from-literal=akSecret='<your AccessKey Secret>' --type=alicloud/oss -n default
```
osssecret: the name of the Secret. You can customize this name.
akId: the AccessKey ID.
akSecret: the AccessKey Secret.
--type: the type of the Secret, which is set to alicloud/oss. The namespace of the Secret must match the namespace of the application pod.
Create a PV by using the pv-oss.yaml file.
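kubectl stores each --from-literal value base64-encoded in the Secret's data field. The following sketch shows that encoding with a hypothetical AccessKey ID (never use a real credential in examples):

```shell
# Hypothetical AccessKey ID used purely for illustration.
AK_ID='LTAI****example'
# kubectl stores each --from-literal value base64-encoded in the Secret's data field:
ENCODED=$(printf '%s' "$AK_ID" | base64)
printf '%s\n' "$ENCODED"
# Decoding recovers the original value:
printf '%s\n' "$ENCODED" | base64 -d
```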
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-oss
  labels:
    alicloud-pvname: pv-oss
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: oss
  flexVolume:
    driver: "alicloud/oss"
    secretRef:
      name: "osssecret" # Replace with the name of the Secret created in the previous step.
    options:
      bucket: "docker" # Replace with your bucket name.
      path: /path # Replace with the relative path of your subdirectory.
      url: "oss-cn-hangzhou.aliyuncs.com" # Replace with your endpoint URL.
      otherOpts: "-o max_stat_cache_size=0 -o allow_other" # Replace with your otherOpts.
```
Parameter Explanation:
alicloud-pvname: the name of the PV, used together with the selector in the PVC.
bucket: the bucket name.
path: the directory, relative to the bucket root, that is mounted. The default is / (supported in v1.14.8.32-c77e277b-aliyun and later).
url: the endpoint of the OSS bucket. To obtain it:
Log on to the OSS console.
In the left-side navigation pane, click Buckets. On the Buckets page, click the name of the bucket whose endpoint you want to obtain.
In the left-side navigation tree, click Overview.
In the Access Port area, view the endpoint of the bucket.
otherOpts: custom options supported when OSS is mounted, in the format: -o *** -o ***.
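As a rough illustration of how these parameters fit together, the flexVolume driver passes them to ossfs. The exact command line the driver builds is internal, so the following is a sketch only, not the definitive invocation (the mount point /mnt/oss is a placeholder):

```shell
# Assemble an illustrative ossfs command from the PV options above.
BUCKET="docker"
SUBPATH="/path"
MOUNTPOINT="/mnt/oss"
URL="oss-cn-hangzhou.aliyuncs.com"
OTHER_OPTS="-o max_stat_cache_size=0 -o allow_other"
# otherOpts entries are simply appended as extra -o flags:
echo "ossfs ${BUCKET}:${SUBPATH} ${MOUNTPOINT} -ourl=${URL} ${OTHER_OPTS}"
```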
Execute the following command to create a PV.

```shell
kubectl create -f pv-oss.yaml
```
Expected Result:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Persistent Volumes.
On the Persistent Volumes page, you can see the PV that was just created.
Create a PVC
Create a PVC for the OSS volume. Use a selector to filter PVs and precisely configure the binding relationship between the PVC and the PV. Use storageClassName to specify that the PVC binds only to OSS-type PVs.
Create the pvc-oss.yaml file.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-oss
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: oss
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: pv-oss
```
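The PVC binds to the PV only when its selector.matchLabels entry exactly matches the PV's metadata.labels entry. The following local sanity check (no cluster needed) illustrates the comparison, using minimal stand-ins for the two manifests in this topic:

```shell
# Minimal stand-ins for the label-bearing parts of the two manifests.
cat > /tmp/pv-oss-labels.yaml <<'EOF'
metadata:
  labels:
    alicloud-pvname: pv-oss
EOF
cat > /tmp/pvc-oss-labels.yaml <<'EOF'
spec:
  selector:
    matchLabels:
      alicloud-pvname: pv-oss
EOF
# Extract the label from each file and compare; whitespace is stripped first.
pv_label=$(grep 'alicloud-pvname' /tmp/pv-oss-labels.yaml | tr -d ' ')
pvc_label=$(grep 'alicloud-pvname' /tmp/pvc-oss-labels.yaml | tr -d ' ')
[ "$pv_label" = "$pvc_label" ] && echo "PVC selector matches PV label"
```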
Execute the following command to create a PVC.

```shell
kubectl create -f pvc-oss.yaml
```
Expected Result:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Volumes > Storage Claims.
On the Storage Claims page, you can see the PVC that was just created.
Create an application
Create the oss-static.yaml file.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oss-static
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        ports:
        - containerPort: 80
        volumeMounts:
          - name: pvc-oss
            mountPath: "/data"
          - name: pvc-oss
            mountPath: "/data1"
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - cd /data
          initialDelaySeconds: 30
          periodSeconds: 30
      volumes:
        - name: pvc-oss
          persistentVolumeClaim:
            claimName: pvc-oss
```
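The livenessProbe in the manifest above runs sh -c "cd /data" inside the container: the command exits 0 while the mount path is reachable, and non-zero once the ossfs mount is gone, which typically causes the kubelet to restart the container. The following local sketch simulates both outcomes with a throwaway directory standing in for the mount path:

```shell
# Simulate the probe against a reachable path, then against a missing one.
mkdir -p /tmp/oss-demo
sh -c 'cd /tmp/oss-demo' && echo "mount path reachable: probe passes"
rm -rf /tmp/oss-demo
sh -c 'cd /tmp/oss-demo' 2>/dev/null || echo "mount path missing: probe fails, container is restarted"
```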
Note: For a detailed explanation of the livenessProbe health check, see Overview of OSS Persistent Volumes.
Execute the following command to create the deployment.
```shell
kubectl create -f oss-static.yaml
```
Expected Result:
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side pane, choose Workloads > Stateless.
On the Stateless page, you can see the deployment that was just created.
Verify OSS persistent storage
Execute the following command to view the name of the pod that runs the deployment.

```shell
kubectl get pod
```
Expected Output:
```
NAME                          READY   STATUS    RESTARTS   AGE
oss-static-66fbb85b67-dqbl2   1/1     Running   0          1h
```
Execute the following command to view the files in the /data path.

```shell
kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
```

Note: At this point, the /data path is empty, so the command returns no output.

Execute the following command to create a file named tmpfile in the /data path.

```shell
kubectl exec oss-static-66fbb85b67-dqbl2 -- touch /data/tmpfile
```
Execute the following command to view the files in the /data path.

```shell
kubectl exec oss-static-66fbb85b67-dqbl2 -- ls /data | grep tmpfile
```

Expected Output:

```
tmpfile
```
Execute the following command to delete the pod named oss-static-66fbb85b67-dqbl2.

```shell
kubectl delete pod oss-static-66fbb85b67-dqbl2
```

Expected Output:

```
pod "oss-static-66fbb85b67-dqbl2" deleted
```
In another window, execute the following command to monitor the deletion of the pod and its recreation by Kubernetes.

```shell
kubectl get pod -w -l app=nginx
```

Expected Output:

```
NAME                          READY   STATUS              RESTARTS   AGE
oss-static-66fbb85b67-dqbl2   1/1     Running             0          78m
oss-static-66fbb85b67-dqbl2   1/1     Terminating         0          78m
oss-static-66fbb85b67-zlvmw   0/1     Pending             0          <invalid>
oss-static-66fbb85b67-zlvmw   0/1     Pending             0          <invalid>
oss-static-66fbb85b67-zlvmw   0/1     ContainerCreating   0          <invalid>
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-dqbl2   0/1     Terminating         0          78m
oss-static-66fbb85b67-zlvmw   1/1     Running             0          <invalid>
```
Execute the following command to view the name of the pod recreated by Kubernetes.

```shell
kubectl get pod
```

Expected Output:

```
NAME                          READY   STATUS    RESTARTS   AGE
oss-static-66fbb85b67-zlvmw   1/1     Running   0          40s
```
Execute the following command to view the files in the /data path. The presence of the file tmpfile indicates that the data on OSS is persistently stored.

```shell
kubectl exec oss-static-66fbb85b67-zlvmw -- ls /data | grep tmpfile
```

Expected Output:

```
tmpfile
```