The FlexVolume plug-in is deprecated. New Container Service for Kubernetes (ACK) clusters no longer support FlexVolume. For existing clusters, we recommend that you upgrade from FlexVolume to Container Storage Interface (CSI). This topic describes how to use CSI to take over the statically provisioned Object Storage Service (OSS) volumes that are managed by FlexVolume.
Differences between FlexVolume and CSI
The following table describes the differences between CSI and FlexVolume.
| Plug-in | Component | kubelet parameter |
| --- | --- | --- |
| CSI | csi-plugin, csi-provisioner | The kubelet parameters required by the CSI plug-in are different from those required by the FlexVolume plug-in. To run the CSI plug-in, you must set the kubelet parameter --enable-controller-attach-detach to true. |
| FlexVolume | flexvolume, alicloud-disk-controller, alicloud-nas-controller | The kubelet parameters required by the FlexVolume plug-in are different from those required by the CSI plug-in. To run the FlexVolume plug-in, you must set the kubelet parameter --enable-controller-attach-detach to false. |
Scenarios
FlexVolume is installed in your cluster and used to mount statically provisioned OSS volumes. If you also have disk volumes managed by FlexVolume in the cluster, see Use csi-compatible-controller to migrate from FlexVolume to CSI.
Usage notes
When you upgrade from FlexVolume to CSI, persistent volume claims (PVCs) are recreated. As a result, pods are also recreated and your business is temporarily interrupted. We recommend that you perform the upgrade to CSI, the PVC recreation, application modifications, and other operations that restart pods during off-peak hours.
Preparations
Manually install CSI
Create files named csi-plugin.yaml and csi-provisioner.yaml.
Run the following command to deploy csi-plugin and csi-provisioner in the cluster:
kubectl apply -f csi-plugin.yaml -f csi-provisioner.yaml
Run the following command to check whether the CSI plug-in runs as expected:
kubectl get pods -nkube-system | grep csi
Expected output:
csi-plugin-577mm                   4/4   Running   0   3d20h
csi-plugin-k9mzt                   4/4   Running   0   41d
csi-provisioner-6b58f46989-8wwl5   9/9   Running   0   41d
csi-provisioner-6b58f46989-qzh8l   9/9   Running   0   6d20h
If the preceding output is returned, the CSI plug-in runs as expected.
In this example, FlexVolume is used to mount a statically provisioned OSS volume to a pod created by a StatefulSet. The credentials of the volume are saved in a Secret named oss-secret. This example shows how to use CSI to take over the OSS volume that is mounted by using FlexVolume. The following figure shows the procedure.
Step 1: Check the status of the volume in the cluster
Run the following command to query the status of the pods:
kubectl get pod
Expected output:
NAME        READY   STATUS    RESTARTS   AGE
oss-sts-1   1/1     Running   0          11m
Run the following command to query the PVC used by the pod:
kubectl describe pod oss-sts-1 | grep ClaimName
Expected output:
ClaimName: oss-pvc
Run the following command to query the current status of the PVC:
kubectl get pvc
Expected output:
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oss-pvc   Bound    oss-pv   5Gi        RWX                           7m23s
Step 2: Create a statically provisioned OSS volume supported by CSI by defining a PVC and PV
Method 1: Use the Flexvolume2CSI CLI to convert PVs and PVCs
Convert PVs and PVCs managed by FlexVolume to PVs and PVCs managed by CSI.
Run the following command to create a PVC and persistent volume (PV) for the OSS volume. In this example, oss-pv-pvc-csi.yaml is the YAML file that defines the PVC and PV managed by CSI after you use the Flexvolume2CSI CLI to convert the original PVC and PV.

kubectl apply -f oss-pv-pvc-csi.yaml
Run the following command to query the current status of the PVC:
kubectl get pvc
Expected output:
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oss-pvc-csi   Bound    oss-pv-csi   5Gi        RWO                           7m15s
oss-pvc       Bound    oss-pv       5Gi        RWX                           52m
Method 2: Save PVCs and PVs managed by FlexVolume and change the volume plug-in
Save the PV and PVC objects managed by FlexVolume.
Run the following command to save the PVC object supported by FlexVolume:
kubectl get pvc oss-pvc -oyaml > oss-pvc-flexvolume.yaml
cat oss-pvc-flexvolume.yaml
Expected output:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
  volumeName: oss-pv
Run the following command to save the PV object supported by FlexVolume:
kubectl get pv oss-pv -oyaml > oss-pv-flexvolume.yaml
cat oss-pv-flexvolume.yaml
Expected output:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: oss-pvc
    namespace: default
  flexVolume:
    driver: alicloud/oss
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    options:
      bucket: xxx
      otherOpts: -o max_stat_cache_size=0 -o allow_other
      url: xxx.aliyuncs.com
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
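Method 2 amounts to rewriting a handful of fields by hand: the flexVolume driver becomes the CSI OSS driver, the options map becomes volumeAttributes, and a label is added so that the new PVC can select the new PV. The mapping can be sketched as follows. This is an illustrative sketch using plain dicts, not part of the Flexvolume2CSI tooling; the helper function name is ours.

```python
# Sketch of the FlexVolume -> CSI field mapping performed by hand in Method 2.
# Input and output are plain dicts shaped like the PV objects shown above.

def flexvolume_pv_to_csi(pv: dict, csi_name: str) -> dict:
    """Build a CSI OSS PV from a FlexVolume OSS PV (illustrative only)."""
    flex = pv["spec"]["flexVolume"]
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {
            "name": csi_name,
            # Label used by the new PVC's selector to bind to this PV.
            "labels": {"alicloud-pvname": csi_name},
        },
        "spec": {
            "capacity": pv["spec"]["capacity"],
            "accessModes": pv["spec"]["accessModes"],
            "persistentVolumeReclaimPolicy": "Retain",
            "csi": {
                # The CSI OSS driver replaces the alicloud/oss FlexVolume driver.
                "driver": "ossplugin.csi.alibabacloud.com",
                "volumeHandle": csi_name,
                # The same Secret keeps holding the OSS credentials.
                "nodePublishSecretRef": flex["nodePublishSecretRef"],
                # FlexVolume "options" become CSI "volumeAttributes".
                "volumeAttributes": dict(flex["options"]),
            },
        },
    }

flex_pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "oss-pv"},
    "spec": {
        "accessModes": ["ReadWriteMany"],
        "capacity": {"storage": "5Gi"},
        "flexVolume": {
            "driver": "alicloud/oss",
            "nodePublishSecretRef": {"name": "oss-secret", "namespace": "default"},
            "options": {
                "bucket": "xxx",
                "url": "xxx.aliyuncs.com",
                "otherOpts": "-o max_stat_cache_size=0 -o allow_other",
            },
        },
        "persistentVolumeReclaimPolicy": "Retain",
    },
}

csi_pv = flexvolume_pv_to_csi(flex_pv, "oss-pv-csi")
print(csi_pv["spec"]["csi"]["driver"])  # ossplugin.csi.alibabacloud.com
```

The output dict corresponds field by field to the oss-pv-pvc-csi.yaml manifest created in the next step.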
Create a statically provisioned OSS volume managed by CSI by defining a PVC and PV.
Create a file named oss-pv-pvc-csi.yaml and add the following YAML content to the file to create a statically provisioned OSS volume managed by CSI:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oss-pvc-csi
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  selector:
    matchLabels:
      alicloud-pvname: oss-pv-csi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: oss-pv-csi
  labels:
    alicloud-pvname: oss-pv-csi
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: oss-pv-csi
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    volumeAttributes:
      bucket: "***"
      url: "***.aliyuncs.com"
      otherOpts: "-o max_stat_cache_size=0 -o allow_other"
Run the following command to create a PVC and PV for the OSS volume:
kubectl apply -f oss-pv-pvc-csi.yaml
Run the following command to query the current status of the PVC:
kubectl get pvc
Expected output:
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
oss-pvc-csi   Bound    oss-pv-csi   5Gi        RWO                           7m15s
oss-pvc       Bound    oss-pv       5Gi        RWX                           52m
Step 3: Change the PVC associated with the application
Run the following command to modify the configuration file of the application:
kubectl edit sts oss-sts
Change the PVC to the one supported by CSI.
volumes:
- name: oss
  persistentVolumeClaim:
    claimName: oss-pvc-csi
Run the following command to check whether the pod is restarted:
kubectl get pod
Expected output:
NAME        READY   STATUS    RESTARTS   AGE
oss-sts-1   1/1     Running   0          70s
Run the following command to query the mount information:
kubectl exec oss-sts-1 -- mount | grep ossfs
Expected output:
***:/ on /var/lib/kubelet/pods/ac02ea3f-125f-4b38-9bcf-9b117f62eaf0/volumes/kubernetes.io~csi/oss-pv-csi/mount type ossfs (rw,relatime,max_stat_cache_size=0,allow_other)
If the preceding output is returned, the pod is migrated.
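The plug-in in use can also be read directly from the mount path: kubelet places FlexVolume mounts under a kubernetes.io~flexvolume directory and CSI mounts under kubernetes.io~csi. The following sketch classifies the sample mount line above by that marker; the helper function is illustrative, not part of any tool.

```python
# Classify a line from `mount | grep ossfs` by the volume plug-in that
# produced it, based on the kubelet volume directory in the mount path.

def volume_plugin(mount_line: str) -> str:
    if "kubernetes.io~csi" in mount_line:
        return "csi"
    if "kubernetes.io~flexvolume" in mount_line:
        return "flexvolume"
    return "unknown"

line = ("***:/ on /var/lib/kubelet/pods/ac02ea3f-125f-4b38-9bcf-9b117f62eaf0"
        "/volumes/kubernetes.io~csi/oss-pv-csi/mount "
        "type ossfs (rw,relatime,max_stat_cache_size=0,allow_other)")
print(volume_plugin(line))  # csi
```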
Step 4: Uninstall FlexVolume
Log on to the OpenAPI Explorer console and call the UnInstallClusterAddons operation to uninstall the FlexVolume plug-in.
ClusterId: Set the value to the ID of your cluster. You can view the cluster ID on the Basic Information tab of the cluster details page of your cluster.
name: Set the value to Flexvolume.
For more information, see Uninstall components from a cluster.
Run the following command to delete the alicloud-disk-controller and alicloud-nas-controller components:
kubectl delete deploy -nkube-system alicloud-disk-controller alicloud-nas-controller
Run the following command to check whether the FlexVolume plug-in is uninstalled from your cluster:
kubectl get pods -n kube-system | grep 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'
If no output is displayed, the FlexVolume plug-in is uninstalled from your cluster.
Run the following command to delete the StorageClasses that use FlexVolume from the cluster. The provisioner of a StorageClass that uses FlexVolume is alicloud/disk.
kubectl delete storageclass alicloud-disk-available alicloud-disk-efficiency alicloud-disk-essd alicloud-disk-ssd
Expected output:
storageclass.storage.k8s.io "alicloud-disk-available" deleted
storageclass.storage.k8s.io "alicloud-disk-efficiency" deleted
storageclass.storage.k8s.io "alicloud-disk-essd" deleted
storageclass.storage.k8s.io "alicloud-disk-ssd" deleted
If the preceding output is displayed, the StorageClass is deleted from your cluster.
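If your cluster contains additional StorageClasses, identify the ones to delete by their provisioner rather than by name. A sketch of the selection rule, using sample objects shaped like the items of `kubectl get sc -o json` (the sample names and the CSI provisioner row are illustrative):

```python
# Select StorageClasses whose provisioner is the FlexVolume disk
# provisioner "alicloud/disk"; those are the ones this step deletes.

storage_classes = [  # sample objects for illustration
    {"metadata": {"name": "alicloud-disk-ssd"}, "provisioner": "alicloud/disk"},
    {"metadata": {"name": "alicloud-disk-essd"}, "provisioner": "alicloud/disk"},
    {"metadata": {"name": "some-csi-class"}, "provisioner": "diskplugin.csi.alibabacloud.com"},
]

flexvolume_classes = [
    sc["metadata"]["name"]
    for sc in storage_classes
    if sc["provisioner"] == "alicloud/disk"
]
print(flexvolume_classes)  # ['alicloud-disk-ssd', 'alicloud-disk-essd']
```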
Step 5: Call the API to install CSI
Log on to the OpenAPI Explorer console and call the InstallClusterAddons operation to install the CSI plug-in.
ClusterId: Set the value to the ID of your cluster.
name: Set the value to csi-provisioner.
version: The latest version is automatically specified. For more information about CSI versions, see csi-provisioner.
For more information about how to install the CSI plug-in, see Install a component in an ACK cluster.
Run the following command to check whether the CSI plug-in runs as expected in your cluster:
kubectl get pods -nkube-system | grep csi
Expected output:
csi-plugin-577mm                   4/4   Running   0   3d20h
csi-plugin-k9mzt                   4/4   Running   0   41d
csi-provisioner-6b58f46989-8wwl5   9/9   Running   0   41d
csi-provisioner-6b58f46989-qzh8l   9/9   Running   0   6d20h
If the preceding output is displayed, the CSI plug-in runs as expected in the cluster.
Step 6: Modify the configurations of existing nodes
Create a YAML file based on the following code block. Then, deploy the YAML file to modify the kubelet parameters on which the CSI plug-in relies. The DaemonSet changes the value of the kubelet parameter --enable-controller-attach-detach on existing nodes to true. After this step is complete, you can delete the DaemonSet.
When you deploy the YAML file, kubelet is restarted. We recommend that you evaluate the impact on the applications before you deploy the YAML file.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: kubelet-set
spec:
  selector:
    matchLabels:
      app: kubelet-set
  template:
    metadata:
      labels:
        app: kubelet-set
    spec:
      tolerations:
      - operator: "Exists"
      hostNetwork: true
      hostPID: true
      containers:
      - name: kubelet-set
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
          allowPrivilegeEscalation: true
        image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.26.5-56d1e30-aliyun
        imagePullPolicy: "Always"
        env:
        - name: enableADController
          value: "true"
        command: ["sh", "-c"]
        args:
        - echo "Starting kubelet flag set to $enableADController";
          ifFlagTrueNum=`cat /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep enable-controller-attach-detach=$enableADController | grep -v grep | wc -l`;
          echo "ifFlagTrueNum is $ifFlagTrueNum";
          if [ "$ifFlagTrueNum" = "0" ]; then
              curValue="true";
              if [ "$enableADController" = "true" ]; then
                  curValue="false";
              fi;
              sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
              restartKubelet="true";
              echo "current value is $curValue, change to expect "$enableADController;
          fi;
          if [ "$restartKubelet" = "true" ]; then
              /nsenter --mount=/proc/1/ns/mnt systemctl daemon-reload;
              /nsenter --mount=/proc/1/ns/mnt service kubelet restart;
              echo "restart kubelet";
          fi;
          while true;
          do
              sleep 5;
          done;
        volumeMounts:
        - name: etc
          mountPath: /host/etc
      volumes:
      - name: etc
        hostPath:
          path: /etc
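The container's shell script boils down to a single substitution in the kubelet drop-in file, followed by a kubelet restart. The following offline sketch reproduces that substitution on a sample configuration line; the sample content is ours, and real ACK nodes may format /etc/systemd/system/kubelet.service.d/10-kubeadm.conf differently.

```python
import re

# Flip --enable-controller-attach-detach from false to true: the same edit
# the DaemonSet's sed command applies to the kubelet drop-in file.
sample = "ExecStart=/usr/bin/kubelet --enable-controller-attach-detach=false --v=2"

def enable_ad_controller(conf: str) -> str:
    """Switch the kubelet to the setting required by CSI."""
    return re.sub(r"enable-controller-attach-detach=false",
                  "enable-controller-attach-detach=true", conf)

updated = enable_ad_controller(sample)
print("enable-controller-attach-detach=true" in updated)  # True
```

If the flag is already set to true, the substitution is a no-op, which is why the DaemonSet first counts matching lines and skips the kubelet restart when no change is needed.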