The FlexVolume plug-in is deprecated. New Container Service for Kubernetes (ACK) clusters no longer support FlexVolume. For existing clusters, we recommend that you upgrade from FlexVolume to Container Storage Interface (CSI). This topic describes how to use CSI to take over the statically provisioned File Storage NAS (NAS) volumes that are managed by FlexVolume.
Differences between FlexVolume and CSI
The following table describes the differences between CSI and FlexVolume.
Plug-in | Component | kubelet parameter
--- | --- | ---
CSI | csi-plugin, csi-provisioner | The kubelet parameters required by the CSI plug-in are different from those required by the FlexVolume plug-in. To run the CSI plug-in, you must set the kubelet parameter --enable-controller-attach-detach to true.
FlexVolume | FlexVolume, alicloud-disk-controller, and alicloud-nas-controller | The kubelet parameters required by the FlexVolume plug-in are different from those required by the CSI plug-in. To run the FlexVolume plug-in, you must set the kubelet parameter --enable-controller-attach-detach to false.
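To check which value is currently in effect on a node, you can inspect the kubelet drop-in file that is also modified in Step 6 of this topic. A minimal sketch, run directly on the node (the file path is the one used by the DaemonSet in Step 6):
# Print the current setting of the attach/detach flag on this node.
grep -o 'enable-controller-attach-detach=[a-z]*' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf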
Scenarios
FlexVolume is installed in your cluster and used to mount statically provisioned NAS volumes. If you also have disk volumes managed by FlexVolume in the cluster, see Use csi-compatible-controller to migrate from FlexVolume to CSI.
Usage notes
When you upgrade from FlexVolume to CSI, persistent volume claims (PVCs) are recreated. As a result, pods are recreated and your business is interrupted. We recommend that you upgrade to CSI, recreate PVCs, modify applications, or perform other operations that result in pod restarts during off-peak hours.
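To scope the impact before you schedule the change, you can list the PVs in the cluster together with the FlexVolume driver that each one uses, if any. A minimal sketch (the custom-columns output prints <none> for PVs that are not managed by FlexVolume):
# List every PV and its FlexVolume driver; rows that show "alicloud/nas" are
# candidates for this migration.
kubectl get pv -o custom-columns=NAME:.metadata.name,DRIVER:.spec.flexVolume.driver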
Preparations
Manually install CSI
Create files named csi-plugin.yaml and csi-provisioner.yaml.
Run the following command to deploy csi-plugin and csi-provisioner in the cluster:
kubectl apply -f csi-plugin.yaml -f csi-provisioner.yaml
Run the following command to check whether CSI runs as expected:
kubectl get pods -nkube-system | grep csi
Expected output:
csi-plugin-577mm                   4/4     Running   0          3d20h
csi-plugin-k9mzt                   4/4     Running   0          41d
csi-provisioner-6b58f46989-8wwl5   9/9     Running   0          41d
csi-provisioner-6b58f46989-qzh8l   9/9     Running   0          6d20h
If the preceding output is returned, CSI runs as expected.
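As an optional extra check, you can confirm that the NAS CSI driver has registered on the nodes. A minimal sketch that greps the CSINode objects for the driver name used later in this topic:
# Each node on which the driver registered successfully produces a match.
kubectl get csinode -o yaml | grep nasplugin.csi.alibabacloud.com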
In this example, FlexVolume is used to mount a statically provisioned NAS volume to a pod created by a StatefulSet. This example shows how to use CSI to take over the NAS volume that is mounted by using FlexVolume. The following figure shows the procedure.
Step 1: Check the status of the volume in the cluster
Run the following command to query the status of the pod:
kubectl get pod
Expected output:
NAME           READY   STATUS    RESTARTS   AGE
nas-static-1   1/1     Running   0          11m
Run the following command to query the name of the PVC used by the pod:
kubectl describe pod nas-static-1 |grep ClaimName
Expected output:
ClaimName: nas-pvc
Run the following command to query the status of the PVC:
kubectl get pvc
Expected output:
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nas-pvc   Bound    nas-pv   512Gi      RWX                           7m23s
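You can also confirm that the bound PV is still managed by FlexVolume before you start the migration. A minimal sketch, assuming the PV name shown in the preceding output:
# Prints "alicloud/nas" if the PV is defined with the FlexVolume NAS driver.
kubectl get pv nas-pv -o jsonpath='{.spec.flexVolume.driver}{"\n"}'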
Step 2: Create a statically provisioned NAS volume managed by CSI by defining a PVC and PV
Method 1: Use the Flexvolume2CSI CLI to convert PVs and PVCs
Convert PVs and PVCs managed by FlexVolume to PVs and PVCs managed by CSI.
Run the following command to create a PVC and PV for the NAS volume:
kubectl apply -f nas-pv-pvc-csi.yaml
nas-pv-pvc-csi.yaml is the YAML file that defines the PVC and PV managed by CSI after you use the Flexvolume2CSI CLI to convert the original PVC and PV.
Run the following command to query the status of the PVC:
kubectl get pvc
Expected output:
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nas-pvc       Bound    nas-pv       512Gi      RWX            nas            30m
nas-pvc-csi   Bound    nas-pv-csi   512Gi      RWX            nas            2s
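Before you switch the application over, you can verify that the new CSI PV points to the same NAS file system and export path as the FlexVolume PV. A minimal sketch, assuming the PV names shown above:
# Both commands should print the same server and path.
kubectl get pv nas-pv -o jsonpath='{.spec.flexVolume.options.server}:{.spec.flexVolume.options.path}{"\n"}'
kubectl get pv nas-pv-csi -o jsonpath='{.spec.csi.volumeAttributes.server}:{.spec.csi.volumeAttributes.path}{"\n"}'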
Method 2: Save PVCs and PVs managed by FlexVolume and change the volume plug-in
Save the PV and PVC objects managed by FlexVolume.
Run the following command to save the PVC object managed by FlexVolume:
kubectl get pvc nas-pvc -oyaml > nas-pvc-flexvolume.yaml
cat nas-pvc-flexvolume.yaml
Expected output:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 512Gi
  selector:
    matchLabels:
      alicloud-pvname: nas-pv
  storageClassName: nas
Run the following command to save the persistent volume (PV) object managed by FlexVolume:
kubectl get pv nas-pv -oyaml > nas-pv-flexvolume.yaml
cat nas-pv-flexvolume.yaml
Expected output:
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    alicloud-pvname: nas-pv
  name: nas-pv
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 512Gi
  flexVolume:
    driver: alicloud/nas
    options:
      path: /aliyun
      server: ***.***.nas.aliyuncs.com
      vers: "3"
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nas
Create a statically provisioned NAS volume managed by CSI by defining a PVC and PV.
Create a file named nas-pv-pvc-csi.yaml and add the following YAML content to the file to create a statically provisioned NAS volume managed by CSI:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-pvc-csi
  namespace: default
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 512Gi
  selector:
    matchLabels:
      alicloud-pvname: nas-pv-csi
  storageClassName: nas
---
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    alicloud-pvname: nas-pv-csi
  name: nas-pv-csi
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 512Gi
  csi:
    driver: nasplugin.csi.alibabacloud.com
    volumeHandle: nas-pv-csi
    volumeAttributes:
      server: "***.***.nas.aliyuncs.com"
      path: "/aliyun"
  mountOptions:
  - nolock,tcp,noresvport
  - vers=3
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nas
Run the following command to create a PVC and PV for the NAS volume:
kubectl apply -f nas-pv-pvc-csi.yaml
Run the following command to query the status of the PVC:
kubectl get pvc
Expected output:
NAME          STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nas-pvc       Bound    nas-pv       512Gi      RWX            nas            30m
nas-pvc-csi   Bound    nas-pv-csi   512Gi      RWX            nas            2s
Step 3: Change the PVC associated with the application
Run the following command to modify the configuration file of the application:
kubectl edit sts nas-static
Change the PVC to the one managed by CSI.
volumes:
  - name: pvc-nas
    persistentVolumeClaim:
      claimName: nas-pvc-csi
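If you prefer a non-interactive change, the same edit can be sketched as a kubectl patch command. This sketch assumes the NAS volume is the first entry in .spec.template.spec.volumes; adjust the index if your StatefulSet defines other volumes first:
kubectl patch statefulset nas-static --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/volumes/0/persistentVolumeClaim/claimName","value":"nas-pvc-csi"}]'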
Run the following command to check whether the pod is restarted:
kubectl get pod
Expected output:
NAME           READY   STATUS    RESTARTS   AGE
nas-static-1   1/1     Running   0          70s
Run the following command to query the mount information:
kubectl exec nas-static-1 -- mount |grep nas
Expected output:
***.***.nas.aliyuncs.com:/aliyun on /var/lib/kubelet/pods/ac02ea3f-125f-4b38-9bcf-9b117f62***/volumes/kubernetes.io~csi/nas-pv-csi/mount type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,noresvport,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.XX.XX,mountvers=3,mountport=2049,mountproto=tcp,local_lock=all,addr=192.168.XX.XX)
If the preceding output is returned, the pod is migrated.
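Optionally, you can also run a quick read/write check inside the pod. The in-container mount path (/data in this sketch) is an assumption; replace it with the mountPath that your StatefulSet actually defines:
kubectl exec nas-static-1 -- sh -c 'df -h | grep nas.aliyuncs.com && touch /data/csi-migration-check && ls -l /data/csi-migration-check'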
Step 4: Uninstall FlexVolume
Log on to the OpenAPI Explorer console and call the UnInstallClusterAddons operation to uninstall the FlexVolume plug-in.
ClusterId: Set the value to the ID of your cluster. You can view the cluster ID on the Basic Information tab of the cluster details page of your cluster.
name: Set the value to Flexvolume.
For more information, see Uninstall components from a cluster.
Run the following command to delete the alicloud-disk-controller and alicloud-nas-controller components:
kubectl delete deploy -nkube-system alicloud-disk-controller alicloud-nas-controller
Run the following command to check whether the FlexVolume plug-in is uninstalled from your cluster:
kubectl get pods -n kube-system | grep 'flexvolume\|alicloud-disk-controller\|alicloud-nas-controller'
If no output is displayed, the FlexVolume plug-in is uninstalled from your cluster.
Run the following command to delete the StorageClasses that use FlexVolume from the cluster. The provisioner of these StorageClasses is alicloud/disk.
kubectl delete storageclass alicloud-disk-available alicloud-disk-efficiency alicloud-disk-essd alicloud-disk-ssd
Expected output:
storageclass.storage.k8s.io "alicloud-disk-available" deleted
storageclass.storage.k8s.io "alicloud-disk-efficiency" deleted
storageclass.storage.k8s.io "alicloud-disk-essd" deleted
storageclass.storage.k8s.io "alicloud-disk-ssd" deleted
If the preceding output is displayed, the StorageClasses are deleted from your cluster.
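To confirm that no StorageClass with the FlexVolume provisioner remains, you can run the following sketch; it prints nothing if all of them have been removed:
kubectl get sc -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner | grep alicloud/disk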
Step 5: Call the API to install CSI
Log on to the OpenAPI Explorer console and call the InstallClusterAddons operation to install the CSI plug-in.
ClusterId: Set the value to the ID of your cluster.
name: Set the value to csi-provisioner.
version: The latest version is automatically specified. For more information about CSI versions, see csi-provisioner.
For more information about how to install the CSI plug-in, see Install a component in an ACK cluster.
Run the following command to check whether the CSI plug-in runs as expected in your cluster:
kubectl get pods -nkube-system | grep csi
Expected output:
csi-plugin-577mm                   4/4     Running   0          3d20h
csi-plugin-k9mzt                   4/4     Running   0          41d
csi-provisioner-6b58f46989-8wwl5   9/9     Running   0          41d
csi-provisioner-6b58f46989-qzh8l   9/9     Running   0          6d20h
If the preceding output is displayed, the CSI plug-in runs as expected in the cluster.
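Optionally, you can also list the CSIDriver objects registered in the cluster. Whether these objects exist, and which drivers they list, depends on the csi-provisioner version, so treat this as an extra, non-authoritative check:
kubectl get csidriver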
Step 6: Modify the configurations of existing nodes
Create a YAML file based on the following code block. Then, deploy the YAML file to modify the kubelet parameters on which the CSI plug-in relies. This DaemonSet changes the value of the kubelet parameter --enable-controller-attach-detach on existing nodes to true. After this step is complete, you can delete the DaemonSet.
When you deploy the YAML file, kubelet is restarted. We recommend that you evaluate the impact on the applications before you deploy the YAML file.
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: kubelet-set
spec:
selector:
matchLabels:
app: kubelet-set
template:
metadata:
labels:
app: kubelet-set
spec:
tolerations:
- operator: "Exists"
hostNetwork: true
hostPID: true
containers:
- name: kubelet-set
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: registry.cn-hangzhou.aliyuncs.com/acs/csi-plugin:v1.26.5-56d1e30-aliyun
imagePullPolicy: "Always"
env:
- name: enableADController
value: "true"
command: ["sh", "-c"]
args:
- echo "Starting kubelet flag set to $enableADController";
ifFlagTrueNum=`cat /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf | grep enable-controller-attach-detach=$enableADController | grep -v grep | wc -l`;
echo "ifFlagTrueNum is $ifFlagTrueNum";
if [ "$ifFlagTrueNum" = "0" ]; then
curValue="true";
if [ "$enableADController" = "true" ]; then
curValue="false";
fi;
sed -i "s/enable-controller-attach-detach=$curValue/enable-controller-attach-detach=$enableADController/" /host/etc/systemd/system/kubelet.service.d/10-kubeadm.conf;
restartKubelet="true";
echo "current value is $curValue, change to expect "$enableADController;
fi;
if [ "$restartKubelet" = "true" ]; then
/nsenter --mount=/proc/1/ns/mnt systemctl daemon-reload;
/nsenter --mount=/proc/1/ns/mnt service kubelet restart;
echo "restart kubelet";
fi;
while true;
do
sleep 5;
done;
volumeMounts:
- name: etc
mountPath: /host/etc
volumes:
- name: etc
hostPath:
path: /etc
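After the DaemonSet has run on every node, you can check its logs and then remove it, as noted above. A minimal sketch, assuming the DaemonSet was created in the default namespace (the YAML above does not specify a namespace):
# Review what the DaemonSet changed on each node.
kubectl logs -l app=kubelet-set --tail=20
# Remove the DaemonSet after the kubelet flag has been updated.
kubectl delete daemonset kubelet-set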