Container Service for Kubernetes (ACK) Edge Pro clusters support Logical Volume Manager (LVM) for local storage management. LVM automatically manages the lifecycle of logical volumes and schedules volumes based on the storage capacity of nodes. By defining the topology of local disks on nodes, you can manage local storage as persistent volumes (PVs) and persistent volume claims (PVCs).
Prerequisites
Local disks are available on cluster nodes.
The TCP port 1736 on the node where the storage is deployed can be accessed from the cloud node.
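Before you proceed, you can sanity-check the prerequisites on the storage node. The following commands are an illustrative sketch, not part of the official procedure; adapt device names to your host.

```shell
# Run on the storage node (illustrative checks only).
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT   # list local disks; unmounted disks can be used for LVM
ss -tln | grep ':1736'               # check whether anything is listening on TCP port 1736
```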
Step 1: Install the node-resource-manager, csi-plugin, and csi-provisioner add-ons
The node-resource-manager, csi-plugin, and csi-provisioner add-ons provide Container Storage Interface (CSI) support for local LVM volumes. You must install all three add-ons before you can use LVM to manage local storage.
Log on to the Container Service Management Console. In the left-side navigation pane, click Clusters.
On the Clusters page, click the name of the cluster that you want to manage, and then go to the Add-ons page from the left-side navigation pane.
On the Add-ons page, click the Storage tab, find the node-resource-manager, csi-plugin, and csi-provisioner add-ons, and then click Install.
In the dialog box that appears, click OK.
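After the installation completes, you can confirm that the add-on pods are running. This is an optional check; the exact pod names may vary by add-on version.

```shell
# Optional: confirm the three add-on pods are in the Running state.
kubectl -n kube-system get pods | grep -E 'node-resource-manager|csi-plugin|csi-provisioner'
```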
Step 2: Configure the VolumeGroup
A VolumeGroup defines which local disks on a node are managed by LVM. You configure VolumeGroups by creating a ConfigMap that maps node labels to disk topologies. The term VolumeGroup in this topic corresponds to the volumegroup field in the ConfigMap YAML.
To ensure data security, the add-ons do not delete VolumeGroups or physical volumes (LVM physical volumes, distinct from Kubernetes PersistentVolumes). Before you can redefine a VolumeGroup, you must delete the existing VolumeGroup.
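If you do need to delete an existing VolumeGroup before redefining it, you can do so manually with standard LVM commands on the node. The following is a hedged sketch only; these operations are destructive, so verify the group and device names first. The names mirror the example configuration in this topic.

```shell
# Manual cleanup on the node (destructive; double-check names before running).
lvs                           # list logical volumes in each VolumeGroup
lvremove volumegroup1         # remove all logical volumes in volumegroup1
vgremove volumegroup1         # remove the VolumeGroup itself
pvremove /dev/sdb1 /dev/sdb2  # release the underlying LVM physical volumes
```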
Use the following YAML template to configure a ConfigMap that defines the topology for the VolumeGroup:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  volumegroup: |-
    volumegroup:
    - name: volumegroup1
      key: kubernetes.io/storagetype
      operator: In
      value: lvm
      topology:
        type: device
        devices:
        - /dev/sdb1
        - /dev/sdb2
        - /dev/sdc

The following table describes the parameters.
Parameter
Description
name
The name of the VolumeGroup.
key
The key used to match the key of the label on the nodes.
operator
The label selector operator. Valid values: In, NotIn, Exists, DoesNotExist. See the operator values table below.
value
The value used to match the value of the label that has the specified key.
topology
The topology of devices on the node.
topology.devices
The paths of local disks on the node. The specified disks are added to the VolumeGroup.

Operator values

Value
Behavior

In
A match is found only if the value of the value parameter is the same as the value of the node label that has the specified key.

NotIn
A match is found only if the value of the value parameter is different from the value of the node label that has the specified key.

Exists
A match is found when the node has a label that has the specified key.

DoesNotExist
A match is found when the node does not have a label that has the specified key.

Add labels to the nodes.
After you create the ConfigMap, add the following labels to the storage nodes:
Custom topology label: Add a custom label to storage nodes based on the label specified in the ConfigMap. This allows you to select nodes that meet the topological requirements. The label specified in the example above is kubernetes.io/storagetype=lvm.
Local storage enablement label: Add the alibabacloud.com/edge-enable-localstorage='true' label to a storage node so that the pod of the local storage management component can be scheduled to the node.
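Both labels can be added with kubectl. The node name below is a placeholder; replace it with the name of your storage node.

```shell
# Replace <node-name> with the name of your storage node.
kubectl label node <node-name> kubernetes.io/storagetype=lvm
kubectl label node <node-name> alibabacloud.com/edge-enable-localstorage='true'
```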
After you add both labels, the node-resource-manager add-on on the node automatically creates a physical volume based on the preceding configurations and adds the physical volume to the VolumeGroup.
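The operator semantics from the table above can be sketched as a small Bash function. This is an illustration of the matching rules only, not code from any add-on; the function name and label set are hypothetical, mirroring the Step 2 example.

```shell
#!/usr/bin/env bash
# Illustrative only: mimics how a VolumeGroup selector (key/operator/value)
# is matched against a node's labels, per the operator table above.

# matches KEY OPERATOR VALUE LABEL...  -> exit 0 if the selector matches
matches() {
  local key="$1" op="$2" val="$3"; shift 3
  local found="" label_value=""
  for label in "$@"; do
    if [[ "${label%%=*}" == "$key" ]]; then
      found=yes
      label_value="${label#*=}"
    fi
  done
  case "$op" in
    In)           [[ "$found" == yes && "$label_value" == "$val" ]] ;;
    NotIn)        [[ "$found" == yes && "$label_value" != "$val" ]] ;;
    Exists)       [[ "$found" == yes ]] ;;
    DoesNotExist) [[ "$found" != yes ]] ;;
    *)            return 1 ;;
  esac
}

# A node labeled as described in Step 2:
node_labels=(kubernetes.io/storagetype=lvm alibabacloud.com/edge-enable-localstorage=true)

matches kubernetes.io/storagetype In lvm "${node_labels[@]}" && echo "selector matches this node"
```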
Step 3: Create a PVC and deploy a workload
Use the following YAML file to define a PVC that specifies the StorageClass, and run the kubectl apply -f ****.yaml command to create the PVC. Each PVC corresponds to one logical volume on the node. After the pod is created, the logical volume is mounted to the pod.
In this example, the default storageClassName is csi-local-lvm.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: csi-local-lvm
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: local-test
  name: local-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: local-test
  template:
    metadata:
      labels:
        k8s-app: local-test
    spec:
      hostNetwork: true
      containers:
      - image: nginx:1.15.7-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        volumeMounts:
        - name: local-pvc
          mountPath: /data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      tolerations:
      - operator: Exists
      nodeSelector:
        alibabacloud.com/is-edge-worker: "true"
      volumes:
      - name: local-pvc
        persistentVolumeClaim:
          claimName: lvm-pvc-test

Verify the result
Run the following command to check whether the logical volume is mounted:
kubectl exec -it local-test-564dfcf6dc-qhfsf -- sh
/ # ls /data

Expected output:

lost+found

The output indicates that the logical volume is mounted to the pod.
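You can also verify provisioning from the cluster side. The following optional checks assume the PVC name from the example above; the PVC should be in the Bound state, with a corresponding PV created by csi-provisioner.

```shell
# Optional: the PVC should be Bound, with a dynamically provisioned PV.
kubectl get pvc lvm-pvc-test
kubectl get pv
```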