
Container Service for Kubernetes:Use LVM to dynamically create local volumes

Last Updated:Feb 28, 2026

Container Service for Kubernetes (ACK) Edge Pro clusters support Logical Volume Manager (LVM) for local storage management. LVM automatically manages the lifecycle of logical volumes and schedules volumes based on node storage capacity. You manage local storage as persistent volumes (PVs) and persistent volume claims (PVCs) by defining the topological relationship of local disks on nodes.

Prerequisites

  • Local disks are available on cluster nodes.

  • TCP port 1736 on the node where the local storage is deployed is accessible from the cloud nodes.
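Port reachability can be checked from a cloud node with a tool such as nc; the IP address below is a placeholder for the storage node:

```shell
# Run from a cloud node. Replace 192.168.0.10 with the storage node's IP address.
nc -vz 192.168.0.10 1736
```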

Step 1: Install the node-resource-manager, csi-plugin, and csi-provisioner add-ons

The node-resource-manager, csi-plugin, and csi-provisioner add-ons provide Container Storage Interface (CSI) support for local LVM volumes. You must install all three add-ons before you can use LVM to manage local storage.

  1. Log on to the Container Service Management Console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Operations > Add-ons in the left-side navigation pane.

  3. On the Add-ons page, click the Storage tab, find the node-resource-manager, csi-plugin, and csi-provisioner add-ons, and then click Install.

  4. In the dialog box that appears, click OK.
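After the installation completes, you can confirm that the add-on pods are running; the exact pod names depend on the add-on versions:

```shell
# List the add-on pods in the kube-system namespace.
kubectl get pods -n kube-system | grep -E 'node-resource-manager|csi-plugin|csi-provisioner'
```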

Step 2: Configure the VolumeGroup

A VolumeGroup defines which local disks on a node are managed by LVM. You configure VolumeGroups by creating a ConfigMap that maps node labels to disk topologies. The VolumeGroup in prose corresponds to the volumegroup field in the ConfigMap YAML.

Note

To ensure data security, the add-ons do not delete VolumeGroups or physical volumes (LVM physical volumes, distinct from Kubernetes PersistentVolumes). Before you can redefine a VolumeGroup, you must delete the existing VolumeGroup.

  1. Use the following YAML template to configure a ConfigMap that defines the topology for the VolumeGroup:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: node-resource-topo
      namespace: kube-system
    data:
      volumegroup: |-
        volumegroup:
        - name: volumegroup1
          key: kubernetes.io/storagetype
          operator: In
          value: lvm
          topology:
            type: device
            devices:
            - /dev/sdb1
            - /dev/sdb2
            - /dev/sdc

    The parameters are described below.

    • name: The name of the VolumeGroup.

    • key: The key that is matched against the keys of the labels on the nodes.

    • operator: The label selector operator. Valid values: In, NotIn, Exists, DoesNotExist. See the operator values below.

    • value: The value that is matched against the value of the node label that has the specified key.

    • topology: The topology of devices on the node. topology.devices specifies the paths of local disks on the node. The specified disks are added to the VolumeGroup.

    Operator values:

    • In: A match is found only if the value parameter equals the value of the node label that has the specified key.

    • NotIn: A match is found only if the value parameter differs from the value of the node label that has the specified key.

    • Exists: A match is found if the node has a label with the specified key.

    • DoesNotExist: A match is found if the node does not have a label with the specified key.
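    Assuming the ConfigMap above is saved to a file named node-resource-topo.yaml (the file name is arbitrary), apply it:

    ```shell
    kubectl apply -f node-resource-topo.yaml
    # Confirm that the ConfigMap was created.
    kubectl get configmap node-resource-topo -n kube-system
    ```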
  2. Add labels to the nodes.

    After you create the ConfigMap, add the following labels to the storage nodes:

    • Custom topology label: Add a custom label to storage nodes based on the label specified in the ConfigMap. This allows you to select nodes that meet the topological requirements. The label specified in the example above is kubernetes.io/storagetype=lvm.

    • Local storage enablement label: Add the alibabacloud.com/edge-enable-localstorage='true' label to a storage node so that the pod of the local storage management component can be scheduled to the node.

    After you add both labels, the node-resource-manager add-on on the node automatically creates a physical volume based on the preceding configurations and adds the physical volume to the VolumeGroup.
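    The labels can be added with kubectl, and the result can then be inspected on the node with standard LVM tools; my-storage-node is a placeholder node name:

    ```shell
    # Custom topology label that matches the selector in the ConfigMap.
    kubectl label node my-storage-node kubernetes.io/storagetype=lvm
    # Allow the local storage management pod to be scheduled to the node.
    kubectl label node my-storage-node alibabacloud.com/edge-enable-localstorage=true
    # On the storage node, verify that the VolumeGroup and its physical volumes exist.
    vgs volumegroup1
    pvs
    ```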

Step 3: Create a PVC and deploy a workload

Use the following YAML file to define a PVC that specifies the StorageClass and a Deployment that mounts the PVC. Run the kubectl apply -f ****.yaml command to create the resources. One PVC corresponds to one logical volume on the node. After the pod is created, the logical volume is mounted to the pod.

Note

In this example, the default storageClassName is csi-local-lvm.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
  storageClassName: csi-local-lvm

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: local-test
  name: local-test
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: local-test
  template:
    metadata:
      labels:
        k8s-app: local-test
    spec:
      hostNetwork: true
      containers:
      - image: nginx:1.15.7-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        resources: {}
        volumeMounts:
          - name: local-pvc
            mountPath: /data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      tolerations:
      - operator: Exists
      nodeSelector:
        alibabacloud.com/is-edge-worker: "true"
      volumes:
      - name: local-pvc
        persistentVolumeClaim:
          claimName: lvm-pvc-test
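Assuming the YAML above is saved to a file named lvm-pvc-test.yaml (the file name is arbitrary), you can create the resources and check that the PVC is bound:

```shell
kubectl apply -f lvm-pvc-test.yaml
# The PVC binds once the pod is scheduled onto a node that matches
# the VolumeGroup topology.
kubectl get pvc lvm-pvc-test
kubectl get pods -l k8s-app=local-test
```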

Verify the result

Run the following command to check whether the logical volume is mounted:

kubectl exec -it local-test-564dfcf6dc-qhfsf -- sh
/ # ls /data

Expected output:

lost+found

The output indicates that the logical volume is mounted to the pod.