After you use volume groups (VGs) to virtualize disks, you can use Logical Volume Manager (LVM) to divide the VGs into logical volumes (LVs) and mount the LVs to pods. This topic describes how to use LVs.
Background Information
To enable pods to use the storage of the node, you can use hostPath volumes or local volumes. However, hostPath volumes and local volumes have the following limits:
Kubernetes does not manage the lifecycle of hostPath volumes and local volumes. You must manually manage and maintain the volumes.
When multiple pods use the same local storage, these pods share the same directory or each pod uses a subdirectory. As a result, storage isolation cannot be implemented among these pods.
When multiple pods use the same local storage, the input/output operations per second (IOPS) and throughput of each pod equal those of the entire storage. You cannot limit the IOPS and throughput of each pod.
When you create a pod that uses local storage, the amount of storage available on each node is unknown to the scheduler. As a result, pods that use local volumes cannot be scheduled properly.
Container Service for Kubernetes (ACK) uses LVs to resolve the preceding issues.
Introduction
ACK provides the following features for LVs:
Lifecycle management of LVs: automatic creation, deletion, mounting, and unmounting.
Expansion of LVs.
Monitoring of LVs.
IOPS limiting of LVs.
Automatic operations and maintenance of VGs. This enables you to manage the local storage of nodes.
Storage usage monitoring for clusters that use LVs.
Usage notes
LVs cannot be migrated across nodes. Therefore, LVs are not suitable for high-availability scenarios.
Storage usage monitoring for clusters that use LVs is not supported in the current version. To initialize VGs, you must manually initialize local storage resources or configure automatic initialization of local storage resources. Both methods require knowledge of local storage resources. If you are not familiar with local storage resources, we recommend that you use cloud storage resources instead, such as disks or file systems managed by Container Network File System (CNFS).
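If you choose to initialize VGs manually, a minimal sketch of the commands to run on a node is shown below. This assumes that /dev/vdb is a spare, unused data disk on the node and that volumegroup1 is the VG name referenced by the StorageClass later in this topic:

```shell
# Initialize a physical volume on the spare data disk (assumption: /dev/vdb is unused).
pvcreate /dev/vdb
# Create the VG that the StorageClass in Step 3 references.
vgcreate volumegroup1 /dev/vdb
# Verify the VG and check its free capacity.
vgs volumegroup1
```

These are standard LVM2 commands and require root privileges on the node.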
Architecture
Basic features of LVs, such as lifecycle management, expansion, mounting, and formatting, are implemented by CSI-Provisioner and CSI-Plugin.
The following table describes the other components.
| Item | Description |
| --- | --- |
| Storage manager | Manages operations and maintenance (O&M) of VGs and monitors the storage usage of LVs. You can also configure worker nodes to manage and maintain VGs. |
| CustomResourceDefinition (CRD) | Stores local storage information of worker nodes, such as the storage capacity and VGs. |
| LV scheduler | Schedules PVCs and monitors storage usage for clusters that use LVs. |
Step 1: Grant CSI-Plugin and CSI-Provisioner the RBAC permissions to manage Secrets
Clusters that run Kubernetes 1.20 and earlier
In these clusters, CSI-Provisioner shares the same ServiceAccount with CSI-Plugin. You must grant clusterrole/alicloud-csi-plugin the role-based access control (RBAC) permissions to manage Secrets.
Run the following command to check whether clusterrole/alicloud-csi-plugin has the permissions to create Secrets:
JSONPATH='{range .rules[*]}{@.resources}:{@.verbs}{"\n"}{end}'
kubectl get clusterrole alicloud-csi-plugin -o jsonpath="$JSONPATH" | grep secrets
Expected output:
["secrets"]:["get","list"]
If clusterrole/alicloud-csi-plugin does not have the permissions to create Secrets, run the following command to grant the permissions:
kubectl patch clusterrole alicloud-csi-plugin --type='json' -p='[{"op": "add", "path": "/rules/0", "value":{ "apiGroups": [""], "resources": ["secrets"], "verbs": ["create"]}}]'
Expected output:
clusterrole.rbac.authorization.k8s.io/alicloud-csi-plugin patched
Run the following command again to check whether clusterrole/alicloud-csi-plugin has the permissions to create Secrets:
JSONPATH='{range .rules[*]}{@.resources}:{@.verbs}{"\n"}{end}'
kubectl get clusterrole alicloud-csi-plugin -o jsonpath="$JSONPATH" | grep secrets
Expected output:
["secrets"]:["create"]
["secrets"]:["get","list"]
The output indicates that clusterrole/alicloud-csi-plugin has the permissions to create Secrets.
Clusters that run Kubernetes 1.22 and later
Use the following YAML content to create and grant permissions to a ServiceAccount:
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: alibaba-cloud-csi-local
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: alibaba-cloud-csi-local
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update", "create", "delete", "patch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "update", "patch", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "create", "list", "watch", "delete", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: alibaba-cloud-csi-local
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: alibaba-cloud-csi-local
subjects:
- kind: ServiceAccount
name: alibaba-cloud-csi-local
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: alibaba-cloud-csi-local
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["csi-local-plugin-cert"]
verbs: ["get"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create"]
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["csi-plugin", "ack-cluster-profile"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: alibaba-cloud-csi-local
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: alibaba-cloud-csi-local
subjects:
- kind: ServiceAccount
name: alibaba-cloud-csi-local
namespace: kube-system
Step 2: Deploy CSI-Plugin and CSI-Provisioner
CSI components for LVs consist of CSI-Plugin and CSI-Provisioner. CSI-Plugin is used to mount and unmount LVs. CSI-Provisioner is used to create LVs and persistent volumes (PVs).
Note: Replace {{ regionId }} in the following YAML content with the region ID of your cluster.
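For example, you can substitute the placeholder with sed before you apply the file. In the following sketch, cn-hangzhou is an example region ID and csi-local-plugin.yaml is a placeholder file name for the saved YAML content:

```shell
# Replace the {{ regionId }} placeholder with an actual region ID (example: cn-hangzhou).
sed -i 's/{{ regionId }}/cn-hangzhou/g' csi-local-plugin.yaml
```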
Deploy LVM CSI-Plugin in a cluster that runs Kubernetes 1.20 or earlier
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
name: localplugin.csi.alibabacloud.com
spec:
attachRequired: false
podInfoOnMount: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: csi-local-plugin
name: csi-local-plugin
namespace: kube-system
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: csi-local-plugin
template:
metadata:
labels:
app: csi-local-plugin
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: NotIn
values:
- virtual-kubelet
containers:
- args:
- '--v=5'
- '--csi-address=/csi/csi.sock'
- >-
--kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: >-
registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-node-driver-registrar:v1.3.0-6e9fff3-aliyun
imagePullPolicy: Always
name: driver-registrar
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: plugin-dir
- mountPath: /registration
name: registration-dir
- args:
- '--endpoint=$(CSI_ENDPOINT)'
- '--v=5'
- '--nodeid=$(KUBE_NODE_NAME)'
- '--driver=localplugin.csi.alibabacloud.com'
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: SERVICE_PORT
value: '11293'
- name: CSI_ENDPOINT
value: >-
unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
image: >-
registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-plugin:v1.20.7-aafce42-aliyun
imagePullPolicy: Always
name: csi-localplugin
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- SYS_ADMIN
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/kubelet
mountPropagation: Bidirectional
name: pods-mount-dir
- mountPath: /dev
mountPropagation: HostToContainer
name: host-dev
- mountPath: /var/log/
name: host-log
- mountPath: /mnt
mountPropagation: Bidirectional
name: quota-path-dir
- mountPath: /tls/local/grpc
name: tls-token-dir
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
hostPID: true
priorityClassName: system-node-critical
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: csi-admin
serviceAccountName: csi-admin
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
volumes:
- name: tls-token-dir
secret:
defaultMode: 420
secretName: csi-local-plugin-cert
- hostPath:
path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
type: DirectoryOrCreate
name: plugin-dir
- hostPath:
path: /var/lib/kubelet/plugins_registry
type: DirectoryOrCreate
name: registration-dir
- hostPath:
path: /var/lib/kubelet
type: Directory
name: pods-mount-dir
- hostPath:
path: /dev
type: ''
name: host-dev
- hostPath:
path: /var/log/
type: ''
name: host-log
- hostPath:
path: /mnt
type: Directory
name: quota-path-dir
updateStrategy:
rollingUpdate:
maxUnavailable: 10%
type: RollingUpdate
Deploy LVM CSI-Provisioner in a cluster that runs Kubernetes 1.20 or earlier
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: csi-local-provisioner
name: csi-local-provisioner
namespace: kube-system
spec:
selector:
matchLabels:
app: csi-local-provisioner
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: csi-local-provisioner
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: node-role.kubernetes.io/master
operator: Exists
weight: 1
containers:
- args:
- --csi-address=$(ADDRESS)
- --feature-gates=Topology=True
- --volume-name-prefix=local
- --strict-topology=true
- --timeout=150s
- --extra-create-metadata=true
- --enable-leader-election=true
- --leader-election-type=leases
- --retry-interval-start=500ms
- --v=5
env:
- name: ADDRESS
value: /socketDir/csi.sock
image: registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-provisioner:v1.6.0-71838bd-aliyun
imagePullPolicy: Always
name: external-local-provisioner
volumeMounts:
- mountPath: /socketDir
name: socket-dir
- name: csi-localprovisioner
securityContext:
privileged: true
image: registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-plugin:v1.20.7-aafce42-aliyun
imagePullPolicy: "Always"
args:
- "--endpoint=$(CSI_ENDPOINT)"
- "--v=2"
- "--driver=localplugin.csi.alibabacloud.com"
env:
- name: CSI_ENDPOINT
value: unix://var/lib/kubelet/csi-provisioner/localplugin.csi.alibabacloud.com/csi.sock
- name: SERVICE_TYPE
value: "provisioner"
- name: SERVICE_PORT
value: "11290"
volumeMounts:
- name: socket-dir
mountPath: /var/lib/kubelet/csi-provisioner/localplugin.csi.alibabacloud.com
- mountPath: /var/log/
name: host-log
- mountPath: /tls/local/grpc/
name: tls-token-dir
- args:
- --v=5
- --csi-address=$(ADDRESS)
- --leader-election
env:
- name: ADDRESS
value: /socketDir/csi.sock
image: registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-resizer:v1.1.0-7b30758-aliyun
imagePullPolicy: Always
name: external-local-resizer
volumeMounts:
- mountPath: /socketDir/
name: socket-dir
hostNetwork: true
serviceAccount: csi-admin
tolerations:
- effect: NoSchedule
operator: Exists
key: node-role.kubernetes.io/master
- effect: NoSchedule
operator: Exists
key: node.cloudprovider.kubernetes.io/uninitialized
volumes:
- name: socket-dir
emptyDir: {}
- name: tls-token-dir
emptyDir: {}
- hostPath:
path: /dev
type: ""
name: host-dev
- hostPath:
path: /var/log/
type: ""
name: host-log
- hostPath:
path: /mnt
type: Directory
name: quota-path-dir
- hostPath:
path: /var/lib/kubelet
type: Directory
name: pods-mount-dir
Deploy LVM CSI-Plugin in a cluster that runs Kubernetes 1.22 or later
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: localplugin.csi.alibabacloud.com
spec:
attachRequired: false
podInfoOnMount: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
labels:
app: csi-local-plugin
name: csi-local-plugin
namespace: kube-system
spec:
revisionHistoryLimit: 10
selector:
matchLabels:
app: csi-local-plugin
template:
metadata:
labels:
app: csi-local-plugin
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: NotIn
values:
- virtual-kubelet
containers:
- args:
- '--v=5'
- '--csi-address=/csi/csi.sock'
- >-
--kubelet-registration-path=/var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
image: >-
registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-node-driver-registrar:v2.3.1-038aeb6-aliyun
imagePullPolicy: Always
name: driver-registrar
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /csi
name: plugin-dir
- mountPath: /registration
name: registration-dir
- args:
- '--endpoint=$(CSI_ENDPOINT)'
- '--v=5'
- '--nodeid=$(KUBE_NODE_NAME)'
- '--driver=localplugin.csi.alibabacloud.com'
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: SERVICE_PORT
value: '11293'
- name: CSI_ENDPOINT
value: >-
unix://var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com/csi.sock
image: >-
registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-plugin:v1.24.3-55228c1-aliyun
imagePullPolicy: Always
name: csi-localplugin
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- SYS_ADMIN
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/kubelet
mountPropagation: Bidirectional
name: pods-mount-dir
- mountPath: /dev
mountPropagation: HostToContainer
name: host-dev
- mountPath: /var/log/
name: host-log
- mountPath: /mnt
mountPropagation: Bidirectional
name: quota-path-dir
- mountPath: /tls/local/grpc
name: tls-token-dir
readOnly: true
dnsPolicy: ClusterFirst
hostNetwork: true
hostPID: true
priorityClassName: system-node-critical
restartPolicy: Always
securityContext: {}
serviceAccountName: alibaba-cloud-csi-local
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
volumes:
- name: tls-token-dir
secret:
defaultMode: 420
secretName: csi-local-plugin-cert
- hostPath:
path: /var/lib/kubelet/csi-plugins/localplugin.csi.alibabacloud.com
type: DirectoryOrCreate
name: plugin-dir
- hostPath:
path: /var/lib/kubelet/plugins_registry
type: DirectoryOrCreate
name: registration-dir
- hostPath:
path: /var/lib/kubelet
type: Directory
name: pods-mount-dir
- hostPath:
path: /dev
type: ''
name: host-dev
- hostPath:
path: /var/log/
type: ''
name: host-log
- hostPath:
path: /mnt
type: Directory
name: quota-path-dir
updateStrategy:
rollingUpdate:
maxUnavailable: 10%
type: RollingUpdate
Deploy LVM CSI-Provisioner in a cluster that runs Kubernetes 1.22 or later
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: csi-local-provisioner
name: csi-local-provisioner
namespace: kube-system
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: csi-local-provisioner
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: csi-local-provisioner
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- preference:
matchExpressions:
- key: node-role.kubernetes.io/master
operator: Exists
weight: 1
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: NotIn
values:
- virtual-kubelet
containers:
- args:
- --csi-address=$(ADDRESS)
- --feature-gates=Topology=True
- --volume-name-prefix=local
- --strict-topology=true
- --timeout=150s
- --extra-create-metadata=true
- --enable-leader-election=true
- --leader-election-type=leases
- --retry-interval-start=500ms
- --default-fstype=ext4
- --v=5
env:
- name: ADDRESS
value: /socketDir/csi.sock
image: registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-provisioner:v3.0.0-080f01e64-aliyun
imagePullPolicy: Always
name: external-local-provisioner
resources: {}
volumeMounts:
- mountPath: /socketDir
name: socket-dir
- args:
- --endpoint=$(CSI_ENDPOINT)
- --v=2
- --driver=localplugin.csi.alibabacloud.com
env:
- name: CSI_ENDPOINT
value: unix://var/lib/kubelet/csi-provisioner/localplugin.csi.alibabacloud.com/csi.sock
- name: SERVICE_TYPE
value: provisioner
- name: SERVICE_PORT
value: "11290"
image: registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-plugin:v1.24.3-55228c1-aliyun
imagePullPolicy: Always
name: csi-localprovisioner
resources: {}
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/lib/kubelet/csi-provisioner/localplugin.csi.alibabacloud.com
name: socket-dir
- mountPath: /var/log/
name: host-log
- mountPath: /tls/local/grpc/
name: tls-token-dir
- args:
- --v=5
- --csi-address=$(ADDRESS)
- --leader-election
env:
- name: ADDRESS
value: /socketDir/csi.sock
image: registry-vpc.{{ regionId }}.aliyuncs.com/acs/csi-resizer:v1.3-ca84e84-aliyun
imagePullPolicy: Always
name: external-local-resizer
resources: {}
volumeMounts:
- mountPath: /socketDir/
name: socket-dir
dnsPolicy: ClusterFirst
hostNetwork: true
restartPolicy: Always
securityContext: {}
serviceAccountName: alibaba-cloud-csi-local
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
volumes:
- emptyDir: {}
name: socket-dir
- emptyDir: {}
name: tls-token-dir
- hostPath:
path: /dev
type: ""
name: host-dev
- hostPath:
path: /var/log/
type: ""
name: host-log
- hostPath:
path: /mnt
type: Directory
name: quota-path-dir
- hostPath:
path: /var/lib/kubelet
type: Directory
name: pods-mount-dir
Step 3: Use LVs
When you use CSI-Provisioner to create PVs, take note of the following limits:
You must specify the name of the VG in a StorageClass.
If you want to create a PV on a specified node, you must add the volume.kubernetes.io/selected-node: nodeName annotation to the related PVC.
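For example, a PVC that pins the LV to a specific node might look like the following. The node name is a placeholder that you must replace, and csi-local is the StorageClass created in this step:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-pvc-pinned
  annotations:
    # Placeholder: replace with the name of the target node (kubectl get nodes).
    volume.kubernetes.io/selected-node: <your-node-name>
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-local
```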
Use the following template to create a StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: csi-local
provisioner: localplugin.csi.alibabacloud.com
parameters:
volumeType: LVM
vgName: volumegroup1
fsType: ext4
lvmType: "striping"
writeIOPS: "10000"
writeBPS: "1M"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
| Parameter | Description |
| --- | --- |
| volumeType | The type of volume. Set the value to LVM. Other volume types will be supported in later versions. |
| vgName | The name of the VG. This parameter is required. |
| fsType | The type of the file system. |
| lvmType | The type of LV. Valid values: linear and striping. |
| writeIOPS | The maximum write IOPS of an LV that is created by using the StorageClass. |
| writeBPS | The maximum amount of data that can be written per second to an LV that is created by using the StorageClass. Unit: bytes. Example: 1M. |
Use the following template to create a PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: lvm-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
storageClassName: csi-local
Use the following template to create an application:
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-lvm
labels:
app: nginx
spec:
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
volumeMounts:
- name: lvm-pvc
mountPath: "/data"
volumes:
- name: lvm-pvc
persistentVolumeClaim:
claimName: lvm-pvc
Query the status of the application.
Run the following command to query the pods that are created for the application:
kubectl get pod
Expected output:
NAME READY STATUS RESTARTS AGE
deployment-lvm-9f798687c-m**** 1/1 Running 0 9s
Run the following command to query information about the PVC:
kubectl get pvc
Expected output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
lvm-pvc Bound disk-afacf7a9-3d1a-45da-b443-24f8fb35**** 2Gi RWO csi-local 16s
Run the following command to query information about the PV:
kubectl get pv
Expected output:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
disk-afacf7a9-3d1a-45da-b443-24f8fb35**** 2Gi RWO Delete Bound default/lvm-pvc csi-local 12s
Run the following commands to open a shell in the pod and query the volume that is mounted to the pod:
kubectl exec -ti deployment-lvm-9f798687c-m**** -- sh
df /data
Expected output:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/volumegroup1-disk--afacf7a9--3d1a--45da--b443--24f8fb35**** 1998672 6144 1976144 1% /data
Run the following command in the pod to list the contents of the /data directory:
ls /data
Expected output:
lost+found
Run the following commands to create a file named test in the /data directory and list the directory contents again:
touch /data/test
ls /data
Expected output:
lost+found test
Run the following command to exit the shell in the pod:
exit
Run the following command to delete the pod:
kubectl delete pod deployment-lvm-9f798687c-m****
Expected output:
pod "deployment-lvm-9f798687c-m****" deleted
Run the following command to query the pods that are created for the application:
kubectl get pod
Expected output:
NAME READY STATUS RESTARTS AGE
deployment-lvm-9f798687c-j**** 1/1 Running 0 2m19s
Run the following command to check whether the data in the volume persists after the pod is recreated:
kubectl exec deployment-lvm-9f798687c-j**** -- ls /data
Expected output:
lost+found
test
Expand the LV.
Run the following command to query information about the PVC:
kubectl get pvc
Expected output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
lvm-pvc Bound disk-afacf7a9-3d1a-45da-b443-24f8fb35**** 2Gi RWO csi-local 6m50s
Run the following command to expand the PVC to 4 GiB:
kubectl patch pvc lvm-pvc -p '{"spec":{"resources":{"requests":{"storage":"4Gi"}}}}'
Expected output:
persistentvolumeclaim/lvm-pvc patched
Run the following command to query information about the PVC:
kubectl get pvc
Expected output:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
lvm-pvc Bound disk-afacf7a9-3d1a-45da-b443-24f8fb35**** 4Gi RWO csi-local 7m26s
Run the following command to check whether the LV is expanded to 4 GiB:
kubectl exec deployment-lvm-9f798687c-j**** -- df /data
Expected output:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/volumegroup1-disk--afacf7a9--3d1a--45da--b443--24f8fb35**** 4062912 8184 4038344 1% /data
Run the following command on the node to query the monitoring data of the LV from the kubelet:
curl -s localhost:10255/metrics | grep lvm-pvc
Expected output:
kubelet_volume_stats_available_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 1.917165568e+09
kubelet_volume_stats_capacity_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 1.939816448e+09
kubelet_volume_stats_inodes{namespace="default",persistentvolumeclaim="lvm-pvc"} 122400
kubelet_volume_stats_inodes_free{namespace="default",persistentvolumeclaim="lvm-pvc"} 122389
kubelet_volume_stats_inodes_used{namespace="default",persistentvolumeclaim="lvm-pvc"} 11
kubelet_volume_stats_used_bytes{namespace="default",persistentvolumeclaim="lvm-pvc"} 5.873664e+06
The preceding monitoring data can be imported to Prometheus and displayed in the console. For more information, see Use open source Prometheus to monitor an ACK cluster.
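As a sketch, a Prometheus scrape job that collects these kubelet volume metrics might look like the following. The node address 192.168.0.1 is a placeholder, and port 10255 is the kubelet read-only port used in the curl command above:

```yaml
scrape_configs:
  - job_name: kubelet-volume-stats
    metrics_path: /metrics
    static_configs:
      # Placeholder node address; in practice, kubernetes_sd_configs is typically
      # used to discover kubelet endpoints automatically.
      - targets: ['192.168.0.1:10255']
```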