You can manually initialize local storage resources in a Kubernetes cluster or use Ansible to initialize the local storage resources in batches. The process of initializing local storage resources is complex, especially in large clusters. The node-resource-manager component can automatically initialize and update local storage resources on a node based on a ConfigMap. This topic describes how to use node-resource-manager to automatically initialize local storage resources on nodes in a Kubernetes cluster.
Background information
- It is complex to initialize local storage resources by using Ansible based on Kubernetes node metadata. You must first install kubelet on the node, run shell commands, and then manually parse the command output.
- In clusters that contain a large number of nodes, it is difficult to log on to each node and initialize local storage resources.
- Initialized storage resources cannot be automatically maintained over time. You must manually update these resources. In addition, the usage information of initialized storage resources is not reported to the Kubernetes control plane. As a result, initialized storage resources cannot be allocated to newly created pods.
Step 1: Create a ConfigMap to specify the nodes on which you want to initialize local storage resources
The ConfigMap uses the following fields to select nodes based on node labels:
key: kubernetes.io/hostname
operator: In
value: xxxxx
Parameter | Description
---|---
key | The key that is used to select nodes based on node labels.
operator | The operator that is used in the label selector. Valid values include In, NotIn, and Exists, which are used in the examples in this topic.
value | The value that is used to select nodes based on node labels.
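To find label keys and values that you can use in the key and value fields, you can list the labels of the nodes in your cluster. The following commands are standard kubectl usage; the node name is only an example taken from the ConfigMaps in this topic:

# List all nodes together with their labels.
kubectl get nodes --show-labels

# Show the labels of a single node, for example, cn-beijing.192.168.XX.XX.
kubectl get node cn-beijing.192.168.XX.XX --show-labels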
Use Logical Volume Manager (LVM) or QuotaPath to define the resource topology.
Use LVM to define the resource topology
node-resource-manager supports the following topology types for LVM:
- type: device: node-resource-manager creates a volume group (VG) based on the block storage devices that are specified in the devices parameter. The VG is named based on the name parameter. When an application that requests a logical volume (LV) is started, the LV can be allocated from this VG.
- type: alibabacloud-local-disk: node-resource-manager creates a VG based on all local disks of the host. The VG is named based on the name parameter. To use this method, you must deploy the host on an Elastic Compute Service (ECS) instance that is equipped with local disks. Important: Block storage devices that are manually attached to ECS instances of the i2 instance family with local SSDs are cloud disks and are not considered local disks.
- type: pmem: node-resource-manager creates a VG based on the persistent memory (PMEM) resources on the host. The VG is named based on the name parameter. You can configure the regions parameter to specify the regions to which the PMEM resources belong.
The following ConfigMap is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  volumegroup: |-
    volumegroup:
    - name: volumegroup1
      key: kubernetes.io/hostname
      operator: In
      value: cn-zhangjiakou.192.168.XX.XX
      topology:
        type: device
        devices:
        - /dev/vdb
        - /dev/vdc
    - name: volumegroup2
      key: kubernetes.io/nodetype
      operator: NotIn
      value: localdisk
      topology:
        type: alibabacloud-local-disk
    - name: volumegroup1
      key: kubernetes.io/hostname
      operator: Exists
      value: cn-beijing.192.168.XX.XX
      topology:
        type: pmem
        regions:
        - region0
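After node-resource-manager is running (see Step 2), you can log on to a matching node and check the result with standard LVM commands. The following sketch only assumes that the LVM tools are installed on the node:

# List the volume groups on the node; volumegroup1 should be backed by /dev/vdb and /dev/vdc.
vgs

# Show which physical volumes (block storage devices) belong to each volume group.
pvs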
Use QuotaPath to define the resource topology
node-resource-manager supports the following topology types for QuotaPath:
- type: device: node-resource-manager initializes QuotaPath volumes based on the block storage devices on the host. The QuotaPath volume is mounted to the path that is specified in the name parameter.
- type: pmem: node-resource-manager initializes QuotaPath volumes based on the PMEM resources on the host. The QuotaPath volume is mounted to the path that is specified in the name parameter.
The following ConfigMap is an example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-resource-topo
  namespace: kube-system
data:
  quotapath: |-
    quotapath:
    - name: /mnt/path1
      key: kubernetes.io/hostname
      operator: In
      value: cn-beijing.192.168.XX.XX
      topology:
        type: device
        options: prjquota
        fstype: ext4
        devices:
        - /dev/vdb
    - name: /mnt/path2
      key: kubernetes.io/hostname
      operator: In
      value: cn-beijing.192.168.XX.XX
      topology:
        type: pmem
        options: prjquota,shared
        fstype: ext4
        regions:
        - region0
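Similarly, after node-resource-manager is running, you can verify on the node that the QuotaPath volumes are mounted with the expected options. findmnt is a standard util-linux command; /mnt/path1 is the example path from the preceding ConfigMap:

# Show the source device, file system type, and mount options of the example mount point.
findmnt /mnt/path1

# Alternatively, list all mounts that use the prjquota option.
mount | grep prjquota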
The following table describes the parameters that are used in the quotapath configuration.
Parameter | Description
---|---
options | The options for mounting the block storage devices.
fstype | The file system type that is used to format the block storage devices. Default value: ext4.
devices | The block storage devices to be mounted. If you specify multiple block storage devices, the devices are mounted in sequence.
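Create the ConfigMap in the cluster before you proceed to Step 2. The file name node-resource-topo.yaml in the following sketch is only an example; use the file in which you saved your ConfigMap:

# Create or update the ConfigMap that node-resource-manager reads.
kubectl apply -f node-resource-topo.yaml

# Confirm that the ConfigMap exists in the kube-system namespace.
kubectl get configmap node-resource-topo -n kube-system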
Step 2: Deploy node-resource-manager
cat <<'EOF' | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-resource-manager
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-resource-manager-binding
subjects:
- kind: ServiceAccount
  name: node-resource-manager
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: node-resource-manager
  apiGroup: rbac.authorization.k8s.io
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: node-resource-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-resource-manager
  template:
    metadata:
      labels:
        app: node-resource-manager
    spec:
      tolerations:
      - operator: "Exists"
      priorityClassName: system-node-critical
      serviceAccountName: node-resource-manager
      hostNetwork: true
      hostPID: true
      containers:
      - name: node-resource-manager
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN"]
          allowPrivilegeEscalation: true
        image: registry.cn-hangzhou.aliyuncs.com/acs/node-resource-manager:v1.18.8.0-983ce56-aliyun
        imagePullPolicy: "Always"
        args:
        - "--nodeid=$(KUBE_NODE_NAME)"
        env:
        - name: KUBE_NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /dev
          mountPropagation: "HostToContainer"
          name: host-dev
        - mountPath: /var/log/
          name: host-log
        - name: etc
          mountPath: /host/etc
        - name: config
          mountPath: /etc/unified-config
      volumes:
      - name: host-dev
        hostPath:
          path: /dev
      - name: host-log
        hostPath:
          path: /var/log/
      - name: etc
        hostPath:
          path: /etc
      - name: config
        configMap:
          name: node-resource-topo
EOF
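You can then confirm that the DaemonSet pods are running on your nodes. These are standard kubectl commands; the label app=node-resource-manager comes from the manifest above:

# Check that the desired number of node-resource-manager pods is ready.
kubectl get daemonset node-resource-manager -n kube-system

# List the pods and the nodes on which they run.
kubectl get pods -n kube-system -l app=node-resource-manager -o wide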
After node-resource-manager is deployed, it automatically initializes local storage resources on nodes based on the configurations in the ConfigMap that you created. If you update the ConfigMap, node-resource-manager updates the initialized local storage resources within 1 minute after the update is completed.
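For example, to add a block storage device to an existing volume group or to change a mount path, you can edit the ConfigMap and then re-check the resources on the affected node. kubectl edit is standard kubectl usage; vgs is an LVM command that is assumed to be installed on the node:

# Edit the ConfigMap; node-resource-manager applies the change within about 1 minute.
kubectl edit configmap node-resource-topo -n kube-system

# On the node, confirm that the volume groups reflect the updated configuration.
vgs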