The Container Storage Interface (CSI) plug-in of Container Service for Kubernetes (ACK) allows you to mount a dynamically provisioned File Storage NAS (NAS) volume to an ACK cluster in subpath or filesystem mode in the ACK console or by using kubectl. This topic describes how to mount a dynamically provisioned NAS volume to an ACK cluster. This topic also describes how to test whether the NAS volume can persist and share data.
Prerequisites
An ACK cluster is created. For more information, see Create an ACK managed cluster.
The CSI plug-in is updated to the latest version. For more information, see Manage the CSI plug-in.
Scenarios
Your application requires high disk I/O.
You need a storage service that offers higher read and write throughput than Object Storage Service (OSS).
You want to share files across hosts. For example, you want to use a NAS file system as a file server.
Limits
NAS is a shared storage service. A persistent volume claim (PVC) that is used to mount a NAS file system can be shared among pods.
You cannot use the CSI plug-in to mount Server Message Block (SMB) file systems.
We recommend that you use the NFSv3 file sharing protocol.
You can mount a NAS volume only to ECS instances in the same virtual private cloud (VPC) as the NAS file system.
General-purpose and Extreme NAS file systems have different limits such as the limits on mounting connectivity, the number of file systems, and file sharing protocols. For more information, see Limits.
Before you use NAS volumes, we recommend that you update the CSI plug-in to the latest version.
After a mount target is created, wait until the mount target changes to the Available state.
Do not delete the mount target of a NAS file system before you unmount the NAS file system. Otherwise, an operating system hang issue may occur.
Usage notes
NAS is a shared storage service. A persistent volume claim (PVC) that is used to mount a NAS file system can be shared among pods. For more information about the limits on concurrent writes to NAS, see the How do I prevent exceptions that may occur when multiple processes or clients concurrently write data to a log file? and How do I resolve the latency in writing data to an NFS file system? sections of the "FAQ about read and write access to files" topic.
To mount an Extreme NAS file system, set the path parameter in the StorageClass of the NAS volume to a subdirectory of /share. For example, a value of 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/share/subpath indicates that the mounted subdirectory of the NAS file system is /share/subpath.
The capacity of the PVC that is used to mount a NAS file system takes effect only if the file system is a General-purpose file system and the allowVolumeExpansion parameter of the StorageClass is set to true. In this case, CSI sets the quota of the NAS directory based on the PVC capacity. The actual quota is the PVC capacity rounded up to the next integer, measured in GiB.
The NAS directory quota takes effect in an asynchronous manner. After a persistent volume (PV) is dynamically provisioned, the directory quota does not immediately take effect, and the quota may be exceeded if a large amount of data is written within a short period of time. For more information about NAS directory quotas, see Manage directory quotas.
If the securityContext.fsGroup parameter is specified in the application template, the kubelet performs the chmod or chown operation after the volume is mounted, which increases the mount time. For more information about how to accelerate the mounting, see Why does it require a long time to mount a NAS volume?
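As a minimal illustration, the following pod-spec fragment shows the kind of securityContext setting that triggers this chown/chmod behavior. The values are examples only; a fuller StatefulSet example appears at the end of this topic:

```yaml
# Example only: any fsGroup value in a pod spec causes the kubelet to run
# chown/chmod on the mounted NAS volume after it is attached.
securityContext:
  fsGroup: 65534
  # Optional: limit the chown/chmod to cases where the root directory's
  # ownership does not already match, which can shorten the mount time.
  fsGroupChangePolicy: "OnRootMismatch"
```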
Mount a dynamically provisioned NAS volume
The CSI plug-in allows you to mount a dynamically provisioned NAS volume in subpath or filesystem mode in the ACK console or by using kubectl.
subpath mode: ACK console or kubectl can be used to mount NAS volumes in subpath mode.
If multiple applications or pods need to use the same NAS volume to share data, or you want to mount different subdirectories of a NAS file system to different pods, you can use the subpath mode.
filesystem mode: Only kubectl can be used to mount NAS volumes in filesystem mode.
If your application needs to dynamically create and delete NAS file systems and mount targets, you can use the filesystem mode.
sharepath mode: Deprecated. If you want to mount a directory in a NAS file system to multiple pods for data sharing, see Mount a statically provisioned NAS volume.
Mount a dynamically provisioned NAS volume in subpath mode in the ACK console
Step 1: Create a NAS file system and a mount target
To mount a dynamically provisioned NAS volume in subpath mode, you must create a NAS file system and a mount target.
Log on to the NAS console
Create a NAS file system. For more information, see Create a file system.
Note: If you want to encrypt data in a NAS volume, configure the encryption settings when you create the NAS file system.
Create a mount target in the virtual private cloud (VPC) in which the cluster nodes are deployed. For more information, see Manage mount targets.
Step 2: Create a StorageClass
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Volumes > StorageClasses.
In the upper-right corner of the StorageClasses page, click Create.
In the Create dialog box, configure the StorageClass.
The following table describes the key parameters.
Parameter
Description
Name
The name of the StorageClass.
The name must start with a lowercase letter, and can contain only lowercase letters, digits, periods (.), and hyphens (-).
PV Type
Valid values: Cloud Disk and NAS. In this example, NAS is selected.
Select Mount Target
The mount target of the NAS file system. For more information about how to query the domain name of a mount target, see the View the domain name of a mount target section of the "Manage mount targets" topic.
If no mount target is available, create a NAS file system first. For more information, see Use CNFS to manage NAS file systems (recommended).
Reclaim Policy
Valid values: Delete and Retain. Default value: Delete.
Delete: If you use this policy, you must also specify the archiveOnDelete parameter.
If you set the archiveOnDelete parameter to true, the PV and NAS file system associated with a PVC are renamed and retained after you delete the PVC.
If you set the archiveOnDelete parameter to false, the PV and NAS file system associated with a PVC are deleted after you delete the PVC.
Retain: If a PVC is deleted, the associated PV and NAS file system are retained and can only be manually deleted.
If you require higher data security, we recommend that you use the Retain policy to prevent data loss caused by user errors.
Mount Options
The mount options, such as the Network File System (NFS) version.
We recommend that you use NFS v3. Extreme NAS file systems support only NFS v3. For more information about the NFS protocol, see NFS.
Mount Path
The mount path of the NAS file system.
After you configure the parameters, click Create.
After the StorageClass is created, you can view the StorageClass on the StorageClasses page.
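For reference, the console settings above map to a StorageClass manifest similar to the following sketch. The name and mount target address are placeholders; substitute your own values:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-subpath        # placeholder name
mountOptions:
- nolock,tcp,noresvport
- vers=3                            # Mount Options: NFS v3 is recommended
parameters:
  volumeAs: subpath
  # Select Mount Target + Mount Path (placeholder address)
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete               # Reclaim Policy: Delete or Retain
```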
Step 3: Create a PVC
In the left-side navigation pane of the details page, choose Volumes > Persistent Volume Claims.
In the upper-right corner of the Persistent Volume Claims page, click Create.
In the Create PVC dialog box, configure the parameters.
Parameter
Description
PVC Type
Valid values: Cloud Disk, NAS, and OSS. In this example, NAS is selected.
Name
The name of the PVC. The name must be unique within the cluster.
Allocation Mode
The allocation mode of the PVC. In this example, Use StorageClass is selected.
Existing Storage Class
The StorageClass that is used to enable dynamic provisioning. Click Select. In the Select Storage Class dialog box, find the StorageClass that you want to use and click Select in the Actions column.
Capacity
The capacity claimed by the PVC.
Access mode
The access mode of the PVC. Default value: ReadWriteMany. You can also select ReadWriteOnce or ReadOnlyMany.
Click Create.
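The PVC created in the console corresponds to a manifest like the following sketch. The name and capacity are example values:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc                       # Name: example value
spec:
  accessModes:
  - ReadWriteMany                         # Access mode: default value
  storageClassName: alicloud-nas-subpath  # Existing Storage Class
  resources:
    requests:
      storage: 20Gi                       # Capacity
```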
Step 4: Create an application
In the left-side navigation pane of the details page, choose Workloads > Deployments.
On the Deployments page, click Create from Image.
Configure the application parameters.
Add Local Storage: You can select HostPath, ConfigMap, Secret, or EmptyDir from the PV Type drop-down list. Then, set the Mount Source and Container Path parameters to mount the volume to a container path. For more information, see Volumes.
Add PVC: You can add cloud volumes.
After the application configuration is completed, click Create.
This example shows how to configure the volume parameters. For more information about other parameters, see Create a stateless application by using a Deployment.
You can add local volumes and cloud volumes.
In this example, a NAS volume is mounted to the /tmp path in the container.
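For reference, mounting the NAS volume to the /tmp path as described above corresponds to a Deployment volume configuration similar to the following sketch. The Deployment and PVC names are example values:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-nas-example           # example name
spec:
  selector:
    matchLabels:
      app: nginx-nas-example
  template:
    metadata:
      labels:
        app: nginx-nas-example
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: nas-pvc
          mountPath: "/tmp"         # Container Path
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc    # example PVC name
```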
Mount a dynamically provisioned NAS volume in subpath mode by using kubectl
To mount a dynamically provisioned NAS volume in subpath mode, you must create a NAS file system and a mount target.
Create a NAS file system and a mount target.
Log on to the NAS console
Create a NAS file system. For more information, see Create a file system.
Note: If you want to encrypt data in a NAS volume, configure the encryption settings when you create the NAS file system.
Create a mount target in the virtual private cloud (VPC) in which the cluster nodes are deployed. For more information, see Manage mount targets.
Create a StorageClass.
Create a file named alicloud-nas-subpath.yaml and add the following content to the file:
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-subpath
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: subpath
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
Parameter
Description
allowVolumeExpansion
This parameter is available only for General-purpose NAS file systems. If you set this parameter to true, a NAS directory quota is configured for the dynamically provisioned PV based on the StorageClass. You can expand the volume by modifying the PVC.
mountOptions
The mount options of the NAS file system, such as the NFS version that you want to use.
volumeAs
Valid values: subpath and filesystem. A value of subpath indicates that a subdirectory of the NAS file system is mounted to the cluster. A value of filesystem indicates that an entire NAS file system is mounted to the cluster.
server
The mount target of the NAS file system if you mount a subdirectory of the NAS file system as a PV.
Important: You must specify the actual mount target. For more information about how to view the domain name of a mount target, see the View the domain name of a mount target section of the "Manage mount targets" topic.
provisioner
The type of the driver. In this example, the parameter is set to nasplugin.csi.alibabacloud.com, which indicates that the CSI plug-in provided by Alibaba Cloud is used.
reclaimPolicy
The reclaim policy of the PV. Default value: Delete. You can also set the value to Retain.
Delete: If you use this policy, you must also specify the archiveOnDelete parameter.
If you set the archiveOnDelete parameter to true, the PV and NAS file system associated with a PVC are renamed and retained after you delete the PVC.
If you set the archiveOnDelete parameter to false, the PV and NAS file system associated with a PVC are deleted after you delete the PVC.
Retain: When a PVC is deleted, the associated PV and NAS file system are retained and can only be manually deleted.
If you have high requirements on data security, we recommend that you use the Retain policy to prevent data loss caused by user errors.
archiveOnDelete
Specifies whether to delete the backend storage when the reclaimPolicy parameter is set to Delete. NAS is a shared storage service. You must specify both the reclaimPolicy and archiveOnDelete parameters to ensure data security. This parameter is specified in parameters.
Default value: true, which indicates that the subdirectory and files are not deleted when the PVC is deleted. Instead, they are renamed in the format of archived-{pvName}.{timestamp}.
If the value is set to false, the backend storage is deleted when the PVC is deleted.
Note: We recommend that you do not set the value to false when the service receives a large amount of network traffic. For more information, see the What do I do if the task queue of alicloud-nas-controller is full and PVs cannot be created when I use a dynamically provisioned NAS volume? section of the "FAQ about NAS volumes" topic.
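To see where archiveOnDelete fits, the following sketch shows the same StorageClass with the Delete policy and archiving enabled. The StorageClass name is hypothetical and the mount target is a placeholder:

```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-subpath-archive   # hypothetical name
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: subpath
  server: "0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/k8s/"
  # Subdirectory is renamed archived-{pvName}.{timestamp} when the PVC is deleted.
  archiveOnDelete: "true"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete
```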
Run the following command to create a StorageClass:
kubectl create -f alicloud-nas-subpath.yaml
Run the following command to create a PVC:
Create a file named pvc.yaml and add the following content to the file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      storage: 20Gi
Parameter
Description
name
The name of the PVC.
accessModes
The access mode of the PV. Default value: ReadWriteMany. You can also set the value to ReadWriteOnce or ReadOnlyMany.
storageClassName
The name of the StorageClass that you want to associate with the PVC.
storage
The storage that is claimed by the PVC.
Important: This parameter does not limit the storage that the application can use. In addition, the storage claimed by the PVC does not automatically increase. A quota is set on the subdirectory of the mounted NAS file system only if the file system is a General-purpose NAS file system and the allowVolumeExpansion parameter of the StorageClass is set to true.
Run the following command to create the PVC:
kubectl create -f pvc.yaml
Create applications.
Create two applications named nginx-1 and nginx-2 to share the same subdirectory of the NAS file system.
Create a file named nginx-1.yaml and add the following content to the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas-1
  labels:
    app: nginx-1
spec:
  selector:
    matchLabels:
      app: nginx-1
  template:
    metadata:
      labels:
        app: nginx-1
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nas-pvc
          mountPath: "/data"
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc
mountPath: the path to which the NAS file system is mounted in the container.
claimName: the name of the PVC that the application uses to mount the NAS file system. In this example, nas-csi-pvc is used.
Create a file named nginx-2.yaml and add the following content to the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas-2
  labels:
    app: nginx-2
spec:
  selector:
    matchLabels:
      app: nginx-2
  template:
    metadata:
      labels:
        app: nginx-2
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nas-pvc
          mountPath: "/data"
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc
mountPath: the path to which the NAS file system is mounted in the container. In this example, /data is used.
claimName: the name of the PVC that is used by nginx-1. In this example, nas-csi-pvc is used.
Run the following command to deploy applications nginx-1 and nginx-2:
kubectl create -f nginx-1.yaml -f nginx-2.yaml
Run the following command to query the pods that are created for the applications:
kubectl get pod
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
Note: The subdirectory 0cd8b4a576-g****.cn-hangzhou.nas.aliyuncs.com:/share/nas-79438493-f3e0-11e9-bbe5-00163e09**** of the NAS volume is mounted to the /data directory of pods deployment-nas-1-5b5cdb85f6-n**** and deployment-nas-2-c5bb4746c-4****. In this path:
/share: the subdirectory that is specified in the StorageClass configurations.
nas-79438493-f3e0-11e9-bbe5-00163e09****: the name of the PV.
To mount different subdirectories of a NAS file system to different pods, you must create a PVC for each pod. You can create pvc-1 for nginx-1 and create pvc-2 for nginx-2.
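For example, the two PVCs might be defined as follows, both referencing the same StorageClass so that each pod receives its own subdirectory. The names pvc-1 and pvc-2 are illustrative:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-1      # mounted by nginx-1
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      storage: 20Gi
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-2      # mounted by nginx-2
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-subpath
  resources:
    requests:
      storage: 20Gi
```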
Mount a dynamically provisioned NAS volume in filesystem mode by using kubectl
By default, if you delete a PV that is mounted in filesystem mode, the system retains the related NAS file system and mount target. To delete the NAS file system and mount target together with the PV, set the reclaimPolicy parameter to Delete and the deleteVolume parameter to true in the StorageClass configurations.
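As a sketch, a StorageClass that deletes the NAS file system together with the PV sets both fields as follows. The name is hypothetical, and the omitted parameters are the same as in the filesystem-mode example in this section:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-fs-delete   # hypothetical name
parameters:
  volumeAs: filesystem
  # Delete the NAS file system and mount target together with the PV.
  deleteVolume: "true"
  # ... other parameters (fileSystemType, regionId, zoneId, vpcId, vSwitchId, and so on)
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Delete
```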
If you mount a NAS volume to a pod in filesystem mode, you can create only one NAS file system and one mount target.
You must perform all the following steps for ACK dedicated clusters. For other types of clusters, start from Step 2.
Optional: Configure a Resource Access Management (RAM) policy and attach the policy to the RAM role assigned to your cluster.
If you use an ACK dedicated cluster, you must perform this step.
The filesystem mode allows you to dynamically create and delete NAS file systems and mount targets. To perform these operations in an ACK dedicated cluster, you must grant the required permissions to CSI-Provisioner. The following sample code shows a RAM policy that contains the required permissions:
{
  "Action": [
    "nas:DescribeMountTargets",
    "nas:CreateMountTarget",
    "nas:DeleteFileSystem",
    "nas:DeleteMountTarget",
    "nas:CreateFileSystem"
  ],
  "Resource": [
    "*"
  ],
  "Effect": "Allow"
}
You can grant the permissions by using one of the following methods:
Attach the preceding RAM policy to the master RAM role of your ACK dedicated cluster. For more information, see Modify the document and description of a custom policy.
Create a RAM user and attach the preceding RAM policy to the RAM user. Generate an AccessKey pair and then specify the AccessKey pair in the env variables in the CSI-Provisioner configurations:
env:
- name: CSI_ENDPOINT
  value: unix://socketDir/csi.sock
- name: ACCESS_KEY_ID
  value: ""
- name: ACCESS_KEY_SECRET
  value: ""
Create a StorageClass.
Create a file named alicloud-nas-fs.yaml and add the following content to the file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: alicloud-nas-fs
mountOptions:
- nolock,tcp,noresvport
- vers=3
parameters:
  volumeAs: filesystem
  fileSystemType: standard
  storageType: Performance
  regionId: cn-beijing
  zoneId: cn-beijing-e
  vpcId: "vpc-2ze2fxn6popm8c2mzm****"
  vSwitchId: "vsw-2zwdg25a2b4y5juy****"
  accessGroupName: DEFAULT_VPC_GROUP_NAME
  deleteVolume: "false"
provisioner: nasplugin.csi.alibabacloud.com
reclaimPolicy: Retain
Parameter
Description
volumeAs
The mount mode of the NAS file system. Valid values:
filesystem: The provisioner automatically creates a NAS file system. Each PV corresponds to a NAS file system.
subpath: The provisioner automatically creates a subdirectory in a NAS file system. Each PV corresponds to a subdirectory of the NAS file system.
fileSystemType
The type of the NAS file system. Valid values:
standard: General-purpose NAS file system
extreme: Extreme NAS file system
Default value: standard.
storageType
The storage type of the NAS file system.
If the fileSystemType parameter is set to standard, the valid values are Performance and Capacity. Default value: Performance.
If the fileSystemType parameter is set to extreme, the valid values are standard and advance. Default value: standard.
regionId
The ID of the region to which the NAS file system belongs.
zoneId
The ID of the zone to which the NAS file system belongs.
vpcId
The ID of the VPC to which the mount target of the NAS file system belongs.
vSwitchId
The ID of the vSwitch to which the mount target of the NAS file system belongs.
accessGroupName
The permission group to which the mount target of the NAS file system belongs. Default value: DEFAULT_VPC_GROUP_NAME.
deleteVolume
The reclaim policy of the NAS file system when the related PV is deleted. NAS is a shared storage service. Therefore, you must specify both deleteVolume and reclaimPolicy parameters to ensure data security.
provisioner
The type of the driver. In this example, the parameter is set to nasplugin.csi.alibabacloud.com, which indicates that the CSI plug-in provided by Alibaba Cloud is used.
reclaimPolicy
The reclaim policy of the PV. When you delete a PVC, the related NAS file system is automatically deleted only if you set the deleteVolume parameter to true and the reclaimPolicy parameter to Delete.
Run the following command to create a StorageClass:
kubectl create -f alicloud-nas-fs.yaml
Create a PVC and pods to mount a NAS volume.
Create a file named pvc.yaml and add the following content to the file:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nas-csi-pvc-fs
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: alicloud-nas-fs
  resources:
    requests:
      storage: 20Gi
Create a file named nginx.yaml and add the following content to the file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-nas-fs
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nas-pvc
          mountPath: "/data"
      volumes:
      - name: nas-pvc
        persistentVolumeClaim:
          claimName: nas-csi-pvc-fs
Run the following command to create the PVC and pods:
kubectl create -f pvc.yaml -f nginx.yaml
In filesystem mode, the CSI driver automatically creates a NAS file system and a mount target when you create the PVC. When the PVC is deleted, the file system and the mount target are retained or deleted based on the settings of the deleteVolume and reclaimPolicy parameters.
Verify data persistence and sharing of NAS
You can verify whether the NAS file system can persist and share data through the following examples.
Verify that the NAS file system can be used to persist data
NAS provides persistent storage. When a pod is deleted, the recreated pod can still access the data of the deleted pod. Perform the following steps to verify that the NAS file system can be used to persist data:
Query the pods that are created for the application and the files in the mounted NAS file system.
Run the following command to query the pods that are created for the application:
kubectl get pod
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
Run the following command to query the files in the /data path of a pod. In this example, the pod deployment-nas-1-5b5cdb85f6-n**** is used.
kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
No output is returned. This indicates that no file exists in the /data path.
Run the following command to create a file named nas in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:
kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- touch /data/nas
Run the following command to query the files in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:
kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
Expected output:
nas
Run the following command to delete the pod:
kubectl delete pod deployment-nas-1-5b5cdb85f6-n****
Open another CLI and run the following command to view how the pod is deleted and recreated:
kubectl get pod -w -l app=nginx
Verify that the file still exists after the pod is deleted.
Run the following command to query the name of the recreated pod:
kubectl get pod
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdm2g5-m****    1/1     Running   0          32s
deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
Run the following command to query the files in the /data path of the pod deployment-nas-1-5b5cdm2g5-m****:
kubectl exec deployment-nas-1-5b5cdm2g5-m**** -- ls /data
Expected output:
nas
The nas file still exists in the /data path. This indicates that data is persisted in the NAS file system.
Verify that data in the NAS file system can be shared across pods
You can mount a NAS volume to multiple pods. If the data is modified in one pod, the modifications are automatically synchronized to other pods. Perform the following steps to verify that data in the NAS file system can be shared across pods:
Query the pods that are created for the application and the files in the mounted NAS file system.
Run the following command to query the pods that are created for the application:
kubectl get pod
Expected output:
NAME                                READY   STATUS    RESTARTS   AGE
deployment-nas-1-5b5cdb85f6-n****   1/1     Running   0          32s
deployment-nas-2-c5bb4746c-4****    1/1     Running   0          32s
Run the following command to query files in the /data path of each pod:
kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
kubectl exec deployment-nas-2-c5bb4746c-4**** -- ls /data
Run the following command to create a file named nas in the /data path of a pod:
kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- touch /data/nas
Query the files in the /data path of each pod.
Run the following command to query the files in the /data path of the pod deployment-nas-1-5b5cdb85f6-n****:
kubectl exec deployment-nas-1-5b5cdb85f6-n**** -- ls /data
Expected output:
nas
Run the following command to query the files in the /data path of the pod deployment-nas-2-c5bb4746c-4****:
kubectl exec deployment-nas-2-c5bb4746c-4**** -- ls /data
Expected output:
nas
After you create a file in the /data path of one pod, you can find the file in the /data path of the other pod. This indicates that data in the NAS file system is shared by the two pods.
Enable user isolation or user group isolation in the NAS file system
To ensure the security of data between different users and user groups, you can perform the following steps to isolate users or user groups in the NAS file system.
Use the following YAML template to create an application. The containers of the application start processes and create directories as the nobody user. The user identifier (UID) and group identifier (GID) of the nobody user are 65534.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nas-sts
spec:
  selector:
    matchLabels:
      app: busybox
  serviceName: "busybox"
  replicas: 1
  template:
    metadata:
      labels:
        app: busybox
    spec:
      securityContext:
        fsGroup: 65534                          # The containers create directories as the nobody user. The UID and GID of the nobody user are 65534.
        fsGroupChangePolicy: "OnRootMismatch"   # Permissions and ownership are changed only if the permissions and the ownership of the root directory do not meet the requirements of the volume.
      containers:
      - name: busybox
        image: busybox
        command:
        - sleep
        - "3600"
        securityContext:
          runAsUser: 65534                      # All processes in the containers run as the nobody user (UID 65534).
          runAsGroup: 65534                     # All processes in the containers run as the nobody user (GID 65534).
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: nas-pvc
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: nas-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "alicloud-nas-subpath"
      resources:
        requests:
          storage: 100Gi
Run the following top command in a container to check whether the command is run as the nobody user:
kubectl exec nas-sts-0 -- "top"
Expected output:
Mem: 11538180K used, 52037796K free, 5052K shrd, 253696K buff, 8865272K cached
CPU:  0.1% usr  0.1% sys  0.0% nic 99.7% idle  0.0% io  0.0% irq  0.0% sirq
Load average: 0.76 0.60 0.58 1/1458 54
  PID  PPID USER     STAT   VSZ %VSZ CPU %CPU COMMAND
   49     0 nobody   R     1328  0.0   9  0.0 top
    1     0 nobody   S     1316  0.0  10  0.0 sleep 3600
The output shows that the top command is run as the nobody user.
Run the following command to check whether the nobody user is used to create the directories and files in the mount directory of the NAS file system:
kubectl exec nas-sts-0 -- sh -c "touch /data/test; mkdir /data/test-dir; ls -arlth /data/"
Expected output:
total 5K
drwxr-xr-x    1 root     root        4.0K Aug 30 10:14 ..
drwxr-sr-x    2 nobody   nobody      4.0K Aug 30 10:14 test-dir
-rw-r--r--    1 nobody   nobody         0 Aug 30 10:14 test
drwxrwsrwx    3 root     nobody      4.0K Aug 30 10:14 .
The output shows that the nobody user is used to create the test file and the test-dir directory in the /data directory.
References
For more information about how to use CNFS to manage NAS file systems, see Use CNFS to manage NAS file systems (recommended).
For more information about how to dynamically expand a NAS volume, see Use CNFS to automatically expand NAS volumes.
For more information about how to use the directory quota feature of NAS to manage the storage space of volumes, see Expand a NAS volume.