Fluid is an open source, Kubernetes-native distributed dataset orchestrator and accelerator for data-intensive applications in cloud-native scenarios. Fluid enables observability, scalability, and access acceleration for datasets by managing and scheduling runtimes such as EFCRuntime. This topic describes how to use Fluid EFCRuntime to accelerate access to File Storage NAS (NAS) file systems.
Prerequisites
The Elastic Compute Service (ECS) instances in the cluster run Alibaba Cloud Linux 2 with a kernel version of 4.19.91-23 or later.
A Container Service for Kubernetes (ACK) Pro cluster that runs Kubernetes 1.18 or later is created. For more information, see Create an ACK Pro cluster.
NAS is activated and nodes in the created ACK Pro cluster can mount and access a Capacity NAS file system or Performance NAS file system.
Note: In AI training scenarios, we recommend that you select a NAS file system type based on the throughput required by the training jobs. For more information, see Select file systems.
A kubectl client is connected to the ACK Pro cluster. For more information, see Obtain the kubeconfig file of a cluster and use kubectl to connect to the cluster.
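Before you proceed, you can verify the version prerequisites. The following checks are a minimal sketch: run the first command on an ECS node to confirm the kernel version, and run the second command from the connected kubectl client to confirm the cluster version.
uname -r
kubectl version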
Overview of EFC
Elastic File Client (EFC) is a FUSE-based POSIX client developed by the NAS technical team. You can use EFC to replace the kernel-mode NFS client. EFC accesses data over multiple connections and caches metadata and data in a distributed manner to increase read speeds. EFC also supports performance monitoring based on Managed Service for Prometheus. Compared with the kernel-mode NFSv3 and NFSv4.x clients and other FUSE-based clients, EFC has the following advantages:
Strong semantic consistency: EFC uses the distributed locking mechanism to ensure strong consistency for files and directories. After you write data to a file, the data can be immediately read by other clients. After you create a file, the file can be immediately accessed by other clients. This advantage allows you to synchronize data among multiple nodes.
Cache reads and writes on individual servers: EFC optimizes the caching logic of FUSE and occupies a small amount of memory on a node to accelerate reads and writes on small files. Compared with traditional NFS clients, EFC improves the performance of cache reads and writes by more than 50%.
Distributed read-only caches: EFC supports distributed read-only caches and uses the memory of multiple nodes to create a cache pool that can automatically scale out to meet the increasing compute demand.
Small file prefetching: EFC prefetches hot data in frequently accessed directories to reduce the overhead associated with data fetching.
Hot updates and failover capabilities: EFC can perform a failover within seconds and perform hot updates for clients without interrupting your services.
Use Fluid EFCRuntime to accelerate access to NAS file systems
Fluid uses custom Kubernetes resources related to Fluid EFCRuntime to connect to EFC. This helps implement dataset observability and scalability.
Limits
The following limits apply to Fluid EFCRuntime:
Fluid EFCRuntime does not support DataLoad cache prefetching. Fluid EFCRuntime caches data only when the data is accessed for the first time.
Fluid EFCRuntime does not expose the caching status of datasets.
Fluid EFCRuntime is supported only in the following regions: China (Zhangjiakou), China (Beijing), China (Guangzhou), China (Shenzhen), and China (Shanghai).
How Fluid EFCRuntime works
The following figure shows how Fluid EFCRuntime caches data from NAS to the local storage to accelerate data access. The following section describes how Fluid EFCRuntime works:
You can create custom resource definitions (CRDs) of datasets and EFCRuntimes to specify information about the source NAS file systems.
Fluid controllers deploy the EFC Cache Worker and EFC FUSE components based on the information about the source file systems.
When you create a pod, you can use a persistent volume claim (PVC) to mount the mount target of a file system exposed by the EFC FUSE client to the pod.
When you access data in a mounted NAS file system, the EFC FUSE client forwards the request to EFC Cache Worker. EFC Cache Worker checks whether the requested data is cached in the local storage. If it is, the request is served directly from the cache. If it is not, EFC Cache Worker reads the data from the NAS file system, caches it in the local storage, and serves subsequent access from the cache.
The preceding process involves the following concepts:
Dataset: a CRD defined by Fluid. A dataset is a collection of logically related data that is used by upper-layer compute engines.
EFCRuntime: a runtime that accelerates access to datasets. An EFCRuntime uses EFC as its caching engine, which consists of the EFC Cache Worker and EFC FUSE components.
EFC Cache Worker: a server-side component that enables caching based on consistent hashing. You can disable this component based on your requirements. After you disable this component, distributed read-only caches are disabled. Other features are not affected.
EFC FUSE: a client-side component of EFC that exposes data access interfaces over the POSIX protocol.
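After ack-fluid is installed (see Step 1 below), you can confirm that the custom resource definitions used in this topic are registered in the cluster. This is a minimal check; the CRD names follow the data.fluid.io API group that the samples in this topic use.
kubectl get crd datasets.data.fluid.io efcruntimes.data.fluid.io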
Procedure
Step 1: Install ack-fluid
Install the cloud-native AI suite and ack-fluid 0.9.10 or later.
If you have installed open source Fluid, you must uninstall Fluid before you can install the ack-fluid component.
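For example, if open source Fluid was installed with Helm, the uninstallation typically looks like the following sketch. The release name fluid and the fluid-system namespace are assumptions; adjust them to match your installation.
helm uninstall fluid -n fluid-system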
Install ack-fluid when the cloud-native AI suite is not installed
You can enable Fluid data acceleration when you install the cloud-native AI suite. For more information, see Deploy the cloud-native AI suite.
Install ack-fluid when the cloud-native AI suite is installed
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Applications > Cloud-native AI Suite.
On the Cloud-native AI Suite page, find ack-fluid and click Deploy in the Actions column.
In the Install Component message, click Confirm.
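After the deployment is complete, you can verify that the Fluid controllers are running. This check assumes that ack-fluid uses the default fluid-system namespace.
kubectl get pods -n fluid-system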
Update ack-fluid to 0.9.10 or later
Log on to the ACK console. In the left-side navigation pane, click Clusters.
On the Clusters page, find the cluster that you want to manage and click its name. In the left-side navigation pane, choose Applications > Cloud-native AI Suite.
On the Cloud-native AI Suite page, find ack-fluid and click Upgrade in the Actions column.
In the Upgrade Component message, click Confirm.
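After the upgrade is complete, you can confirm the component version. This sketch assumes that ack-fluid is managed as a Helm release in the fluid-system namespace; if it is not listed there, check the version on the Cloud-native AI Suite page instead.
helm list -n fluid-system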
Step 2: Write data to the NAS file system
If data is already stored in the NAS file system, you can skip this step.
Mount the NAS file system to an ECS instance. For more information, see Mount an NFS file system in the NAS console.
Run the following command to query the mount target of the NAS file system:
findmnt /mnt
Expected output:
TARGET    SOURCE                                            FSTYPE   OPTIONS
/mnt/nfs  xxxxxxxxxxx-xxxxx.cn-beijing.nas.aliyuncs.com:/   nfs      rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,nolock,no
Run the following command to create a file of 10 GB in size in the mount directory of the NAS file system:
dd if=/dev/zero of=/mnt/nfs/allzero-demo count=1024 bs=10M
Expected output:
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB) copied, 50.9437 s, 211 MB/s
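You can optionally confirm that the test file was created with the expected size:
ls -lh /mnt/nfs/allzero-demo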
Step 3: Create a dataset and an EFCRuntime
Create a file named dataset.yaml and copy the following content to the file:
Sample template of a NAS file system
apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: efc-demo
spec:
  mounts:
    - mountPoint: "nfs://<nas_url>:<nas_dir>"
      name: efc
      path: "/"
---
apiVersion: data.fluid.io/v1alpha1
kind: EFCRuntime
metadata:
  name: efc-demo
spec:
  replicas: 3
  master:
    networkMode: ContainerNetwork
  worker:
    networkMode: ContainerNetwork
  fuse:
    networkMode: ContainerNetwork
  tieredstore:
    levels:
      - mediumtype: MEM
        path: /dev/shm
        quota: 15Gi
The dataset.yaml file is used to create a dataset and an EFCRuntime.
The dataset specifies information about the NAS file system, such as the URL of the NAS file system and the directory that you want to mount.
The EFCRuntime starts an EFC caching system to provide caching services. You can specify the number of replicated pods for the worker component of the EFC caching system and the cache capacity of each worker component.
The following list describes the parameters.
mountPoint: If you use a NAS file system, set the value in the nfs://<nas_url>:<nas_dir> format.
nas_url: the URL of the NAS file system. To obtain the URL, log on to the NAS console. In the left-side navigation pane, choose File System > File System List. On the File System List page, find the NAS file system that you want to mount and click Manage in the Actions column. On the page that appears, click Mount Targets. For more information, see Manage mount targets.
nas_dir: the subdirectory that you want to mount. In most cases, you can set the value to the root directory. For example, a value of nfs://xxxxxxxxxxx-xxxxx.cn-beijing.nas.aliyuncs.com:/ specifies the root directory of a NAS file system.
replicas: The number of replicated pods that are created for the worker component of the EFC caching system. Set the value based on the memory size of the compute nodes and the size of the dataset. Make sure that the product of quota and replicas is greater than the size of the dataset. In this example, 3 replicas × 15 Gi = 45 Gi of total cache capacity, which exceeds the 10 GB test file created in Step 2.
networkMode: Valid values: ContainerNetwork and HostNetwork. In the ACK environment, we recommend that you set the value to ContainerNetwork. This network mode does not compromise the network performance.
mediumtype: The cache medium. Valid values: HDD, SSD, and MEM. A value of MEM indicates memory. In AI training scenarios, we recommend that you set this parameter to MEM. If you set this parameter to MEM, you must set the path parameter to a memory file system, such as tmpfs.
path: The cache directory in the worker pods of the EFC caching system. We recommend that you set the value to /dev/shm.
quota: The cache capacity of each worker component. Set the value based on the memory size of the compute nodes and the size of the dataset, and make sure that the product of quota and replicas is greater than the size of the dataset.
Run the following command to create an EFCRuntime and a dataset:
kubectl create -f dataset.yaml
Run the following command to check whether the Dataset is deployed:
kubectl get dataset efc-demo
Expected output:
NAME       UFS TOTAL SIZE   CACHED   CACHE CAPACITY   CACHED PERCENTAGE   PHASE   AGE
efc-demo                                                                  Bound   24m
The dataset is in the Bound state. This indicates that the EFC caching system runs as expected in the cluster and application pods can access the data provided by the dataset.
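To view more details, such as the mount information and the runtime that the dataset is bound to, you can run the following command:
kubectl describe dataset efc-demo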
Run the following command to check whether the EFCRuntime is deployed:
kubectl get efcruntime
Expected output:
NAME       MASTER PHASE   WORKER PHASE   FUSE PHASE   AGE
efc-demo   Ready          Ready          Ready        27m
The output indicates that the master, worker, and FUSE components are in the Ready state.
Check whether the persistent volume (PV) and persistent volume claim (PVC) are created. After the dataset and the EFC caching system are ready, Fluid automatically creates a PV and a PVC. Run the following command:
kubectl get pv,pvc
Expected output:
NAME                                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/default-efc-demo   100Gi      ROX            Retain           Bound    default/efc-demo   fluid                   94m

NAME                              STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/efc-demo    Bound    default-efc-demo   100Gi      ROX            fluid          94m
Step 4: Create an application to access data
Create an application to check whether access to data is accelerated. In this example, an application that provisions two pods is created and used to access the NAS file system multiple times from two nodes. You can evaluate the acceleration performance of Fluid EFCRuntime based on the amount of time that is required for accessing data.
Create a file named app.yaml and copy the following content to the file.
The following content defines a StatefulSet named efc-app. The StatefulSet contains two pods. Each pod has the efc-demo PVC mounted to the /data directory.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: efc-app
  labels:
    app: nginx
spec:
  serviceName: nginx
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.openanolis.cn/openanolis/nginx:1.14.1-8.6
          command: ["/bin/bash"]
          args: ["-c", "sleep inf"]
          volumeMounts:
            - mountPath: "/data"
              name: data-vol
      volumes:
        - name: data-vol
          persistentVolumeClaim:
            claimName: efc-demo
Run the following command to create a StatefulSet named efc-app:
kubectl create -f app.yaml
Run the following command to query the size of the specified file:
kubectl exec -it efc-app-0 -- du -h /data/allzero-demo
Expected output:
10G /data/allzero-demo
Query the amount of time required for reading the specified file from the application.
Note: The amount of time and the throughput may vary based on the runtime environment and the measuring method. In this topic, the cluster contains three ECS instances of the ecs.g7ne.8xlarge type. The three worker pods of the efc-demo EFCRuntime run on the same ECS instance, and the two pods of the efc-app StatefulSet run separately on the other two ECS instances. The measured time is not affected by the kernel cache of the node on which the EFC FUSE client runs.
Run the following command to check the amount of time required for reading the specified file from the efc-app-0 pod of the StatefulSet:
Note: If you want to read another file, replace /data/allzero-demo with the path of that file.
kubectl exec -it efc-app-0 -- bash -c "time cat /data/allzero-demo > /dev/null"
Expected output:
real	0m15.792s
user	0m0.023s
sys	0m2.404s
The output indicates that reading the 10 GB file takes 15.792 seconds, which corresponds to a read speed of about 648 MiB/s (10,240 MiB / 15.792 s).
Run the following command to check the amount of time required for reading the same file of 10 GB in size from the other pod of the StatefulSet:
Note: If you want to read another file, replace /data/allzero-demo with the path of that file.
kubectl exec -it efc-app-1 -- bash -c "time cat /data/allzero-demo > /dev/null"
Expected output:
real	0m9.970s
user	0m0.012s
sys	0m2.283s
The output indicates that reading the same 10 GB file takes 9.970 seconds, which corresponds to a read speed of about 1,027 MiB/s (10,240 MiB / 9.970 s).
Because the second read is served from the distributed cache that Fluid EFCRuntime builds, the read speed increases from 648 MiB/s to about 1,027 MiB/s. For the same file, Fluid EFCRuntime increases the read speed by about 60%.
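(Optional) After you complete the test, you can delete the test resources to release the cached memory. The following commands delete the StatefulSet, the dataset, and the EFCRuntime that are created in this topic:
kubectl delete -f app.yaml
kubectl delete -f dataset.yaml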