Fluid allows you to use JindoRuntime to accelerate access to data stored in Object Storage Service (OSS) in serverless cloud computing scenarios. You can accelerate data access in cache mode and no cache mode. This topic describes how to accelerate Jobs in no cache mode.
Prerequisites
A Container Service for Kubernetes (ACK) Pro cluster is created, and the Kubernetes version of the cluster is 1.18 or later. The nodes of the cluster do not run ContainerOS. For more information, see Create an ACK Pro cluster.
Important: The ack-fluid component is not supported on ContainerOS.
The cloud-native AI suite is installed and the ack-fluid component is deployed.
Important: If you have installed open source Fluid, uninstall it before you deploy the ack-fluid component.
If you have not installed the cloud-native AI suite, enable Fluid acceleration when you install the suite. For more information, see Deploy the cloud-native AI suite.
If you have already installed the cloud-native AI suite, go to the Cloud-native AI Suite page of the ACK console and deploy the ack-fluid component.
Virtual nodes are deployed in the ACK Pro cluster. For more information, see Schedule pods to elastic container instances that are deployed as virtual nodes.
A kubectl client is connected to the ACK Pro cluster. For more information, see Connect to a cluster by using kubectl.
OSS is activated and a bucket is created. For more information, see Activate OSS and Create buckets.
Limits
This feature is mutually exclusive with the elastic scheduling feature of ACK. For more information about the elastic scheduling feature of ACK, see Configure priority-based resource scheduling.
Step 1: Upload the test dataset to the OSS bucket
Create a test dataset that is 2 GB in size. This dataset is used in the subsequent steps.
Upload the test dataset to the OSS bucket that you created.
You can use the ossutil tool provided by OSS to upload data. For more information, see Install ossutil.
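If you do not have a dataset at hand, you can generate a placeholder file and upload it with ossutil. The following sketch creates a sparse 2 GB file as a stand-in test dataset; the bucket path in the commented ossutil command is a placeholder that you must replace with your own bucket name.

```shell
# Create a sparse 2 GiB file to serve as a stand-in test dataset.
# A sparse file reports its full size without consuming disk space.
truncate -s 2G testfile

# Verify the apparent size of the file in bytes.
stat -c %s testfile

# Upload the file with ossutil. Replace <oss_bucket> with your bucket name.
# ossutil cp testfile oss://<oss_bucket>/testfile
```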
Step 2: Create a dataset and a JindoRuntime
After you set up the ACK cluster and OSS bucket, you need to deploy the dataset and JindoRuntime. The deployment requires only a few minutes.
Create a file named secret.yaml based on the following content.
The file contains the fs.oss.accessKeyId and fs.oss.accessKeySecret that are used to access the OSS bucket.

apiVersion: v1
kind: Secret
metadata:
  name: access-key
stringData:
  fs.oss.accessKeyId: ****
  fs.oss.accessKeySecret: ****
Run the following command to deploy the Secret:
kubectl create -f secret.yaml
Create a file named resource.yaml based on the following content.
The YAML file stores the following information:
Dataset: specifies the dataset that is stored in a remote datastore and the Unix file system (UFS) information.
JindoRuntime: enables JindoFS for data caching in the cluster.

apiVersion: data.fluid.io/v1alpha1
kind: Dataset
metadata:
  name: serverless-data
spec:
  mounts:
    - mountPoint: oss://large-model-sh/
      name: demo
      path: /
      options:
        fs.oss.endpoint: oss-cn-shanghai.aliyuncs.com
      encryptOptions:
        - name: fs.oss.accessKeyId
          valueFrom:
            secretKeyRef:
              name: access-key
              key: fs.oss.accessKeyId
        - name: fs.oss.accessKeySecret
          valueFrom:
            secretKeyRef:
              name: access-key
              key: fs.oss.accessKeySecret
  accessModes:
    - ReadWriteMany
---
apiVersion: data.fluid.io/v1alpha1
kind: JindoRuntime
metadata:
  name: serverless-data
spec:
  master:
    disabled: true
  worker:
    disabled: true
The following table describes some of the parameters that are specified in the preceding code block.

mountPoint: The path to which the UFS file system is mounted. The format of the path is oss://<oss_bucket>/<bucket_dir>. Do not include endpoint information in the path. <bucket_dir> is optional if you can directly access the bucket.

fs.oss.endpoint: The public or private endpoint of the OSS bucket. You can specify the private endpoint of the bucket to enhance data security. However, if you specify the private endpoint, make sure that your ACK cluster is deployed in the same region as the OSS bucket. For example, if your OSS bucket is created in the China (Hangzhou) region, the public endpoint of the bucket is oss-cn-hangzhou.aliyuncs.com and the private endpoint is oss-cn-hangzhou-internal.aliyuncs.com.

fs.oss.accessKeyId: The AccessKey ID that is used to access the bucket.

fs.oss.accessKeySecret: The AccessKey secret that is used to access the bucket.

accessModes: The access mode. Valid values: ReadWriteOnce, ReadOnlyMany, ReadWriteMany, and ReadWriteOncePod. Default value: ReadOnlyMany.

disabled: If you set this parameter to true for both the master and worker components, the no cache mode is used.

Run the following command to deploy the dataset and JindoRuntime:
kubectl create -f resource.yaml
Run the following command to check whether the dataset is deployed:
kubectl get dataset serverless-data
Expected output:
NAME              UFS TOTAL SIZE   CACHED   CACHE CAPACITY   CACHED PERCENTAGE   PHASE   AGE
serverless-data                                                                  Bound   1d

Bound is displayed in the PHASE column of the output. This indicates that the dataset is deployed.
Run the following command to check whether the JindoRuntime is deployed:
kubectl get jindo serverless-data
Expected output:
NAME              MASTER PHASE   WORKER PHASE   FUSE PHASE   AGE
serverless-data                                 Ready        3m41s

Ready is displayed in the FUSE PHASE column of the output. This indicates that the JindoRuntime is deployed.
Step 3: Use a Job to create containers to access OSS
You can create containers to test data access accelerated by JindoFS, or submit machine learning jobs to use relevant features. This section describes how to use a Job to create containers to access the data stored in OSS.
Create a file named job.yaml based on the following content:
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-app
spec:
  template:
    metadata:
      labels:
        alibabacloud.com/fluid-sidecar-target: eci
        alibabacloud.com/eci: "true"
      annotations:
        k8s.aliyun.com/eci-use-specs: ecs.g7.4xlarge
    spec:
      containers:
        - name: demo
          image: debian:buster
          command:
            - /bin/bash
          args:
            - -c
            - du -sh /data && time cp -r /data/ /tmp
          volumeMounts:
            - mountPath: /data
              name: demo
      restartPolicy: Never
      volumes:
        - name: demo
          persistentVolumeClaim:
            claimName: serverless-data
  backoffLimit: 4
Run the following command to deploy the Job:
kubectl create -f job.yaml
Run the following command to print the container log. Replace demo-app--1-5fr74 with the name of the pod that is created by the Job:
kubectl logs demo-app--1-5fr74 -c demo
Expected output:
real    0m23.644s
user    0m0.004s
sys     0m1.036s
The real field in the output shows that it took 23.644 seconds (0m23.644s) to copy the test dataset. The duration varies based on the network latency and bandwidth. If you want to accelerate data access, refer to Accelerate Jobs in cache mode.
Step 4: Clear data
After you test data access acceleration, clear the relevant data at the earliest opportunity.
Run the following command to delete the containers:
kubectl delete job demo-app
Run the following command to delete the dataset:
kubectl delete dataset serverless-data