You can implement the best practices for GPU-accelerated instances by using the Function Compute console, SDKs, or Serverless Devs. This topic describes how to use Serverless Devs and the Function Compute console to process raw images with function code for neural style transfer and object detection. Python is used in the examples.
Scenarios and benefits
Traditional GPU infrastructure for AI applications often suffers from issues such as long build cycles, complex O&M, low cluster utilization, and high costs. GPU-accelerated instances of Function Compute shift these concerns from you to the cloud vendor. This way, you can focus on your business logic and goals without worrying about the underlying GPU infrastructure.
This section describes the benefits of GPU-accelerated instances over instances without GPU acceleration in Function Compute:
Cost-prioritized AI application scenarios
You can reserve GPU-accelerated instances based on your business requirements. This makes GPU-accelerated instances of Function Compute far more cost-efficient than self-managed GPU clusters.
You can use GPU resources in 1/2-card or exclusive-card mode through GPU virtualization. This allows GPU-accelerated instances to be configured at a fine granularity.
Efficiency-prioritized AI application scenarios
You can focus on code development and business objectives without performing O&M on GPU clusters, such as driver and CUDA version management, machine operation management, and faulty GPU management.
For more information about GPU-accelerated instances, see Instance types and instance modes.
Tutorial for neural style transfer
Neural style transfer (NST) is a technique that blends two images: the content is extracted from one image and the style from the other to create a new image. In the following example, a preset TensorFlow Hub model is used to synthesize the style of an image.
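The core of the technique is compact. The following minimal sketch, which assumes that TensorFlow and tensorflow_hub are installed and that local content.jpg and style.jpg files exist (both names are placeholders), shows how the preset model blends the two inputs. The complete function code used in this tutorial appears later in this topic.

import tensorflow as tf
import tensorflow_hub as hub

def load_img(path, max_dim=512):
    # Decode the image to float32 values in [0, 1], scale the long edge to max_dim, and add a batch axis.
    img = tf.image.convert_image_dtype(
        tf.image.decode_image(tf.io.read_file(path), channels=3), tf.float32)
    shape = tf.cast(tf.shape(img)[:-1], tf.float32)
    new_shape = tf.cast(shape * (max_dim / max(shape)), tf.int32)
    return tf.image.resize(img, new_shape)[tf.newaxis, :]

# The model takes a content image and a style image and returns the blended result.
hub_model = hub.load('https://hub.tensorflow.google.cn/google/magenta/arbitrary-image-stylization-v1-256/2')
stylized = hub_model(tf.constant(load_img('content.jpg')), tf.constant(load_img('style.jpg')))[0]
tf.keras.preprocessing.image.save_img('stylized.jpg', stylized[0])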
Synthesis effect
[Figure: content image, style image, and synthesized image shown side by side]
Before you start
General preparations
For optimal user experience, join the DingTalk group (ID 11721331) and provide the following information:
The organization name, such as the name of your company.
The ID of your Alibaba Cloud account.
The region where you want to use GPU-accelerated instances, such as China (Shenzhen).
The contact information, such as your mobile number, email address, or DingTalk account.
Upload the images that you want to process to an Object Storage Service (OSS) bucket in the region where the GPU-accelerated instances are located. Make sure that you have the read and write permissions on the objects in the bucket. For more information about how to upload images, see Upload objects. For more information about permissions, see Modify the ACL of a bucket.
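The sample code in this topic reads source images from and writes results to placeholder URLs of the form https://your_public_oss/…. If you prefer not to make the objects publicly readable and writable, one option is to generate presigned URLs and substitute them into the code. The following sketch assumes the oss2 Python SDK (pip install oss2); the AccessKey pair, endpoint, bucket name, and object names are placeholders that you must replace.

import oss2

# Placeholders: replace with your own AccessKey pair, OSS endpoint, and bucket name.
auth = oss2.Auth('<yourAccessKeyId>', '<yourAccessKeySecret>')
bucket = oss2.Bucket(auth, 'https://oss-cn-shenzhen.aliyuncs.com', '<your-bucket>')

# A GET URL for reading a source image and a PUT URL for writing a result, each valid for 3,600 seconds.
print(bucket.sign_url('GET', 'c1.png', 3600))
print(bucket.sign_url('PUT', 'stylized-image.png', 3600))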
GPU application deployment by using Serverless Devs
Perform the following operations in the region where the GPU-accelerated instances reside:
Create a Container Registry Enterprise Edition instance or Personal Edition instance. We recommend that you create an Enterprise Edition instance. For more information, see Step 1: Create a Container Registry Enterprise Edition instance.
Create a namespace and an image repository. For more information, see Step 2: Create a namespace and Step 3: Create an image repository.
- Install Serverless Devs and Docker
- Configure Serverless Devs
Use Serverless Devs to deploy GPU applications
Create a project.
s init devsapp/start-fc-custom-container-event-python3.9 -d fc-gpu-prj
The following sample code shows the directory of the created project:
fc-gpu-prj
├── code
│   ├── app.py       # Function code.
│   └── Dockerfile   # Dockerfile of the image that contains the code.
├── README.md
└── s.yaml           # Project configuration that specifies how the image is deployed in Function Compute.
Go to the project directory.
cd fc-gpu-prj
Modify the files in the directory based on your business requirements.
Edit the s.yaml file.
For more information about the parameters in the YAML file, see YAML specifications.
edition: 1.0.0
name: container-demo
access: default
vars:
  region: cn-shenzhen
services:
  customContainer-demo:
    component: devsapp/fc
    props:
      region: ${vars.region}
      service:
        name: tgpu_tf_service
        internetAccess: true
      function:
        name: tgpu_tf_func
        description: test gpu for tensorflow
        handler: not-used
        timeout: 600
        caPort: 9000
        instanceType: fc.gpu.tesla.1
        gpuMemorySize: 8192
        cpu: 4
        memorySize: 16384
        diskSize: 512
        runtime: custom-container
        customContainerConfig:
          # 1. Make sure that the namespace:demo namespace and the repo:gpu-tf-style-transfer_s repository are created in advance in Alibaba Cloud Container Registry.
          # 2. Change the tag from v0.1 to v0.2 when you update the function and run s build && s deploy again.
          image: registry.cn-shenzhen.aliyuncs.com/demo/gpu-tf-style-transfer_s:v0.1
        codeUri: ./code
      triggers:
        - name: httpTrigger
          type: http
          config:
            authType: anonymous
            methods:
              - GET
Edit the app.py file.
Example:
# -*- coding: utf-8 -*-
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
import os
import random
import urllib.request

import matplotlib as mpl
import numpy as np
import PIL
import tensorflow as tf

class Request(BaseHTTPRequestHandler):
    def upload(self, url, path):
        # Upload the local file to the OSS URL by sending an HTTP PUT request.
        print("enter upload:", url)
        headers = {
            'Content-Type': 'application/octet-stream',
            'Content-Length': os.stat(path).st_size,
        }
        req = urllib.request.Request(url, open(path, 'rb'), headers=headers, method='PUT')
        urllib.request.urlopen(req)

    def tensor_to_image(self, tensor):
        # Convert a float tensor in the [0, 1] range to an 8-bit PIL image.
        tensor = tensor * 255
        tensor = np.array(tensor, dtype=np.uint8)
        if np.ndim(tensor) > 3:
            assert tensor.shape[0] == 1
            tensor = tensor[0]
        return PIL.Image.fromarray(tensor)

    def load_img(self, path_to_img):
        # Load an image, scale its long edge to 512 pixels, and add a batch dimension.
        max_dim = 512
        img = tf.io.read_file(path_to_img)
        img = tf.image.decode_image(img, channels=3)
        img = tf.image.convert_image_dtype(img, tf.float32)
        shape = tf.cast(tf.shape(img)[:-1], tf.float32)
        long_dim = max(shape)
        scale = max_dim / long_dim
        new_shape = tf.cast(shape * scale, tf.int32)
        img = tf.image.resize(img, new_shape)
        img = img[tf.newaxis, :]
        return img

    def do_style_transfer(self):
        mpl.rcParams['figure.figsize'] = (12, 12)
        mpl.rcParams['axes.grid'] = False

        # Use the path of the Object Storage Service (OSS) object in your Alibaba Cloud account. You must have the read and write permissions on the object.
        # Read the content and style images from your OSS buckets.
        content_path = tf.keras.utils.get_file(str(random.randint(0, 100000000)) + ".jpg", 'https://your_public_oss/c1.png')
        style_path = tf.keras.utils.get_file(str(random.randint(0, 100000000)) + ".jpg", 'https://your_public_oss/c2.png')
        content_image = self.load_img(content_path)
        style_image = self.load_img(style_path)
        print("load image ok")

        import tensorflow_hub as hub
        hub_model = hub.load('https://hub.tensorflow.google.cn/google/magenta/arbitrary-image-stylization-v1-256/2')
        # You can package the hub model into the image and load it from a local path to accelerate the processing.
        #hub_model = hub.load('/usr/src/app/style_transfer_model')
        stylized_image = hub_model(tf.constant(content_image), tf.constant(style_image))[0]
        print("load model ok")

        path = "/tmp/" + str(random.randint(0, 100000000)) + ".png"
        self.tensor_to_image(stylized_image).save(path)
        print("generate stylized image ok")

        # Use the path of the Object Storage Service (OSS) object in your Alibaba Cloud account. You must have the read and write permissions on the object.
        # Save the synthesized image to your OSS bucket.
        self.upload("https://your_public_oss/stylized-image.png", path)
        return "transfer ok"

    def style_transfer(self):
        msg = self.do_style_transfer()
        data = {"result": msg}
        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def pong(self):
        data = {"function": "tf_style_transfer"}
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def dispatch(self):
        # Route requests based on the RUN-MODE header: "ping" health checks return immediately,
        # and "normal" requests run the style transfer.
        mode = self.headers.get('RUN-MODE')
        if mode == "ping":
            self.pong()
        elif mode == "normal":
            self.style_transfer()
        else:
            self.pong()

    def do_GET(self):
        self.dispatch()

    def do_POST(self):
        self.dispatch()

if __name__ == "__main__":
    host = ("0.0.0.0", 9000)
    server = HTTPServer(host, Request)
    print("Starting server, listen at: %s:%s" % host)
    server.serve_forever()
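Before you build the image, you can optionally smoke-test the server on a machine where the dependencies are installed: run python3 app.py and send a ping request. The following minimal test client assumes that the server is listening on local port 9000.

import urllib.request

# The ping mode exercises dispatch() without triggering an actual style transfer.
req = urllib.request.Request("http://127.0.0.1:9000", headers={"RUN-MODE": "ping"})
print(urllib.request.urlopen(req).read().decode())  # Expected: {"function": "tf_style_transfer"}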
Edit the Dockerfile file.
Example:
FROM registry.cn-shanghai.aliyuncs.com/serverless_devs/tensorflow:2.7.0-gpu
WORKDIR /usr/src/app
RUN apt-get update
RUN apt-get install -y python3
RUN apt-get install -y python3-pip
RUN pip3 install matplotlib
RUN pip install tensorflow_hub
COPY . .
CMD [ "python3", "-u", "/usr/src/app/app.py" ]
EXPOSE 9000
Build an image.
s build --dockerfile ./code/Dockerfile
Deploy the code to Function Compute.
s deploy
Note: If you run the preceding command repeatedly and the service name and function name remain unchanged, run the use local command to use the local configurations when prompted.
Configure provisioned instances.
s provision put --target 1 --qualifier LATEST
Check whether the provisioned instances are ready.
s provision get --qualifier LATEST
If the value of current is 1, the provisioned GPU-accelerated instances are ready. Example:
[2022-06-21 11:53:19] [INFO] [FC] - Getting provision: tgpu_tf_service.LATEST/tgpu_tf_func
helloworld:
  serviceName: tgpu_tf_service
  functionName: tgpu_tf_func
  qualifier: LATEST
  resource: 188077086902****#tgpu_tf_service#LATEST#tgpu_tf_func
  target: 1
  current: 1
  scheduledActions: (empty array)
  targetTrackingPolicies: (empty array)
  currentError:
  alwaysAllocateCPU: true
Invoke the function.
View online function versions
s invoke
FC Invoke Result: {"function": "tf_style_transfer"}
Perform neural style transfer
s invoke -e '{"method":"GET","headers":{"RUN-MODE":"normal"}}'
generate stylized image ok
enter upload: https://your_public_oss/stylized-image.png # You can download this file to view the synthesis effect.
FC Invoke Result: {"result": "transfer ok"}
Release the GPU-accelerated instances.
s provision put --target 0 --qualifier LATEST
Use the Function Compute console to deploy a GPU-accelerated application
Deploy an image.
Create a Container Registry Enterprise Edition instance or Container Registry Personal Edition instance.
We recommend that you create an Enterprise Edition instance. For more information, see Create a Container Registry Enterprise Edition instance.
Create a namespace and an image repository.
For more information, see the Step 2: Create a namespace and Step 3: Create an image repository sections of the "Use Container Registry Enterprise Edition instances to build images" topic.
Perform the Docker operations as prompted in the Container Registry console, and push an image that is built from the preceding sample app.py and Dockerfile to the image repository. These are the same files as app.py and Dockerfile in the /code directory that is used when you deploy a GPU application by using Serverless Devs.
Create a service. For more information, see the "Create a service" section of the Manage services topic.
Create a function. For more information, see Create a custom container function.
Note: Select GPU Instance for Instance Type and Process HTTP Requests for Request Handler Type.
Change the execution timeout period of the function.
Find the function that you want to manage and click Configure in the Actions column.
In the Environment Variables section, change the value of Execution Timeout Period and click Save.
Note: The processing duration can exceed the default execution timeout period of 60 seconds. Therefore, we recommend that you set Execution Timeout Period to a larger value.
Configure a provisioned GPU-accelerated instance.
On the function details page, click the Auto Scaling tab and click Create Rule.
On the page that appears, configure the following parameters to provision GPU-accelerated instances and click Create.
For more information about how to configure provisioned instances, see Configure provisioned instances and auto scaling rules.
After the configuration is complete, you can check whether the provisioned GPU-accelerated instances are ready in the rule list. Specifically, check whether the value of Current Reserved Instances is the specified number of provisioned instances.
Use cURL to test the function.
On the function details page, click the Triggers tab to view trigger configurations and obtain the trigger endpoint.
Run the following command in the CLI to invoke the GPU-accelerated function:
View online function versions
curl -v "https://tgpu-ff-console-tgpu-ff-console-ajezot****.cn-shenzhen.fcapp.run" {"function": "trans_gpu"}
Perform neural style transfer
curl "https://tgpu-fu-console-tgpu-se-console-zpjido****.cn-shenzhen.fcapp.run" -H 'RUN-MODE: normal' {"result": "transfer ok"}
Verify the result
You can view the synthesized image by visiting the following URL in your browser:
https://cri-zbtsehbrr8******-registry.oss-cn-shenzhen.aliyuncs.com/stylized-image.png
This URL is only an example. Use your actual domain name.
Tutorial for object detection
Object detection technology draws a rectangular bounding box around each object of interest and can track the objects when multiple objects appear at the same time. Object detection applications are used to mark and identify many different types of objects. In the following example, the OpenCV deep neural network (DNN) module is used to perform multi-object detection.
Detection performance
The following figure shows the source image that contains the objects to be detected (left) and the detection result that is generated by using OpenCV DNN (right). The right image also displays the names of the detected objects and the detection confidence.
[Figure: original image (left) and detected objects with labels and confidence scores (right)]
Before you start
For optimal user experience, join the DingTalk group (ID 11721331) and provide the following information:
The organization name, such as the name of your company.
The ID of your Alibaba Cloud account.
The region where you want to use GPU-accelerated instances, such as China (Shenzhen).
The contact information, such as your mobile number, email address, or DingTalk account.
Perform the following operations in the region where the GPU-accelerated instances reside:
Create a Container Registry Enterprise Edition instance or Personal Edition instance. We recommend that you create an Enterprise Edition instance. For more information, see Step 1: Create a Container Registry Enterprise Edition instance.
Create a namespace and an image repository. For more information, see Step 2: Create a namespace and Step 3: Create an image repository.
- Install Serverless Devs and Docker
- Configure Serverless Devs
Compile OpenCV.
OpenCV must be compiled with CUDA support before you can use GPU acceleration. You can obtain a GPU-enabled build in either of the following ways (a quick verification sketch is shown after this list):
(Recommended) Use a precompiled OpenCV build in Docker. Download addresses: opencv-cuda-docker and cuda-opencv.
Manually compile OpenCV. For more information, see the compilation guide.
Upload the images that you want to process to an Object Storage Service (OSS) bucket in the region where the GPU-accelerated instances are located. Make sure that you have the read and write permissions on the objects in the bucket. For more information about how to upload images, see Upload objects. For more information about permissions, see Modify the ACL of a bucket.
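Whichever way you obtain the OpenCV build, you can verify that the resulting cv2 module was compiled with CUDA support before you package it into an image. A minimal check follows; note that on a machine without GPU access (for example, a container started without NVIDIA runtime support), the device count is 0 even for a CUDA-enabled build.

import cv2

print(cv2.__version__)
# A non-zero count means that the DNN module can target CUDA on this machine.
print("CUDA devices:", cv2.cuda.getCudaEnabledDeviceCount())
# The build information lists the CUDA and cuDNN flags that were compiled in.
print([line for line in cv2.getBuildInformation().splitlines() if "CUDA" in line])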
Procedure
Create a project.
s init devsapp/start-fc-custom-container-event-python3.9 -d fc-gpu-prj
The following sample code shows the directory of the created project:
fc-gpu-prj
├── code
│   ├── app.py       # Function code.
│   └── Dockerfile   # Dockerfile of the image that contains the code.
├── README.md
└── s.yaml           # Project configuration that specifies how the image is deployed in Function Compute.
Go to the project directory.
cd fc-gpu-prj
Modify the files in the directory based on your business requirements.
Edit the s.yaml file.
For more information about the parameters in the YAML file, see YAML specifications.
edition: 1.0.0
name: container-demo
access: default
vars:
  region: cn-shenzhen
services:
  customContainer-demo:
    component: devsapp/fc
    props:
      region: ${vars.region}
      service:
        name: tgpu_object_detect_service
        internetAccess: true
      function:
        name: tgpu_object_detect_func
        description: test gpu for opencv
        handler: not-used
        timeout: 600
        caPort: 9000
        memorySize: 16384
        gpuMemorySize: 8192
        instanceType: fc.gpu.tesla.1
        runtime: custom-container
        customContainerConfig:
          # 1. Make sure that the namespace:demo namespace and the repo:gpu-object-detect_s repository are created in advance in Alibaba Cloud Container Registry.
          # 2. Change the tag from v0.1 to v0.2 when you update the function and run s build && s deploy again.
          image: registry.cn-shenzhen.aliyuncs.com/demo/gpu-object-detect_s:v0.1
        codeUri: ./code
      triggers:
        - name: httpTrigger
          type: http
          config:
            authType: anonymous
            methods:
              - GET
Edit the app.py file.
Example:
# -*- coding: utf-8 -*-
from http.server import HTTPServer, BaseHTTPRequestHandler
import json
import os
import urllib.request

import numpy as np
import cv2

class Request(BaseHTTPRequestHandler):
    def download(self, url, path):
        # Download the source image from the OSS URL to a local path.
        print("enter download:", url)
        f = urllib.request.urlopen(url)
        with open(path, "wb") as local_file:
            local_file.write(f.read())

    def upload(self, url, path):
        # Upload the local file to the OSS URL by sending an HTTP PUT request.
        print("enter upload:", url)
        headers = {
            'Content-Type': 'application/octet-stream',
            'Content-Length': os.stat(path).st_size,
        }
        req = urllib.request.Request(url, open(path, 'rb'), headers=headers, method='PUT')
        urllib.request.urlopen(req)

    def core(self):
        # The 20 object classes (plus background) that the Caffe model can detect.
        CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
                   "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
                   "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
                   "sofa", "train", "tvmonitor"]
        COLORS = np.random.uniform(0, 255, size=(len(CLASSES), 3))

        print("[INFO] loading model...")
        prototxt = "/usr/src/app/m.prototxt.txt"
        model = "/usr/src/app/m.caffemodel"
        net = cv2.dnn.readNetFromCaffe(prototxt, model)

        if not cv2.cuda.getCudaEnabledDeviceCount():
            msg = "No CUDA-capable device is detected |"
        else:
            msg = "CUDA-capable device supported |"
            net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
            net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

        path = "/tmp/target.png"
        # Use the path of the OSS object in your Alibaba Cloud account. You must have the read and write permissions on the object.
        # Read the image from your bucket.
        self.download("https://your_public_oss/a.png", path)
        image = cv2.imread(path)
        (h, w) = image.shape[:2]
        blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 0.007843, (300, 300), 127.5)

        print("[INFO] computing object detections...")
        net.setInput(blob)
        detections = net.forward()

        # Loop over the detections and draw a labeled box for each prediction whose confidence exceeds 0.2.
        for i in np.arange(0, detections.shape[2]):
            confidence = detections[0, 0, i, 2]
            if confidence > 0.2:
                idx = int(detections[0, 0, i, 1])
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                (startX, startY, endX, endY) = box.astype("int")
                cv2.rectangle(image, (startX, startY), (endX, endY), COLORS[idx], 2)
                x = startX + 10 if startY - 15 < 15 else startX
                y = startY - 15 if startY - 15 > 15 else startY + 20
                label = "{}: {:.2f}%".format(CLASSES[idx], confidence * 100)
                cv2.putText(image, label, (x, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, COLORS[idx], 2)
                print("[INFO] {}".format(label))
        cv2.imwrite(path, image)

        # Use the path of the OSS object in your Alibaba Cloud account. You must have the read and write permissions on the object.
        # Upload the processed image to your bucket.
        self.upload("https://your_public_oss/target.jpg", path)
        msg = msg + " process image ok!"

        data = {'result': msg}
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def pong(self):
        data = {"function": "object-detection"}
        self.send_response(200)
        self.send_header('Content-type', 'application/json')
        self.end_headers()
        self.wfile.write(json.dumps(data).encode())

    def dispatch(self):
        # Route requests based on the RUN-MODE header: "ping" health checks return immediately,
        # and "normal" requests run the object detection.
        mode = self.headers.get('RUN-MODE')
        if mode == "ping":
            self.pong()
        elif mode == "normal":
            self.core()
        else:
            self.pong()

    def do_GET(self):
        self.dispatch()

    def do_POST(self):
        self.dispatch()

if __name__ == '__main__':
    host = ('0.0.0.0', 9000)
    server = HTTPServer(host, Request)
    print("Starting server, listen at: %s:%s" % host)
    server.serve_forever()
Edit the Dockerfile file.
Example:
FROM registry.cn-shanghai.aliyuncs.com/serverless_devs/opencv-cuda:cuda-10.2-opencv-4.2
WORKDIR /usr/src/app
RUN sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
RUN apt-get clean
RUN apt-get update --fix-missing
RUN apt-get install -y build-essential
RUN apt-get install -y python3
COPY . .
CMD [ "python3", "-u", "/usr/src/app/app.py" ]
EXPOSE 9000
Download the model files that app.py references (m.prototxt.txt and m.caffemodel) and store them in the /code directory.
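To confirm that the downloaded files are intact before you build the image, you can try to load them with OpenCV. A minimal sketch, assuming the files are stored as code/m.prototxt.txt and code/m.caffemodel:

import cv2

# readNetFromCaffe raises an error if either file is missing or corrupted.
net = cv2.dnn.readNetFromCaffe("code/m.prototxt.txt", "code/m.caffemodel")
print("model loaded, layers:", len(net.getLayerNames()))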
Build an image.
s build --dockerfile ./code/Dockerfile
Deploy the code to Function Compute.
s deploy
Note: If you run the preceding command repeatedly and the service name and function name remain unchanged, run the use local command to use the local configurations when prompted.
Configure provisioned instances.
s provision put --target 1 --qualifier LATEST
Check whether the provisioned instances are ready.
s provision get --qualifier LATEST
If the value of current is 1, the provisioned GPU-accelerated instances are ready. Example:
[2021-12-07 02:20:55] [INFO] [S-CLI] - Start ...
[2021-12-07 02:20:55] [INFO] [FC] - Getting provision: tgpu_object_detect_service.LATEST/tgpu_object_detect_func
customContainer-demo:
  serviceName: tgpu_object_detect_service
  functionName: tgpu_object_detect_func
  qualifier: LATEST
  resource: 188077086902****#tgpu_object_detect_service#LATEST#tgpu_object_detect_func
  target: 1
  current: 1
  scheduledActions: (empty array)
  targetTrackingPolicies: (empty array)
Invoke the function.
View online function versions
s invoke
FC Invoke Result: {"function": "object-detection"}
Detect objects
s invoke -e '{"method":"GET","headers":{"RUN-MODE":"normal"}}'
enter upload: https://your_public_oss/target.jpg # You can download this file to view the inference results.
FC Invoke Result: {"result": "CUDA-capable device supported | process image ok!"}
Release the GPU-accelerated instances.
s provision put --target 0 --qualifier LATEST
Verify the result
You can view the image that contains the detected objects by visiting the following URL in your browser:
https://cri-zbtsehbrr8******-registry.oss-cn-shenzhen.aliyuncs.com/target.jpg
This URL is only an example. Use your actual domain name.