Platform for AI (PAI) provides prebuilt images for multiple machine learning frameworks and Compute Unified Device Architecture (CUDA) versions. You can use these images to quickly create an AI development environment in the Deep Learning Containers (DLC), Elastic Algorithm Service (EAS), and Data Science Workshop (DSW) modules of PAI. This topic describes the official PAI images for mainstream frameworks and for common scenarios.
Naming rule
The name of an official PAI image follows a fixed format to encapsulate the essential information about the image. The following table describes the naming rule for official PAI images. We recommend that you use the same naming rule for custom images.
Sample name | Name breakdown | Module identifier
--- | --- | ---

The following identifiers may be used in the name of an official PAI image to indicate the applicable module of the image.
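As an illustration of the naming rule, the following sketch splits an image name into its repository and tag and extracts the components that recur in the sample names shown in this topic (device type, Python version, CUDA version, and operating system). The parsing logic is an assumption inferred from those samples, not part of the official naming specification:

```python
import re

def parse_image_name(name: str) -> dict:
    """Split a PAI image name into repository, tag, and tag components.

    Illustrative only: the component patterns below are inferred from
    sample names such as "deepspeed-training:23.06-gpu-py310-cu121-ubuntu22.04".
    """
    repository, _, tag = name.partition(":")
    parts = {"repository": repository, "tag": tag}
    # Device type, e.g. "cpu" or "gpu".
    if m := re.search(r"\b(cpu|gpu)\b", tag):
        parts["device"] = m.group(1)
    # Python version, e.g. "py310" -> "3.10".
    if m := re.search(r"py(\d)(\d+)", tag):
        parts["python"] = f"{m.group(1)}.{m.group(2)}"
    # CUDA version, e.g. "cu121" -> "12.1".
    if m := re.search(r"cu(\d+?)(\d)\b", tag):
        parts["cuda"] = f"{m.group(1)}.{m.group(2)}"
    # Operating system, e.g. "ubuntu22.04".
    if m := re.search(r"(ubuntu|alinux)([\d.]+)", tag):
        parts["os"] = f"{m.group(1)} {m.group(2)}"
    return parts

print(parse_image_name("deepspeed-training:23.06-gpu-py310-cu121-ubuntu22.04"))
# {'repository': 'deepspeed-training', 'tag': '23.06-gpu-py310-cu121-ubuntu22.04',
#  'device': 'gpu', 'python': '3.10', 'cuda': '12.1', 'os': 'ubuntu 22.04'}
```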
Images for mainstream frameworks
PAI provides prebuilt images for multiple machine learning frameworks. This section describes the official PAI images for mainstream frameworks. You can view the full list of official PAI images on the AI Computing Asset Management > Images page in the PAI console.
TensorFlow
Framework version | CUDA version (GPU only) | Operating system
--- | --- | ---
TensorFlow Serving
Framework version | CUDA version (GPU only) | Operating system
--- | --- | ---
PyTorch
Framework version | CUDA version (GPU only) | Operating system
--- | --- | ---
DeepRec
Framework version | CUDA version (GPU only) | Operating system
--- | --- | ---
 | CUDA 11.4 | Ubuntu 18.04
XGBoost
Framework version | CUDA version (GPU only) | Operating system
--- | --- | ---
XGBoost 1.6.0 | N/A (CPU only) | Ubuntu 18.04
Triton Inference Server
Framework version | CUDA version (GPU only) | Operating system
--- | --- | ---
 | | Ubuntu 20.04
Images for common scenarios
Lingjun Intelligent Computing Service (Serverless Edition)
Image name | Framework | Instance type | CUDA version | Operating system | Supported region | Programming language and version
--- | --- | --- | --- | --- | --- | ---
deepspeed-training:23.06-gpu-py310-cu121-ubuntu22.04 | DeepSpeed | GPU | CUDA 12.1 | Ubuntu 22.04 | China (Ulanqab) | Python 3.10
megatron-training:23.06-gpu-py310-cu121-ubuntu22.04 | Megatron | GPU | CUDA 12.1 | Ubuntu 22.04 | China (Ulanqab) | Python 3.10
nemo-training:23.06-gpu-py310-cu121-ubuntu22.04 | NeMo | GPU | CUDA 12.1 | Ubuntu 22.04 | China (Ulanqab) | Python 3.10
Artificial Intelligence Generated Content (AIGC)
Image name | Framework | Instance type | CUDA version | Operating system | Supported region | Programming language and version
--- | --- | --- | --- | --- | --- | ---
stable-diffusion-webui:3.0 | Stable Diffusion WebUI 3.0 | GPU | CUDA 11.7 | Ubuntu 22.04 | | Python 3.10
stable-diffusion-webui:2.2 | Stable Diffusion WebUI 2.2 | GPU | CUDA 11.7 | Ubuntu 22.04 | | Python 3.10
stable-diffusion-webui:1.1 | Stable Diffusion WebUI 1.1 | GPU | CUDA 11.7 | Ubuntu 22.04 | | Python 3.10
stable-diffusion-webui-env:pytorch1.13-gpu-py310-cu117-ubuntu22.04 | SD-WebUI-ENV | GPU | CUDA 11.7 | Ubuntu 22.04 | | Python 3.10
EAS deployment
The following table describes the official PAI images that you can use in EAS. To view the list of all images, go to the AI Computing Asset Management > Images page in the PAI console. The image addresses in the following table use the China (Hangzhou) region as an example.
Image name | Framework | Image description | Image address
--- | --- | --- | ---
chat-llm-webui:3.0-blade | ChatLLMWebUI 3.0 | Uses Blade to implement inference services based on large language models (LLMs). The services can be accessed by using web applications or API endpoints. |
chatbot-langchain:1.0 | ChatbotLangChain 1.0 | Suitable for chatbot services that use LangChain to integrate an external knowledge base. |
comfyui:0.2-api | ComfyUI 0.2 | Contains ComfyUI and is suitable for asynchronous API services in text-to-image and image-to-image scenarios. |
comfyui:0.2 | ComfyUI 0.2 | Contains ComfyUI and is suitable for text-to-image and image-to-image scenarios. |
comfyui:0.2-cluster | ComfyUI 0.2 | Contains ComfyUI and is suitable for text-to-image and image-to-image scenarios. |
kohya_ss:2.2 | Kohya 2.2 | Uses Kohya to deploy applications based on fine-tuned Stable Diffusion models. |
modelscope-inference:1.9.1 | ModelScope 1.9.1 | Suitable for ModelScope models. |
stable-diffusion-webui:4.2-cluster-webui | StableDiffusionWebUI 4.2 | Contains Stable Diffusion WebUI and is suitable for text-to-image and image-to-image scenarios. Services deployed from this image support concurrent user access and resource isolation between users. |
stable-diffusion-webui:4.2-api | StableDiffusionWebUI 4.2 | Contains Stable Diffusion WebUI and is suitable for asynchronous API services in text-to-image and image-to-image scenarios. |
stable-diffusion-webui:4.2-standard | StableDiffusionWebUI 4.2 | Contains Stable Diffusion WebUI and is suitable for text-to-image and image-to-image scenarios. |
tensorflow-serving:2.14.1 | TensorFlowServing 2.14.1 | Contains TensorFlow Serving and is suitable for inference services based on TensorFlow models. This image supports only CPU instances. |
tensorflow-serving:2.14.1-gpu | TensorFlowServing 2.14.1 | Contains TensorFlow Serving and is suitable for inference services based on TensorFlow models. This image supports only GPU instances. |
chat-llm-webui:3.0 | ChatLLMWebUI 3.0 | Uses Hugging Face to implement inference services based on LLMs. These services can be accessed by using web applications or API endpoints. |
chat-llm-webui:3.0-vllm | ChatLLMWebUI 3.0 | Uses vLLM to implement inference services based on LLMs. These services can be accessed by using web applications or API endpoints. |
huggingface-inference:1.0-transformers4.33 | Transformers 4.33 | Suitable for Hugging Face models. |
tritonserver:23.11-py3 | TritonServer 23.11 | Contains Triton Inference Server and is suitable for inference services. |
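Because image addresses are region-specific, scripts often assemble them from a region ID and an image name. The sketch below shows that pattern only; the registry host and repository path used here are placeholder assumptions for illustration, not real PAI addresses, so substitute the actual address shown in the PAI console for your region:

```python
def image_address(
    region: str,
    name: str,
    # PLACEHOLDER template: not a real PAI registry host. Replace it with
    # the actual address displayed in the PAI console for your region.
    registry_template: str = "registry.{region}.example.com/pai-eas/{name}",
) -> str:
    """Build a region-specific address for an official EAS image."""
    return registry_template.format(region=region, name=name)

# Example with the China (Hangzhou) region ID used in this topic:
print(image_address("cn-hangzhou", "chat-llm-webui:3.0"))
# registry.cn-hangzhou.example.com/pai-eas/chat-llm-webui:3.0
```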