The installation package of Blade includes a wheel package and an SDK. In the CPU and Compute Unified Device Architecture (CUDA) environments, you need to install the wheel package for model optimization and the SDK for model inference. On terminal devices, you need to install only the wheel package for model optimization. After model optimization, Blade exports a mobile neural network (MNN) model that can be used for model inference. This topic describes how to install Blade on different types of devices.
Limits
Blade supports only the following operating systems, Python versions, device types, and framework versions:
Operating system: Linux
Python versions: 3.6 to 3.8
Device types: GPUs whose CUDA versions range from 10.0 to 11.3, CPUs, and terminal devices such as MNN devices
Framework versions: TensorFlow 1.15, 2.4, and 2.7, PyTorch 1.6.0 and later, and TensorRT 8.0 and later
SDK for C++: cxx11 and pre-cxx11 application binary interfaces (ABIs), and Red-Hat Package Manager (RPM), Debian Software Package (DEB), and TGZ formats
Usage notes
When you install Blade, take note of the following items:
Blade does not automatically install TensorFlow or PyTorch. Make sure that a supported framework is installed in your environment before you install Blade. A quick version check is shown after this list.
Blade provides installation packages for different device types and CUDA versions. We recommend that you install a wheel package for Blade based on your device type and CUDA version.
The official PyTorch 1.6.0 does not support CUDA 10.0. To resolve this issue, you can use the wheel package provided by Platform for AI (PAI). For other PyTorch versions, use the official PyTorch installation package.
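For example, you can run the following commands to check which framework versions are installed. This is a minimal sketch, assuming that python3 points to the Python environment in which you plan to install Blade:
python3 -c "import tensorflow as tf; print(tf.__version__)"
python3 -c "import torch; print(torch.__version__)"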
Procedure
The procedure for installing Blade varies based on the device type. The following section describes the installation procedures for different types of devices:
If you install Blade in a CUDA environment, you must install TensorFlow or PyTorch, the wheel package and SDK of Blade, and TensorRT. Perform the following steps:
Install the framework.
If your model uses the TensorFlow framework, you can install the TensorFlow package provided by the TensorFlow community. If you want a TensorFlow build that integrates TensorRT, you can use the precompiled TensorFlow package provided by PAI. For more information, see the "Install TensorFlow" section of this topic.
If your model uses the PyTorch framework, you can install the PyTorch package provided by the PyTorch community. If you want PyTorch 1.6.0 with CUDA 10.0 support, you can use the precompiled PyTorch package provided by PAI. For more information, see the "Install PyTorch" section of this topic.
Install a wheel package for Blade. For more information, see the "Install a wheel package for Blade" section of this topic.
Download and install the SDK for Blade. For more information, see the "Install the SDK for Blade" section of this topic.
Obtain an access token. For more information, see the "Obtain an access token" section of this topic.
To use Blade for model optimization on terminal devices, you must install TensorFlow, MNN, and a wheel package. For more information, see the "Install Blade for model optimization on terminal devices" section of this topic.
Install TensorFlow
Blade supports TensorFlow 1.15, 2.4, and 2.7. Make sure that the versions of Python and dependencies meet the requirements. For more information, see the "Limits" section of this topic.
You can install a TensorFlow package provided by the TensorFlow community by using one of the following commands:
# TensorFlow 1.15 for GPU
pip3 install tensorflow-gpu==1.15.0
# TensorFlow 2.4 for GPU
pip3 install tensorflow-gpu==2.4.0
# TensorFlow 1.15 for CPU
pip3 install tensorflow==1.15.0
# TensorFlow 2.4 for CPU
pip3 install tensorflow==2.4.0
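If you installed a GPU version, you can check whether TensorFlow detects the GPU before you continue. The following check is a minimal sketch; tf.test.is_gpu_available() is deprecated in TensorFlow 2.x but still works in the versions listed above:
python3 -c "import tensorflow as tf; print(tf.__version__, tf.test.is_gpu_available())"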
Install PyTorch
Blade supports PyTorch 1.6.0 and later. You can install PyTorch for a specific device type or CUDA version by following the installation guide on the PyTorch official website. For example, to install PyTorch 1.7.1 built for CUDA 11.0, run the following command:
pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 \
-f https://download.pytorch.org/whl/torch/
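You can then verify that the installed PyTorch build detects CUDA. This is a minimal check, assuming that python3 points to the environment in which PyTorch was installed:
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"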
Install a wheel package for Blade
The wheel package for Blade varies based on the model framework and version, device type, and CUDA version. You must install an appropriate wheel package based on your actual environment. To install the wheel package for the latest version of Blade, run one of the following commands. For information about the installation commands for historical versions, see Appendix: Installation commands and SDK download URLs for PAI-Blade of earlier versions.
CPU
TensorFlow 1.15.0 and PyTorch 1.6.0
# pai_blade_cpu
pip3 install pai_blade_cpu==3.27.0+1.15.0.1.6.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# tensorflow_blade_cpu
pip3 install tensorflow_blade_cpu==3.27.0+1.15.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# torch_blade_cpu
pip3 install torch_blade_cpu==3.27.0+1.6.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
TensorFlow 2.4.0 and PyTorch 1.7.1
# pai_blade_cpu
pip3 install pai_blade_cpu==3.27.0+2.4.0.1.7.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# tensorflow_blade_cpu
pip3 install tensorflow_blade_cpu==3.27.0+2.4.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# torch_blade_cpu
pip3 install torch_blade_cpu==3.27.0+1.7.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
PyTorch 1.8.1
# pai_blade_cpu
pip3 install pai_blade_cpu==3.27.0+1.8.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# torch_blade_cpu
pip3 install torch_blade_cpu==3.27.0+1.8.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
PyTorch 1.9.0
# pai_blade_cpu
pip3 install pai_blade_cpu==3.27.0+1.9.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# torch_blade_cpu
pip3 install torch_blade_cpu==3.27.0+1.9.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
TensorFlow 2.7.0 and PyTorch 1.10.0
# pai_blade_cpu
pip3 install pai_blade_cpu==3.27.0+2.7.0.1.10.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# tensorflow_blade_cpu
pip3 install tensorflow_blade_cpu==3.27.0+2.7.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
# torch_blade_cpu
pip3 install torch_blade_cpu==3.27.0+1.10.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
CUDA 11.0
TensorFlow 2.4.0 and PyTorch 1.7.1
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu110.2.4.0.1.7.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# tensorflow_blade_gpu
pip3 install tensorflow_blade_gpu==3.27.0+cu110.2.4.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# torch_blade
pip3 install torch_blade==3.27.0+1.7.1.cu110 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
CUDA 11.1
PyTorch 1.8.1
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu111.1.8.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# torch_blade
pip3 install torch_blade==3.27.0+1.8.1.cu111 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
PyTorch 1.9.0
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu111.1.9.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# torch_blade
pip3 install torch_blade==3.27.0+1.9.0.cu111 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
PyTorch 1.10.0
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu111.1.10.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# torch_blade
pip3 install torch_blade==3.27.0+1.10.0.cu111 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
CUDA 11.2
TensorFlow 2.7.0
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu112.2.7.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# tensorflow_blade_gpu
pip3 install tensorflow_blade_gpu==3.27.0+cu112.2.7.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
CUDA 11.3
PyTorch 1.11.0
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu113.1.11.0 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# torch_blade
pip3 install torch_blade==3.27.0+1.11.0.cu113 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
PyTorch 1.12.1
# pai_blade_gpu
pip3 install pai_blade_gpu==3.27.0+cu113.1.12.1 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
# torch_blade
pip3 install torch_blade==3.27.0+1.12.1.cu113 -f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo_ext.html
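After the wheel package is installed, you can confirm that it can be imported into Python. The following check is a minimal sketch; the module names blade and torch_blade are assumptions based on the package names above and may differ in your installation:
# Check the pai_blade wheel (assumed module name: blade).
python3 -c "import blade; print(blade.__file__)"
# Check the torch_blade wheel (assumed module name: torch_blade).
python3 -c "import torch_blade; print(torch_blade.__file__)"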
Install the SDK for Blade
The SDK for Blade supports only the GNU Compiler Collection (GCC) in Linux. For your convenience, Blade provides two types of SDK packages that contain different GCC ABIs. For more information, see the official GCC ABI documentation.
If your GCC version is earlier than 5.1 or the _GLIBCXX_USE_CXX11_ABI=0 macro is configured, install the SDK that contains the pre-cxx11 ABI.
If your GCC version is 5.1 or later and the _GLIBCXX_USE_CXX11_ABI=0 macro is not configured, install the SDK that contains the cxx11 ABI.
PAI-Blade SDK also provides the following package formats for different Linux distributions:
SDK package in the RPM format: applicable to CentOS and Red Hat, and can be installed by using the rpm command.
SDK package in the DEB format: applicable to Ubuntu and Debian, and can be installed by using the dpkg command.
SDK package in the TGZ format: applicable to various Linux distributions, and can be used after decompression.
For example, you can run one of the following commands to install an SDK package that contains the pre-cxx11 ABI and supports CUDA 11.0 for Blade V3.23.0:
SDK package in the RPM format
rpm -ivh https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/3.23.0/py3.6.8_cu110_tf2.4.0_torch1.7.1_abiprecxx11/blade_cpp_sdk_gpu-3.23.0-Linux.rpm
SDK package in the DEB format
wget https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/3.23.0/py3.6.8_cu110_tf2.4.0_torch1.7.1_abiprecxx11/blade_cpp_sdk_gpu-3.23.0-Linux.deb
dpkg -i blade_cpp_sdk_gpu-3.23.0-Linux.deb
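If you use the TGZ format instead, you only need to extract the archive. The following command is a sketch; the file name and the assumption that the archive mirrors the directory layout shown below are illustrative, so check them against the actual download URL for your version:
tar -xzf blade_cpp_sdk_gpu-3.23.0-Linux.tar.gz -C /usr/local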
By default, the SDK package in the RPM or DEB format is installed in the /usr/local directory. The following section shows the directory structure of the SDK package after installation or decompression:
/usr/local/
├── bin
│ ├── disc_compiler_main
│ └── tao_compiler_main
└── lib
├── libral_base_context.so
├── libtao_ops.so
├── libtf_blade.so
├── libtorch_blade.so
└── mlir_disc_builder.so
When you deploy the model, the dynamic-link libraries in the /usr/local/lib subdirectory are used.
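For example, before you run an inference program that depends on the SDK, you can make these libraries visible to the dynamic loader. This is a minimal sketch for the default installation path shown above:
# Make the Blade runtime libraries in /usr/local/lib visible to the dynamic loader.
export LD_LIBRARY_PATH=/usr/local/lib:${LD_LIBRARY_PATH}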
The following section shows the download URLs for the latest version of the SDK for C++. For information about the download URLs of historical versions, see Appendix: Installation commands and SDK download URLs for PAI-Blade of earlier versions.
The SDK for C++ is provided in both the cxx11 ABI and pre-cxx11 ABI for the following environments: CPU, CUDA 11.0, CUDA 11.1, CUDA 11.2, and CUDA 11.3.
Obtain an access token
The SDK for Blade requires an access token for authentication and can run only on Alibaba Cloud. To obtain an access token for a free trial, you can join the DingTalk group of Blade users (group ID: 21946131).
Install Blade for model optimization on terminal devices
Blade allows you to perform model optimization for terminal devices. The optimized model is exported as an MNN model converted from your TensorFlow model. You must install TensorFlow and MNN in advance. Run the following command to install TensorFlow and MNN:
pip3 install tensorflow==1.15 MNN==1.1.0
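You can confirm that both packages were installed into the same Python environment before you continue. This is a minimal sketch that only checks that the modules can be imported:
python3 -c "import tensorflow as tf; import MNN; print(tf.__version__, MNN.__file__)"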
You can use Blade of the GPU or CPU version for on-device model optimization. Run one of the following commands to install a wheel package for Blade:
If a GPU is installed, run the following command:
pip3 install pai-blade-gpu \
-f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html
If no GPU is installed, run the following command:
pip3 install pai-blade-cpu \
-f https://pai-blade.oss-cn-zhangjiakou.aliyuncs.com/release/repo.html