Machine Learning Platform for AI (PAI) provides AI accelerators for training acceleration and inference acceleration. AI accelerators improve the speed and stability of AI training and inference by using methods such as dataset acceleration, computing acceleration, optimization algorithms, scheduling algorithms, and resource optimization. You can use AI accelerators to improve the efficiency of AI computing. This topic describes the features of AI accelerators.
Features
The following table describes the technical methods and capabilities supported by AI accelerators.
Technical method | Capability
EPL (large-scale framework for distributed training) | An efficient and easy-to-use framework for distributed model training that enables high-performance training at a low cost.
Rapidformer (Transformer training acceleration) | A training optimization tool for PyTorch Transformer models. You can enable one or more optimization technologies to improve training speed and efficiency.
PAI-Blade (general inference optimization) | A general-purpose inference optimization tool that integrates multiple optimization technologies to improve the inference performance of trained models.
Use AI accelerators
You can refer to the following documents to quickly get started with AI accelerators.
EPL (large-scale framework for distributed training)
PAI-EPL is an efficient and easy-to-use framework that enables high-performance distributed model training at a low cost. For more information about how to use EPL to accelerate training, see Use EPL to accelerate AI model training.
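The following minimal sketch is based on the data-parallelism example in the open-source EPL (Easy Parallel Library) project. It assumes TensorFlow 1.x, and the model below is only a placeholder; see Use EPL to accelerate AI model training for the authoritative usage.

```python
# Hedged sketch of data-parallel training with EPL, following the
# open-source EPL examples. Assumes TensorFlow 1.x; the model is a placeholder.
import tensorflow as tf
import epl

epl.init()                                                # initialize EPL
epl.set_default_strategy(epl.replicate(device_count=1))   # replicate the model per GPU (data parallelism)

# Placeholder model: one dense layer trained on random data.
features = tf.random.uniform([32, 16])
labels = tf.random.uniform([32, 1])
logits = tf.layers.dense(features, 1)
loss = tf.losses.mean_squared_error(labels, logits)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.train.MonitoredTrainingSession() as sess:
    for _ in range(10):
        sess.run(train_op)
```

With this setup, the rest of the training script stays unchanged; EPL handles the distribution strategy declared at the top.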
Rapidformer (Transformer training acceleration)
PAI-Rapidformer is a training optimization tool for PyTorch Transformer models. You can enable one or more of its optimization technologies to improve the speed and efficiency of model training. For more information about how to use PAI-Rapidformer, see Rapidformer overview.
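Rapidformer's own interface is documented in Rapidformer overview. As a rough illustration only (not the Rapidformer API), the sketch below combines two optimizations that Transformer training accelerators of this kind commonly bundle, mixed-precision training and activation checkpointing, using plain PyTorch primitives. The model, shapes, and hyperparameters are placeholders, and a CUDA-capable GPU is assumed.

```python
# Illustrative sketch (not the Rapidformer API): mixed precision plus
# activation checkpointing for a Transformer layer, in plain PyTorch.
import torch
from torch.cuda.amp import autocast, GradScaler
from torch.utils.checkpoint import checkpoint

model = torch.nn.TransformerEncoderLayer(d_model=256, nhead=8).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = GradScaler()  # loss scaling for mixed-precision training

inputs = torch.randn(64, 8, 256, device="cuda")  # (seq_len, batch, d_model), placeholder data

optimizer.zero_grad()
with autocast():  # run the forward pass in mixed precision
    # Activation checkpointing: recompute activations during backward
    # instead of storing them, trading compute for memory.
    out = checkpoint(model, inputs, use_reentrant=False)
    loss = out.float().pow(2).mean()  # dummy loss for illustration
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```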
PAI-Blade (general inference optimization)
PAI-Blade is a general-purpose inference optimization tool provided by PAI that integrates multiple optimization technologies. You can use PAI-Blade to optimize a trained model so that it delivers optimal inference performance. For more information about how to use PAI-Blade, see Blade overview.
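The following hedged sketch shows the general shape of the PAI-Blade Python interface for a PyTorch model, as described in the PAI-Blade documentation. The model file, input shape, and optimization level are placeholders; refer to Blade overview for the authoritative usage on your installed wheel.

```python
# Hedged sketch of optimizing a TorchScript model with PAI-Blade.
# The model path and test input are placeholders.
import torch
import blade

model = torch.jit.load("traced_model.pt").cuda().eval()        # placeholder TorchScript model
test_data = [(torch.randn(1, 3, 224, 224, device="cuda"),)]    # sample inputs used during optimization

# blade.optimize returns the optimized model, the optimization config
# that was applied, and a report describing the optimizations.
optimized_model, opt_spec, report = blade.optimize(
    model,
    "o1",                  # optimization level
    device_type="gpu",
    test_data=test_data,
)
print(report)
torch.jit.save(optimized_model, "traced_model_blade.pt")
```

The optimized model is saved as a regular TorchScript artifact, so it can be loaded and served the same way as the original model.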