Platform for AI (PAI) provides AutoML to help you search for optimal hyperparameter combinations based on the search policies that you specify. You can use AutoML to improve the efficiency of model tuning.
Background information
In machine learning, hyperparameters are the parameters that you set before training to control the training process, as opposed to the model parameters that are learned from data.
Hyperparameter optimization (HPO) is the process of finding the optimal hyperparameters. If a model has multiple hyperparameters, you can view them as a multi-dimensional vector. HPO searches the value ranges of all dimensions of this vector to find the value that delivers the best model performance.
For example, a model has two hyperparameters A and B. Possible values for A are a, b, and c, and possible values for B are d and e. In this case, the model has 3 × 2 = 6 hyperparameter combinations, and HPO finds the combination of A and B that delivers the best model performance. To obtain the optimal combination, train a model with each of the six combinations on the same training dataset, and then compare the performance of the resulting models on the same test dataset, as shown in the following sketch.
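The following Python sketch illustrates this exhaustive search. The dataset, model, and hyperparameter values are illustrative placeholders that stand in for A and B; AutoML performs this kind of search, and smarter variants of it, for you.

```python
# Minimal illustration of exhaustive HPO over two hyperparameters.
# The model and hyperparameter values are placeholders for A and B.
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# The same training and test datasets are used for every combination.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hyperparameter A has three candidate values, B has two: 3 x 2 = 6 combinations.
search_space = {
    "n_estimators": [10, 50, 100],   # hyperparameter A
    "max_depth": [3, 5],             # hyperparameter B
}

best_score, best_params = -1.0, None
for n_estimators, max_depth in product(*search_space.values()):
    model = RandomForestClassifier(
        n_estimators=n_estimators, max_depth=max_depth, random_state=0
    )
    model.fit(X_train, y_train)            # train on the same training set
    score = model.score(X_test, y_test)    # compare on the same test set
    if score > best_score:
        best_score, best_params = score, (n_estimators, max_depth)

print(f"Best combination: {best_params}, accuracy: {best_score:.4f}")
```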
HPO in AutoML
Hyperparameter fine-tuning is complex because a model can have a large number of hyperparameters with different data types and value ranges. For example, some hyperparameters of a model may be integers while others are floating-point numbers. Manually tuning such hyperparameters consumes a large amount of time and computing resources, so an automated system is required to complete the task. The HPO feature of AutoML can automatically fine-tune these hyperparameters for you.
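To make the mix of data types concrete, the following sketch defines a search space that contains integer, floating-point, and categorical hyperparameters and randomly samples one combination from it. The dictionary format and hyperparameter names are illustrative assumptions, not the actual configuration syntax of PAI AutoML.

```python
# Sketch of a mixed-type hyperparameter search space and random sampling.
# The format below is illustrative only; it is not the PAI AutoML syntax.
import random

search_space = {
    "num_layers":    {"type": "int",    "range": (2, 8)},         # integer hyperparameter
    "learning_rate": {"type": "float",  "range": (1e-4, 1e-1)},   # floating-point hyperparameter
    "optimizer":     {"type": "choice", "values": ["sgd", "adam"]},
}

def sample(space):
    """Draw one hyperparameter combination from the search space."""
    params = {}
    for name, spec in space.items():
        if spec["type"] == "int":
            params[name] = random.randint(*spec["range"])
        elif spec["type"] == "float":
            low, high = spec["range"]
            params[name] = random.uniform(low, high)
        else:
            params[name] = random.choice(spec["values"])
    return params

# A manual tuner would have to try many such samples by hand; AutoML
# automates the sampling, training, and comparison steps.
print(sample(search_space))
```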
You can use AutoML to fine-tune hyperparameters in a simple, efficient, and accurate manner. AutoML provides the following benefits:
Simplified fine-tuning: AutoML greatly simplifies the process of hyperparameter fine-tuning and saves time by using automated tools.
Improved model quality: AutoML integrates multiple algorithms of PAI to quickly find the optimal hyperparameter combination. This helps you train models in a more accurate and efficient manner.
Reduced computing resources: AutoML evaluates model performance during training to determine whether to terminate the current training and move on to another hyperparameter combination. This way, AutoML obtains the optimal hyperparameter combination without evaluating all combinations to completion, which saves computing resources. The sketch after this list illustrates the idea.
Flexible use of computing power: You can use Deep Learning Containers (DLC) and MaxCompute resources in AutoML in a convenient and flexible manner.
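The following sketch illustrates the early-stopping idea behind the "Reduced computing resources" benefit: a trial whose intermediate metric clearly trails the best trial seen so far is terminated before it completes. The functions, metric values, and threshold are illustrative assumptions and do not represent the exact early-stopping policy that AutoML implements.

```python
# Illustration of early stopping a trial: stop evaluating a hyperparameter
# combination when its intermediate metric clearly lags behind the best trial
# seen so far. This sketches the concept only, not the AutoML policy.

def run_trial(train_one_epoch, max_epochs, best_history, tolerance=0.05):
    """Train one trial, reporting a metric per epoch; stop early if the metric
    falls more than `tolerance` below the best trial at the same epoch."""
    history = []
    for epoch in range(max_epochs):
        metric = train_one_epoch(epoch)          # e.g. validation accuracy
        history.append(metric)
        if epoch < len(best_history) and metric < best_history[epoch] - tolerance:
            print(f"Early stop at epoch {epoch}: {metric:.3f} "
                  f"vs best {best_history[epoch]:.3f}")
            break
    return history

# Example: the second trial is stopped early because it trails the first one.
best = run_trial(lambda e: 0.70 + 0.03 * e, max_epochs=5, best_history=[])
run_trial(lambda e: 0.68 + 0.01 * e, max_epochs=5, best_history=best)
```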
Scenarios
AutoML is suitable for all hyperparameter fine-tuning scenarios in machine learning. Common scenarios include:
Binary classification tasks, such as determining whether a user is a paying user.
Regression tasks, such as estimating the amount that a user will pay within seven days.
Clustering tasks, such as determining the number of branches of a cosmetics brand in a city.
Recommendation tasks, such as fine-tuning ranking and retrieval models, or improving the area under the curve (AUC) metric.
Deep learning tasks, such as improving the accuracy of image multi-classification and video multi-classification.
Reference
(Recommended) This topic describes how AutoML works and the relationship between experiments, trials, and training tasks, which helps you become familiar with the concepts and facilitates configuration.
This topic describes how to create an experiment in the PAI console and how to configure key parameters.
This topic provides use cases that show how to use AutoML to fine-tune hyperparameters.