Platform for AI: Limits and usage notes of AutoML

Last Updated: Mar 18, 2024

This topic describes the limits and usage notes of AutoML, including the supported regions, supported search algorithms, and scenarios for which the algorithms are suitable. The algorithms include Tree-structured Parzen Estimator (TPE), grid search, random search, evolutionary algorithms (EA), Gaussian Process (GP) for Bayesian Optimization (BO), and Population Based Training (PBT).

Supported regions

You can use AutoML in the following regions:

China (Hangzhou), China (Shanghai), China (Beijing), China (Shenzhen), and China (Hong Kong).

Note

You can select a region from the drop-down list in the upper part of the Platform for AI (PAI) console.

Supported search algorithms

The following section describes the search algorithms that AutoML supports and the scenarios for which each algorithm is suitable. An illustrative code sketch for each algorithm is provided after the list.

  • TPE: a lightweight algorithm that does not require additional dependencies. TPE supports all search space types and is the default algorithm for hyperparameter optimization (HPO). TPE can be used to resolve complex, nonlinear, high-dimensional problems that require a large amount of computation. However, TPE cannot model interactions between different hyperparameters. For more information, see Algorithms for Hyper-Parameter Optimization.

  • Grid search: a search algorithm that evenly divides the search space into a grid and traverses all possible combinations to find the best one. If the search space contains only a small number of combinations, you can use grid search to exhaustively find the optimal combination.

  • Random search: a search algorithm that randomly generates hyperparameter combinations. Similar to grid search, the algorithm divides the search space into a grid, but it randomly selects a hyperparameter combination in each trial instead of traversing all combinations. For nonlinear, high-dimensional problems that require a large amount of computation, you can use random search to narrow the scope of a subsequent grid search and improve efficiency.

  • EA: a search algorithm that is developed based on the paper Large-Scale Evolution of Image Classifiers. The algorithm initializes a population from the search space and, in each generation, selects better-performing hyperparameter combinations as parents and applies operations such as recombination and mutation to generate offspring. A mutation may, for example, modify a hyperparameter or increase or decrease the number of network layers. EA requires a large number of trials, but its logic is simple and easy to extend to new scenarios.

  • GP: a BO method that uses a Gaussian process as a surrogate model of the loss. Bayesian optimization fits the surrogate to the metrics obtained from completed trials and uses the resulting posterior distribution to select the next combinations to evaluate. As more trial data is collected, the posterior distribution becomes increasingly accurate, which allows the algorithm to determine which parts of the search space are worth exploring.

  • PBT: an algorithm that is developed based on the paper Population Based Training of Neural Networks. PBT is an asynchronous optimization algorithm that is designed for fixed computing resources and improves model performance by jointly optimizing a fixed population of models and their hyperparameters. Instead of performing a global search for a single fixed optimal combination, PBT continually adapts hyperparameters during training to arrive at the optimal combination.
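
The following sketch shows a TPE workflow by using the open-source hyperopt library, which implements the algorithm from the cited paper. This is illustrative only and is not the PAI AutoML API; the objective function and search space below are hypothetical placeholders.

```python
# Minimal TPE sketch using hyperopt (pip install hyperopt).
from hyperopt import fmin, tpe, hp, STATUS_OK

def objective(params):
    # Hypothetical objective: in practice, train a model with `params`
    # and return its validation loss. A toy quadratic is used here.
    loss = (params["lr"] - 0.01) ** 2 + (params["batch_size"] - 64) ** 2 * 1e-6
    return {"loss": loss, "status": STATUS_OK}

# TPE supports mixed search space types, such as log-uniform and quantized.
space = {
    "lr": hp.loguniform("lr", -7, 0),              # learning rate in [e^-7, 1]
    "batch_size": hp.quniform("batch_size", 16, 256, 16),
}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=50)
print(best)
```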
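
The following plain-Python sketch illustrates grid search: it enumerates every combination in a small, discrete search space and keeps the best one. The search space and objective are hypothetical.

```python
# Minimal grid search sketch: exhaustively traverse all combinations.
import itertools

search_space = {
    "lr": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
}

def objective(lr, batch_size):
    # Hypothetical validation loss; replace with a real training run.
    return (lr - 0.01) ** 2 + abs(batch_size - 64) * 1e-4

best_params, best_loss = None, float("inf")
# Traverses all 3 x 3 = 9 combinations in the grid.
for values in itertools.product(*search_space.values()):
    params = dict(zip(search_space.keys(), values))
    loss = objective(**params)
    if loss < best_loss:
        best_params, best_loss = params, loss

print(best_params, best_loss)
```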
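
The following sketch illustrates random search with a fixed trial budget. Unlike grid search, the number of trials is independent of the size of the search space. All names and the objective are illustrative.

```python
# Minimal random search sketch: sample combinations independently at random.
import random

def sample_params():
    return {
        "lr": 10 ** random.uniform(-4, -1),        # log-uniform learning rate
        "batch_size": random.choice([32, 64, 128, 256]),
    }

def objective(params):
    # Hypothetical validation loss; replace with a real training run.
    return (params["lr"] - 0.01) ** 2 + abs(params["batch_size"] - 64) * 1e-4

best_params, best_loss = None, float("inf")
for _ in range(30):                                # fixed trial budget
    params = sample_params()
    loss = objective(params)
    if loss < best_loss:
        best_params, best_loss = params, loss

print(best_params, best_loss)
```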
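
The following sketch illustrates the evolutionary loop described above: better-performing combinations survive each generation, and mutated copies of the survivors form the offspring. The mutation rules and objective are hypothetical placeholders, not PAI's implementation.

```python
# Minimal evolutionary-search sketch: selection plus mutation per generation.
import random

def objective(params):
    # Hypothetical validation loss; replace with a real training run.
    return (params["lr"] - 0.01) ** 2 + abs(params["layers"] - 4) * 1e-3

def random_params():
    return {"lr": 10 ** random.uniform(-4, -1), "layers": random.randint(1, 8)}

def mutate(params):
    child = dict(params)
    if random.random() < 0.5:
        child["lr"] *= 10 ** random.uniform(-0.5, 0.5)      # perturb a hyperparameter
    else:
        # Increase or decrease the number of network layers.
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    return child

population = [random_params() for _ in range(10)]
for generation in range(5):
    ranked = sorted(population, key=objective)
    survivors = ranked[: len(ranked) // 2]                   # selection
    offspring = [mutate(random.choice(survivors)) for _ in survivors]  # mutation
    population = survivors + offspring

print(min(population, key=objective))
```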
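
The following sketch illustrates GP-based Bayesian optimization over a single hyperparameter, using scikit-learn's GaussianProcessRegressor as the surrogate model. The lower-confidence-bound acquisition rule used here is one common choice among several; the objective is hypothetical, and this is not the PAI implementation.

```python
# Minimal Bayesian-optimization sketch with a Gaussian process surrogate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(lr):
    # Hypothetical noisy validation loss as a function of the learning rate.
    return (lr - 0.01) ** 2 + 1e-4 * np.random.randn()

rng = np.random.default_rng(0)
X = rng.uniform(1e-4, 0.1, size=(3, 1))            # a few initial trials
y = np.array([objective(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)                                    # posterior from observed trials
    candidates = rng.uniform(1e-4, 0.1, size=(256, 1))
    mu, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmin(mu - 1.96 * std)]    # lower confidence bound
    X = np.vstack([X, nxt])
    y = np.append(y, objective(nxt[0]))

print(X[np.argmin(y)], y.min())
```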
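
The following sketch illustrates the PBT-style exploit-and-explore loop: a fixed population trains in steps, and underperforming members periodically copy the state of a top performer and then perturb its hyperparameters before training continues. The training step and all names are hypothetical stand-ins.

```python
# Minimal PBT-style sketch: exploit (copy) then explore (perturb).
import random

POP, STEPS = 4, 5

def train_step(member):
    # Hypothetical partial training: the score drifts toward a value
    # determined by the member's current learning rate.
    target = -abs(member["lr"] - 0.01) * 100
    member["score"] += 0.3 * (target - member["score"])

population = [{"lr": 10 ** random.uniform(-4, -1), "score": -10.0}
              for _ in range(POP)]

for step in range(STEPS):
    for member in population:
        train_step(member)
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    for weak in ranked[POP // 2:]:                  # bottom half of the population
        strong = random.choice(ranked[: POP // 2])  # a top performer
        weak["lr"] = strong["lr"]                   # exploit: copy hyperparameters
        weak["score"] = strong["score"]             # stands in for copying weights
        weak["lr"] *= random.choice([0.8, 1.2])     # explore: perturb, keep training

print(max(population, key=lambda m: m["score"]))
```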

References

  • Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. "Algorithms for Hyper-Parameter Optimization." NeurIPS 2011.
  • Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y. L., Tan, J., Le, Q. V., and Kurakin, A. "Large-Scale Evolution of Image Classifiers." ICML 2017.
  • Jaderberg, M., Dalibard, V., Osindero, S., et al. "Population Based Training of Neural Networks." arXiv:1711.09846, 2017.